Search results for: Direct Methods
3494 Removal of Pharmaceutical Compounds by a Sequential Treatment of Ozonation Followed by Fenton Process: Influence of the Water Matrix
Authors: Almudena Aguinaco, Olga Gimeno, Fernando J. Beltrán, Juan José P. Sagasti
Abstract:
A sequential treatment of ozonation followed by a Fenton or photo-Fenton process, using black light lamps (365 nm) in the latter case, has been applied to remove a mixture of pharmaceutical compounds and the generated by-products, both in ultrapure water and in secondary treated wastewater. The scientific-technological innovation of this study stems from the in situ generation of hydrogen peroxide by the direct ozonation of pharmaceuticals, which can later be used in the application of Fenton and photo-Fenton processes. The compounds selected as models were sulfamethoxazole and acetaminophen. It should be remarked that the use of a second process is necessary as a result of the low mineralization yield reached by the exclusive application of ozone. Therefore, the influence of the water matrix has been studied in terms of hydrogen peroxide concentration, individual compound concentration and total organic carbon removed. Moreover, the concentration of different iron species in solution has been measured.
Keywords: Fenton, photo-Fenton, ozone, pharmaceutical compounds, hydrogen peroxide, water treatment.
3493 Comparative Study of Transformed and Concealed Data in Experimental Designs and Analyses
Authors: K. Chinda, P. Luangpaiboon
Abstract:
This paper presents a comparative study of coded-data methods for assessing the benefit of concealing natural data that constitute a trade secret. The influence of the number of replicates (rep), treatment effects (τ) and standard deviation (σ) on the efficiency of each transformation method is investigated. The experimental data are generated via computer simulations under the specified conditions of the process with a completely randomized design (CRD). Three data transformations are considered: the Box-Cox, arcsine and logit methods. The differences in the F statistic between coded and natural data (Fc−Fn) and the hypothesis-testing results were determined. The experimental results indicate that the Box-Cox results are significantly different from the natural data for smaller numbers of replicates and seem to be improper when a negative lambda has been assigned. On the other hand, the arcsine and logit transformations are more robust and provide more precise numerical results. In addition, alternative ways to select lambda in the power transformation are offered to achieve more appropriate outcomes.
Keywords: Experimental Designs, Box-Cox, Arcsine, Logit Transformations.
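To make the comparison concrete, the following minimal sketch (not the authors' code) applies the three codings to simulated CRD responses and contrasts the one-way ANOVA F statistic on natural versus coded data; SciPy is assumed available and the group means, replicate count and noise level are illustrative assumptions.

```python
# Sketch: Box-Cox, arcsine and logit codings of simulated CRD data and the
# resulting change in the one-way ANOVA F statistic (Fc - Fn).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated completely randomized design: 3 treatments, proportion-like responses.
rep, sigma = 5, 0.1
treatment_means = np.array([0.3, 0.5, 0.7])
groups = [rng.normal(m, sigma, rep).clip(0.01, 0.99) for m in treatment_means]

def f_stat(samples):
    return stats.f_oneway(*samples).statistic

natural_F = f_stat(groups)

# Box-Cox (lambda estimated from the pooled data), arcsine and logit codings.
pooled = np.concatenate(groups)
_, lam = stats.boxcox(pooled)
boxcox_F = f_stat([stats.boxcox(g, lmbda=lam) for g in groups])
arcsine_F = f_stat([np.arcsin(np.sqrt(g)) for g in groups])
logit_F = f_stat([np.log(g / (1 - g)) for g in groups])

for name, F in [("natural", natural_F), ("Box-Cox", boxcox_F),
                ("arcsine", arcsine_F), ("logit", logit_F)]:
    print(f"{name:8s} F = {F:6.2f}  (Fc - Fn = {F - natural_F:+.2f})")
```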
3492 Context Modeling and Context-Aware Service Adaptation for Pervasive Computing Systems
Authors: Moeiz Miraoui, Chakib Tadj, Chokri ben Amar
Abstract:
Devices in a pervasive computing system (PCS) are characterized by their context-awareness. It permits them to proactively provide adapted services to the user and applications. To do so, context must be well understood and modeled in an appropriate form which enhances its sharing between devices and provides a high level of abstraction. The most interesting methods for modeling context are those based on ontologies; however, the majority of the proposed methods fail to propose a generic context ontology, which limits their usability and keeps them specific to a particular domain. The adaptation task must be done automatically and without explicit intervention by the user. Devices of a PCS must acquire some intelligence which permits them to sense the current context and trigger the appropriate service, or provide a service in a more suitable form. In this paper we propose a generic service ontology for context modeling and a context-aware service adaptation based on a service-oriented definition of context.
Keywords: Pervasive computing system, context, context-awareness, service, context modeling, ontology, adaptation, machine learning.
3491 Phase Equilibrium of Volatile Organic Compounds in Polymeric Solvents Using Group Contribution Methods
Authors: E. Muzenda
Abstract:
Group contribution methods such as UNIFAC are of major interest to researchers and engineers involved in synthesis, feasibility studies, and the design and optimization of separation processes, as well as other applications of industrial use. Reliable knowledge of the phase equilibrium behavior is crucial for the prediction of the fate of a chemical in the environment and for other applications. The objective of this study was to predict the solubility of selected volatile organic compounds (VOCs) in glycol polymers and biodiesel. Measurements can be expensive and time consuming, hence the need for thermodynamic models. The results obtained in this study for the infinite dilution activity coefficients compare very well with those published in the literature and obtained through measurements. It is suggested that in preliminary design or feasibility studies of absorption systems for the abatement of volatile organic compounds, prediction procedures should be implemented, while accurate fluid phase equilibrium data should be obtained from experiment.
Keywords: Volatile organic compounds, Prediction, Phase equilibrium, Environmental, Infinite dilution.
3490 A Model to Determine Atmospheric Stability and its Correlation with CO Concentration
Authors: Kh. Ashrafi, Gh. A. Hoshyaripour
Abstract:
Atmospheric stability plays the most important role in the transport and dispersion of air pollutants. Different methods are used for stability determination, with varying degrees of complexity. Most of these methods are based on the relative magnitude of convective and mechanical turbulence in atmospheric motions. The Richardson number, Monin-Obukhov length, Pasquill-Gifford stability classification and Pasquill-Turner stability classification are the most common parameters and methods. The Pasquill-Turner Method (PTM), which is employed in this study, makes use of observations of wind speed, insolation and the time of day to classify atmospheric stability with distinguishable indices. In this study, a model is presented for the determination of atmospheric stability conditions using the PTM. As a case study, meteorological data of Mehrabad station in Tehran from 2000 to 2005 are applied to the model. Here, three different categories are considered to deduce the pattern of stability conditions. First, the total pattern of stability classification is obtained, and the results show that the atmosphere is in stable, neutral and unstable conditions 38.77%, 27.26% and 33.97% of the time, respectively. It is also observed that days are mostly unstable (66.50%) while nights are mostly stable (72.55%). Second, monthly and seasonal patterns are derived, and the results indicate that the relative frequency of stable conditions decreases from January to June and increases from June to December, while the results for unstable conditions behave in exactly the opposite manner. Autumn is the most stable season with a relative frequency of 50.69% for stable conditions, whilst it is 42.79%, 34.38% and 27.08% for winter, summer and spring, respectively. The hourly stability pattern is the third category, which points out that unstable conditions are dominant from approximately 03-15 GMT and 04-12 GMT for the warm and cold seasons, respectively. Finally, the correlation between atmospheric stability and CO concentration is obtained.
Keywords: Atmospheric stability, Pasquill-Turner classification, convective turbulence, mechanical turbulence, Tehran.
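As a rough illustration of how a PTM-style classifier is driven by wind speed, insolation and time of day, the sketch below implements a Pasquill-type lookup table; the thresholds and single-letter classes follow the common textbook simplification and are only an approximation of the indices used in the paper.

```python
# Minimal sketch of a Pasquill-type stability lookup (not the paper's exact
# PTM index scheme): daytime classes depend on wind speed and insolation,
# night-time classes on wind speed and cloud cover. Thresholds are the common
# textbook values and are an approximation.
DAY = {            # wind-speed band (m/s) -> class for strong/moderate/slight insolation
    (0, 2): ("A", "A", "B"),
    (2, 3): ("A", "B", "C"),
    (3, 5): ("B", "B", "C"),
    (5, 6): ("C", "C", "D"),
    (6, 99): ("C", "D", "D"),
}
NIGHT = {          # wind-speed band (m/s) -> class for mostly overcast / mostly clear
    (0, 2): ("F", "F"),
    (2, 3): ("E", "F"),
    (3, 5): ("D", "E"),
    (5, 6): ("D", "D"),
    (6, 99): ("D", "D"),
}

def stability_class(wind_ms, daytime, insolation="moderate", overcast=True):
    """Return a Pasquill stability class from A (very unstable) to F (very stable)."""
    table, cols = (DAY, ["strong", "moderate", "slight"]) if daytime else (NIGHT, [True, False])
    key = insolation if daytime else overcast
    for (lo, hi), classes in table.items():
        if lo <= wind_ms < hi:
            return classes[cols.index(key)]
    raise ValueError("wind speed out of range")

print(stability_class(1.5, daytime=True, insolation="strong"))   # 'A' (unstable day)
print(stability_class(4.0, daytime=False, overcast=False))        # 'E' (stable night)
```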
3489 The Linkage of Urban and Energy Planning for Sustainable Cities: The Case of Denmark and Germany
Authors: Jens-Phillip Petersen
Abstract:
The reduction of GHG emissions in buildings is a focus area of national energy policies in Europe, because buildings are responsible for a major share of the final energy consumption. It is at the local scale that policies to increase the share of renewable energies and energy efficiency measures are implemented. Municipalities, as local authorities and the entities responsible for land-use planning, have a direct influence on urban patterns and energy use, which makes them key actors in the transition towards sustainable cities. Hence, synchronizing urban planning with energy planning offers great potential to increase society's energy efficiency; this is highly significant for reaching GHG-reduction targets. In this paper, the actual linkage of urban planning and energy planning in Denmark and Germany was assessed; substantive barriers preventing their integration and driving factors that lead to successful transitions towards holistic urban energy planning procedures were identified.
Keywords: Energy planning, urban planning, renewable energies, sustainable cities.
3488 BIM Application Research Based on the Main Entrance and Garden Area Project of Shanghai Disneyland
Authors: Ying Yuken, Pengfei Wang, Zhang Qilin, Xiao Ben
Abstract:
Based on the main entrance and garden area (ME&G) project of Shanghai Disneyland, this paper introduces the application of BIM technology in this kind of low-rise comprehensive building with complex facade, electromechanical and decoration systems. BIM technology is applied to the whole process of design, construction and completion of the project. With the construction of the BIM application framework for the whole project, the key points of the BIM modeling methods for the different systems and the integration and coordination of the BIM models are elaborated in detail. The specific application methods of BIM technology in similar complex low-rise building projects are sorted out. Finally, the paper summarizes the benefits of BIM technology application and puts forward some suggestions for the BIM management mode and practical application of similar projects in the future.
Keywords: BIM, complex low-rise building, BIM modeling, model integration and coordination, 3D scanning.
3487 Natural Emergence of a Core Structure in Networks via Clique Percolation
Authors: A. Melka, N. Slater, A. Mualem, Y. Louzoun
Abstract:
Networks are often presented as containing a “core” and a “periphery.” The existence of a core suggests that some vertices are central and form the skeleton of the network, to which all other vertices are connected. An alternative view of graphs is through communities. Multiple measures have been proposed for dense communities in graphs, the most classical being k-cliques, k-cores, and k-plexes, all presenting groups of tightly connected vertices. We here show that the edge number thresholds for such communities to emerge and for their percolation into a single dense connectivity component are very close, in all networks studied. These percolating cliques produce a natural core and periphery structure. This result is generic and is tested in configuration models and in real-world networks. This is also true for k-cores and k-plexes. Thus, the emergence of this connectedness among communities leading to a core is not dependent on some specific mechanism but a direct result of the natural percolation of dense communities.
Keywords: Networks, cliques, percolation, core structure, phase transition.
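The percolation of dense communities described above can be explored with standard graph tooling; the short sketch below (NetworkX assumed available, random graphs standing in for the paper's configuration models and real-world networks) watches k-clique communities and the classical k-core grow as edge density increases.

```python
# Sketch: track how 3-clique communities merge into one large dense component
# as the edge probability of a G(n, p) random graph increases.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

n, k = 200, 3
for p in (0.01, 0.02, 0.04, 0.08):
    G = nx.gnp_random_graph(n, p, seed=1)
    comms = list(k_clique_communities(G, k))
    largest = max((len(c) for c in comms), default=0)
    core = nx.k_core(G, k - 1)               # classical k-core, for comparison
    print(f"p={p:.2f}: {len(comms):3d} {k}-clique communities, "
          f"largest covers {largest:3d} nodes, {k-1}-core has "
          f"{core.number_of_nodes():3d} nodes")
```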
3486 Application of Post-Stack and Pre-Stack Seismic Inversion for Prediction of Hydrocarbon Reservoirs in a Persian Gulf Gas Field
Authors: Nastaran Moosavi, Mohammad Mokhtari
Abstract:
Seismic inversion is a technique which has been in use for years, and its main goal is to estimate and model the physical characteristics of rocks and fluids. Generally, it is a combination of seismic and well-log data. Seismic inversion can be carried out through different methods; we have conducted and compared post-stack and pre-stack seismic inversion methods on real data in one of the fields in the Persian Gulf. Pre-stack seismic inversion can transform seismic data into rock physical properties such as P-impedance, S-impedance and density, while post-stack seismic inversion can only estimate P-impedance. These parameters can then be used in reservoir identification. Based on the results of inverting the seismic data, a gas reservoir was detected in one of the hydrocarbon fields in the south of Iran (Persian Gulf). By comparing post-stack and pre-stack seismic inversion, it can be concluded that pre-stack seismic inversion provides more reliable and detailed information for the identification and prediction of hydrocarbon reservoirs.
Keywords: Density, P-impedance, S-impedance, post-stack seismic inversion, pre-stack seismic inversion.
3485 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry
Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin
Abstract:
Flammability analysis of extruded polystyrene (XPS) has become crucial due to its utilization as insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS samples from the local market, a red and a grey one, were obtained from the results of thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed at the same heating rates. From the experiments, it was discovered that the red XPS released more heat than the grey XPS and that both materials showed two mass loss stages. Consequently, the kinetic parameters for the red XPS were higher than those for the grey XPS. A comparative evaluation of the activation energies from MCC and TG showed an insignificant degree of deviation, signifying an equivalent apparent activation energy from both methods. However, different activation energy profiles, resulting from the different chemical pathways, were presented when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
Keywords: Flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis.
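The two isoconversional fits named in the abstract reduce to straight-line regressions; the sketch below (not the authors' code, NumPy assumed, temperatures at a fixed conversion chosen purely for illustration) shows how the apparent activation energy is extracted from each method.

```python
# Sketch of the Kissinger-Akahira-Sunose (KAS) and Flynn-Wall-Ozawa (FWO)
# isoconversional fits. At a fixed conversion alpha, T_alpha is read from the
# TG/MCC curves at several heating rates beta, and the activation energy Ea
# follows from a straight-line fit:
#   KAS:  ln(beta / T^2) = C - Ea / (R * T)        -> Ea = -slope * R
#   FWO:  ln(beta)       = C - 1.052 * Ea / (R * T) -> Ea = -slope * R / 1.052
import numpy as np

R = 8.314  # J/(mol K)

def kas_activation_energy(betas, T_alpha):
    x = 1.0 / np.asarray(T_alpha)
    y = np.log(np.asarray(betas) / np.asarray(T_alpha) ** 2)
    return -np.polyfit(x, y, 1)[0] * R

def fwo_activation_energy(betas, T_alpha):
    x = 1.0 / np.asarray(T_alpha)
    y = np.log(np.asarray(betas))
    return -np.polyfit(x, y, 1)[0] * R / 1.052

# Hypothetical temperatures (K) at alpha = 0.5 for heating rates of 10-40 K/min.
betas = [10, 20, 30, 40]
T_alpha = [650.0, 662.0, 670.0, 676.0]
print(f"KAS  Ea ~ {kas_activation_energy(betas, T_alpha) / 1e3:.0f} kJ/mol")
print(f"FWO  Ea ~ {fwo_activation_energy(betas, T_alpha) / 1e3:.0f} kJ/mol")
```

Repeating the fit over a grid of conversions yields the activation-energy profiles whose TG/MCC dependence the abstract compares.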
3484 Face Recognition Based On Vector Quantization Using Fuzzy Neuro Clustering
Authors: Elizabeth B. Varghese, M. Wilscy
Abstract:
A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. Many algorithms have been proposed for face recognition. Vector Quantization (VQ) based recognition is a novel approach to face recognition. Here, a new codebook generation method for VQ-based face recognition using Integrated Adaptive Fuzzy Clustering (IAFC) is proposed. IAFC is a fuzzy neural network which incorporates a fuzzy learning rule into a competitive neural network. The performance of the proposed algorithm is demonstrated using the publicly available AT&T, Yale and Indian Face databases and a small face database, the DCSKU database, created in our lab. On all the databases the proposed approach achieved a higher recognition rate than most of the existing methods. In terms of Equal Error Rate (EER), the proposed codebook is also better than the existing methods.
Keywords: Face Recognition, Vector Quantization, Integrated Adaptive Fuzzy Clustering, Self Organization Map.
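To illustrate the VQ recognition pipeline the abstract builds on, the sketch below uses plain k-means as a stand-in for the IAFC codebook generator (the paper's fuzzy-neural learning rule is not reproduced); SciPy is assumed available and random arrays stand in for face images.

```python
# Sketch of VQ-based identification: each enrolled person gets a codebook of
# image-block features, and a probe is assigned to the person whose codebook
# reconstructs its blocks with the smallest total quantization distortion.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def image_blocks(img, size=4):
    h, w = img.shape
    return np.asarray([img[i:i + size, j:j + size].ravel()
                       for i in range(0, h - size + 1, size)
                       for j in range(0, w - size + 1, size)], dtype=float)

def train_codebook(images, codewords=16):
    feats = np.vstack([image_blocks(im) for im in images])
    codebook, _ = kmeans2(feats, codewords, minit="++", seed=0)
    return codebook

def distortion(img, codebook):
    _, dists = vq(image_blocks(img), codebook)
    return float(np.sum(dists ** 2))

def identify(probe, codebooks):
    return min(codebooks, key=lambda person: distortion(probe, codebooks[person]))

# Usage with hypothetical 32x32 grayscale arrays standing in for face images.
rng = np.random.default_rng(0)
gallery = {"personA": [rng.random((32, 32)) for _ in range(3)],
           "personB": [rng.random((32, 32)) + 0.5 for _ in range(3)]}
codebooks = {p: train_codebook(imgs) for p, imgs in gallery.items()}
print(identify(gallery["personB"][0], codebooks))   # expected: personB
```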
3483 Optimization of Transmission Lines Loading in TNEP Using Decimal Codification Based GA
Authors: H. Shayeghi, M. Mahdavi
Abstract:
Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. To date, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. However, none of these methods considers the adequacy rate of the lines at the end of the planning horizon, i.e., the expanded network loses adequacy after some time and needs to be expanded again. In this paper, expansion planning has been implemented by merging the line loading parameter into the STNEP and inserting the investment cost into the fitness function constraints using a genetic algorithm. The expanded network will possess maximum adequacy to supply the load demand and avoid overloading of the transmission lines later on. Finally, an adequacy index can be defined and used to compare designs that have different investment costs and adequacy rates. In this paper, the proposed idea has been tested on the Garver network. The results show that the expanded network possesses maximum economic efficiency.
Keywords: Adequacy Optimization, Transmission Expansion Planning, DCGA.
3482 Methods for Distinction of Cattle Using Supervised Learning
Authors: Radoslav Židek, Veronika Šidlová, Radovan Kasarda, Birgit Fuerst-Waltl
Abstract:
Machine learning represents a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction, based on models derived from existing data. The data can present identification patterns which are used to classify the data into groups. The result of the analysis is a pattern which can be used for the identification of a data set without the need for the input data used to create the pattern. An important requirement in this process is careful data preparation, validation of the model used, and its suitable interpretation. For breeders, it is important to know the origin of animals from the point of view of genetic diversity. In case of missing pedigree information, other methods can be used for the traceability of an animal's origin. Genetic diversity encoded in genetic data holds relatively useful information to identify animals originating from individual countries. We can conclude that the application of data mining to molecular genetic data using supervised learning is an appropriate tool for hypothesis testing and identifying an individual.
Keywords: Genetic data, Pinzgau cattle, supervised learning.
3481 Determination of Some Biochemical Parameters in Women during the First Trimester of Pregnancy (Normal Pregnancy and Missed Miscarriage)
Authors: Yahia M., Chaoui N., Chaouch A., Massinissa Yahia
Abstract:
Our study was designed to determine the metabolic changes of some biochemical parameters (cholesterol, triglycerides, iron, uric acid, urea and folic acid) and highlight their changes in 57 women from the Batna region during the first trimester of pregnancy. This practical work compared 27 women with missed miscarriage with 30 control subjects, normal pregnant women. The assay results revealed a highly significant difference between the two groups in serum iron (64.00 vs 93.54; P = 0.0006) and in folate levels (6.70 vs 9.22; P < 0.001), but no difference was found regarding calcium (9.69 vs 10.20), urea (0.19 vs 0.17), uric acid (33.96 vs 32.76), cholesterol (1.283 vs 1.431) or triglycerides (0.8852 vs 0.8290). The present study indicates that iron and folate deficiencies are associated with missed miscarriage, but no direct pathophysiological link has been determined. Further in-depth studies are needed to determine the exact mechanism by which these deficits lead to a missed miscarriage.
Keywords: Biochemical parameters, pregnant women, missed miscarriage.
3480 Adaption Model for Building Agile Pronunciation Dictionaries Using Phonemic Distance Measurements
Authors: Akella Amarendra Babu, Rama Devi Yellasiri, Natukula Sainath
Abstract:
Where human beings can easily learn and adopt pronunciation variations, machines need training before being put into use. Humans also keep a minimum vocabulary, and their pronunciation variations are stored at the front end of their memory for ready reference, while machines keep the entire pronunciation dictionary for ready reference. Supervised methods used for the preparation of pronunciation dictionaries take large amounts of manual effort, cost and time, and are not suitable for real-time use. This paper presents an unsupervised adaptation model for building agile and dynamic pronunciation dictionaries online. These methods mimic the human approach to learning new pronunciations in real time. A new algorithm for measuring sound distances, called Dynamic Phone Warping, is presented and tested. The performance of the system is measured using an adaptation model, and the precision is found to be better than 86 percent.
Keywords: Pronunciation variations, dynamic programming, machine learning, natural language processing.
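The paper's Dynamic Phone Warping algorithm is not reproduced here, but the general idea it names, a dynamic-programming distance between phoneme sequences, can be sketched as below; the substitution costs between "similar" phones are purely illustrative assumptions.

```python
# Sketch of a dynamic-programming distance between two phoneme sequences,
# in the spirit of Dynamic Phone Warping (exact cost model not reproduced).
def phone_distance(seq_a, seq_b, sub_cost=None, indel=1.0):
    sub_cost = sub_cost or {}
    n, m = len(seq_a), len(seq_b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * indel
    for j in range(1, m + 1):
        D[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            a, b = seq_a[i - 1], seq_b[j - 1]
            s = 0.0 if a == b else sub_cost.get((a, b), sub_cost.get((b, a), 1.0))
            D[i][j] = min(D[i - 1][j] + indel,      # deletion
                          D[i][j - 1] + indel,      # insertion
                          D[i - 1][j - 1] + s)      # match / substitution
    return D[n][m]

# Two pronunciations of "tomato" in ARPAbet-style phones (illustrative costs).
costs = {("EY", "AA"): 0.4, ("AH", "OW"): 0.5}
print(phone_distance("T AH M EY T OW".split(), "T AH M AA T OW".split(), costs))
# -> 0.4; a distance below some threshold would merge the two variants online.
```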
3479 Exploring the Potential of Phase Change Memories as an Alternative to DRAM Technology
Authors: Venkataraman Krishnaswami, Venkatasubramanian Viswanathan
Abstract:
Scalability poses a severe threat to the existing DRAM technology. The capacitors that are used for storing and sensing charge in DRAM generally do not scale beyond 42 nm. This is because the capacitors must be sufficiently large for reliable sensing and charge storage. This leaves DRAM memory scaling in jeopardy, as charge sensing and storage mechanisms become extremely difficult. In this paper we provide an overview of the potential and the possibilities of using Phase Change Memory (PCM) as an alternative to the existing DRAM technology. The main challenges that we encounter in using PCM are its limited endurance, high access latencies, and higher dynamic energy consumption than that of conventional DRAM. We then provide an overview of various methods which can be employed to overcome these drawbacks. Hybrid memories involving both PCM and DRAM can be used to achieve good tradeoffs in access latency and storage density. We conclude by presenting the results of these methods, which make PCM a potential replacement for the current DRAM technology.
Keywords: DRAM, Phase Change Memory.
3478 A Simplified Single Correlator Rake Receiver for CDMA Communications
Authors: K. Murali Krishna, Abhijit Mitra, C. Ardil
Abstract:
This paper presents a single correlator RAKE receiver for direct sequence code division multiple access (DS-CDMA) systems. In conventional RAKE receivers, multiple correlators are used to despread the multipath signals and then to align and combine those signals in a later stage before making a bit decision. The simplified receiver structure presented here uses a single correlator and a single code sequence generator to recover the multipaths. Modified Walsh-Hadamard codes are used here for data spreading, as they provide better decorrelation properties for the multipath signals. The main advantage of this receiver structure is that it requires only a single correlator and a code generator, in contrast to the conventional RAKE receiver concept with multiple correlators. The results show that the proposed receiver achieves better bit error rates than the conventional one for more than one multipath.
Keywords: RAKE receiver, Code division multiple access, Modified Walsh-Hadamard codes, Single correlator.
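The single-correlator idea can be illustrated on one BPSK bit as below; ordinary Walsh-Hadamard rows are used (the paper's modified codes are not reproduced), and the two-path channel, delays and noise level are illustrative assumptions.

```python
# Sketch: spread one BPSK bit with a Walsh-Hadamard row, pass it through a
# two-path channel, then reuse one correlator sequentially at each candidate
# delay and sum the aligned outputs before the bit decision.
import numpy as np

def hadamard(n):                       # Sylvester construction, n a power of 2
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64
code = hadamard(N)[5]                  # one spreading sequence of +/-1 chips
bit = -1                               # BPSK symbol in {-1, +1}
tx = bit * code

# Two-path channel: direct path plus an attenuated echo delayed by 3 chips.
rng = np.random.default_rng(1)
rx = np.zeros(N + 3)
rx[:N] += 1.0 * tx
rx[3:3 + N] += 0.5 * tx
rx += rng.normal(0, 0.3, rx.size)

# One correlator, run once per candidate delay (instead of one per RAKE finger).
outputs = [np.dot(rx[d:d + N], code) / N for d in (0, 3)]
decision = np.sign(sum(outputs))
print(outputs, "-> decided bit:", int(decision))   # expected: -1
```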
3477 Adaptive Noise Reduction Algorithm for Speech Enhancement
Authors: M. Kalamani, S. Valarmathy, M. Krishnamoorthi
Abstract:
In this paper, a Least Mean Square (LMS) adaptive noise reduction algorithm is proposed to enhance the speech signal from noisy speech. Here, the speech signal is enhanced by varying the step size as a function of the input signal. Objective and subjective measures are made under various noises for the proposed and existing algorithms. From the experimental results, it is seen that the proposed LMS adaptive noise reduction algorithm reduces the Mean Square Error (MSE) and Log Spectral Distance (LSD) compared to the earlier methods under various noise conditions with different input SNR levels. In addition, the proposed algorithm increases the Peak Signal to Noise Ratio (PSNR) and Segmental SNR improvement (ΔSNRseg) values and improves the Mean Opinion Score (MOS) compared to the various existing LMS adaptive noise reduction algorithms. From these experimental results, it is observed that the proposed LMS adaptive noise reduction algorithm reduces speech distortion and residual noise compared to the existing methods.
Keywords: LMS, speech enhancement, speech quality, residual noise.
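One common way to "vary the step size as a function of the input signal" is the power-normalized (NLMS) update sketched below; the paper's exact step-size rule may differ, and the tone-plus-noise signals here are hypothetical stand-ins for speech.

```python
# Sketch of an adaptive noise canceller with a power-normalized (NLMS) step size.
import numpy as np

def nlms_cancel(noisy, noise_ref, taps=16, mu=0.5, eps=1e-6):
    w = np.zeros(taps)
    out = np.zeros_like(noisy)
    for n in range(taps, len(noisy)):
        x = noise_ref[n - taps + 1:n + 1][::-1]   # current and past reference samples
        e = noisy[n] - w @ x                      # error = enhanced speech sample
        w += (mu / (eps + x @ x)) * e * x         # step size scaled by input power
        out[n] = e
    return out

# Hypothetical signals: a tone as "speech", filtered white noise as corruption.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
speech = 0.6 * np.sin(2 * np.pi * 440 * t)
noise_ref = rng.normal(0, 1, t.size)
noise_in_mix = np.convolve(noise_ref, [0.6, 0.3, 0.1])[:t.size]   # causal coloring
enhanced = nlms_cancel(speech + noise_in_mix, noise_ref)
print("noise MSE before:", float(np.mean(noise_in_mix ** 2)),
      "residual MSE after:", float(np.mean((enhanced - speech)[1000:] ** 2)))
```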
3476 Induction Motor Speed Control Using Fuzzy Logic Controller
Authors: V. Chitra, R. S. Prabhakar
Abstract:
Because of their low maintenance and robustness, induction motors have many applications in industry. The speed control of the induction motor is important to achieve maximum torque and efficiency. Various speed control techniques, such as Direct Torque Control, Sensorless Vector Control and Field Oriented Control, are discussed in this paper. The soft computing technique of fuzzy logic is applied in this paper to the speed control of the induction motor to achieve maximum torque with minimum loss. The fuzzy logic controller is implemented using the Field Oriented Control technique, as it provides better control of motor torque with high dynamic performance. The motor model is designed and the membership functions are chosen according to the parameters of the motor model. The simulated design is tested using various toolboxes in MATLAB. The results show that the efficiency and reliability of the proposed speed controller are good.
Keywords: Induction motor, Field Oriented Control, Fuzzy logic controller, Maximum torque, Membership function.
3475 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju, Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal, which helps in the understanding of the information in the speech signal. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform which improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of noise, i.e., prior knowledge of the noise; hence, it is easy to implement and it reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the Short-Time Objective Intelligibility (STOI) measure, and the results obtained are compared with Universal Discrete Wavelet Transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
Keywords: Discrete Wavelet Transform, speech intelligibility, STOI, standard deviation.
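A frame-wise wavelet-thresholding loop with a per-frame decomposition level can be sketched as below; PyWavelets is assumed available, the level-selection test is a crude stand-in for the paper's signal/noise dominance criteria, and the input signal and noise floor are illustrative assumptions.

```python
# Sketch: frame-wise wavelet denoising where the decomposition level is chosen
# per frame from a simple signal/noise dominance test (not the paper's rule).
import numpy as np
import pywt

def denoise_frame(frame, noise_floor, wavelet="db4", max_level=5):
    # Deeper decomposition for noise-dominant frames, shallow for speech-dominant ones.
    level = max_level if np.var(frame) < 2.0 * noise_floor else 2
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(frame)))            # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(frame)]

def enhance(signal, frame_len=256, noise_floor=0.01):
    out = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        out[start:start + frame_len] = denoise_frame(
            signal[start:start + frame_len], noise_floor)
    return out

# Hypothetical noisy input; in practice the noise floor would be estimated from
# speech pauses and the result scored with an STOI implementation.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 300 * np.arange(4096) / 8000)
enhanced = enhance(clean + rng.normal(0, 0.2, clean.size))
print("residual MSE:", float(np.mean((enhanced - clean) ** 2)))
```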
3474 Parenting Styles and Their Relation to Videogame Addiction
Authors: Petr Květon, Martin Jelínek
Abstract:
We try to identify the role of various aspects of parenting style in the phenomenon of videogame playing addiction. Relevant self-report questionnaires were part of a wider set of methods focused on constructs related to videogame playing. The battery of methods was administered in school settings in paper-and-pencil form. The research sample consisted of 333 (166 males, 167 females) elementary and high school students aged between 10 and 19 years (m=14.98, sd=1.77). Using stepwise regression analysis, we assessed the influence of demographic variables (gender and age) and parenting styles. Age and gender together explained 26.3% of game addiction variance (F(2,330)=58.81, p<.01). By adding four aspects of parenting styles (inconsistency, involvement, control, and warmth), another 10.2% of variance was explained (∆F(4,326)=13.09, p<.01). The significant predictors were gender, with males scoring higher on the game addiction scale (B=0.70, p<.01), age (β=-0.18, p<.01), with younger children showing a higher level of addiction, and parental inconsistency (β=0.30, p<.01), where the higher the inconsistency in upbringing, the more developed the game playing addiction.
Keywords: Gender, parenting styles, video games, addiction.
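The two-step (hierarchical) regression and the F-change test reported above can be sketched as follows; statsmodels is assumed available and the data are simulated stand-ins, not the study's sample.

```python
# Sketch of a two-step hierarchical regression with an F-change test for the
# block of parenting-style predictors added in step 2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 333
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "age": rng.uniform(10, 19, n),
    "inconsistency": rng.normal(0, 1, n),
    "involvement": rng.normal(0, 1, n),
    "control": rng.normal(0, 1, n),
    "warmth": rng.normal(0, 1, n),
})
df["addiction"] = (0.7 * df.male - 0.18 * (df.age - 15)
                   + 0.3 * df.inconsistency + rng.normal(0, 1, n))

step1 = smf.ols("addiction ~ male + age", data=df).fit()
step2 = smf.ols("addiction ~ male + age + inconsistency + involvement"
                " + control + warmth", data=df).fit()

# F-change for adding the four parenting-style predictors in step 2.
f_change, p_value, df_diff = step2.compare_f_test(step1)
print(f"R2 step1 = {step1.rsquared:.3f}, R2 step2 = {step2.rsquared:.3f}")
print(f"delta-F({int(df_diff)}, {int(step2.df_resid)}) = {f_change:.2f}, p = {p_value:.4f}")
```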
3473 Competitive Advantage Effecting Firm Performance: Case Study of Small and Medium Enterprises in Thailand
Authors: Somdech Rungsrisawas
Abstract:
The objectives of this study are to examine the relationship between the competitive advantage of small and medium enterprises (SMEs) and their overall performance. A mixed method has been applied to identify the effect of determinants on competitive advantage. The sample is composed of SMEs in product and service businesses. The study has been tested at an organizational level with samples of SME entrepreneurs, business successors, and boards of directors or management teams. Quantitative analysis has been conducted through multiple regression analysis with 400 samples. The findings illustrate that each aspect of competitive advantage needs a different set of driving factors to explain either the direct or the indirect effect on firm performance. Interestingly, technological capability is a perfect mediator between interorganizational cooperation and competitive advantage. In addition, differentiation is difficult for customers to perceive, as well as difficult to manage; however, it is considered important for developing an SME's product or service for firm sustainability.
Keywords: Competitive advantage, firm performance, technological capability, small and medium enterprise, SMEs.
3472 Glass Bottle Inspector Based on Machine Vision
Authors: Huanjun Liu, Yaonan Wang, Feng Duan
Abstract:
This paper studies an intelligent glass bottle inspector based on machine vision, replacing manual inspection. The system structure is illustrated in detail. The paper presents a method based on the watershed transform to segment the possible defective regions and extract features of the bottle wall by rules. Then the wavelet transform is used to extract features of the bottle finish from images. After extracting features, a fuzzy support vector machine ensemble is put forward as the classifier. To ensure that the fuzzy support vector machines have good classification ability, a GA-based ensemble method is used to combine the several fuzzy support vector machines. The experiments demonstrate that, using this inspector to inspect glass bottles, the accuracy rate may reach above 97.5%.
Keywords: Intelligent Inspection, Support Vector Machines, Ensemble Methods, watershed transform, Wavelet Transform.
3471 Creeping Insulation - Hong Kong Green Wall
Authors: X. L. Zhang, K. L. Li, R. M. Skitmore
Abstract:
Hong Kong is a densely populated city suffering badly from the urban heat island effect. Green walls offer a means of ameliorating the situation, but there are doubts over their suitability in Hong Kong's unique environment. In this paper, we look at the potential for green walls in Hong Kong, first by summarizing some of the Chinese green walling systems and associated vegetation in use, then by an introduction to three existing green walls in Hong Kong, and finally through a small experiment aimed at identifying the likely main effects of green-walled housing.
The results indicate that green walling in Hong Kong is likely to provide enhanced internal house environment in terms of warm weather temperature reduction, stabilization and damping, with direct energy savings in air-conditioning and indirect district benefits of reduced heat island effect and carbon emissions. The green walling insulation properties also suggest the possibility of warmer homes in winter and/or energy savings in mechanical heating provision.
Keywords: Case studies, experiment, green wall, Hong Kong.
3470 Bottom Up Text Mining through Hierarchical Document Representation
Authors: Y. Djouadi., F. Souam.
Abstract:
Most of the existing text mining approaches are proposed with the transaction database model in mind. Thus, the mined dataset is structured using just one concept, the "transaction", whereas the whole dataset is modeled using the "set" abstract type. In such cases, the structure of the whole dataset and the relationships among the transactions themselves are not modeled and, consequently, not considered in the mining process. We believe that taking into account the structural properties of hierarchically structured information (e.g. textual documents) in the mining process can lead to better results. For this purpose, a hierarchical association rule mining approach for textual documents is proposed in this paper, and the classical set-oriented mining approach is reconsidered in favor of a Directed Acyclic Graph (DAG) oriented approach. Natural language processing techniques are used in order to obtain the DAG structure. Based on this graph model, a hierarchical bottom-up algorithm is proposed. The main idea is that each node is mined with its parent node.
Keywords: Graph based association rules mining, Hierarchical document structure, Text mining.
3469 Zero Inflated Models for Overdispersed Count Data
Authors: Y. N. Phang, E. F. Loh
Abstract:
Zero inflated models are usually used in modeling count data with excess zeros, where the excess zeros could be structural zeros or zeros which occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and health sciences, including sexual health and dental epidemiology. The most popular zero inflated models used by many researchers are the zero inflated Poisson and zero inflated negative binomial models. In addition, zero inflated generalized Poisson and zero inflated double Poisson models are also discussed and found in some literature. Recently, the zero inflated inverse trinomial and zero inflated strict arcsine models have been advocated and proven to serve as alternative models in modeling overdispersed count data caused by excessive zeros and unobserved heterogeneity. The purpose of this paper is to review some related literature and provide a variety of examples from different disciplines in the application of zero inflated models. Different model selection methods used in model comparison are discussed.
Keywords: Overdispersed count data, model selection methods, likelihood ratio, AIC, BIC.
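As a small illustration of the model class and of a likelihood-based comparison of the kind the abstract mentions, the sketch below fits a zero inflated Poisson and a plain Poisson model to simulated counts with structural zeros; statsmodels is assumed available and the data are not from any of the cited applications.

```python
# Sketch: zero inflated Poisson vs. plain Poisson on simulated counts with
# 30% structural zeros; lower AIC/BIC indicates the better-fitting model.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
lam = np.exp(0.5 + 0.8 * x)                       # Poisson mean
structural_zero = rng.random(n) < 0.3             # excess zeros
y = np.where(structural_zero, 0, rng.poisson(lam))

X = sm.add_constant(x)
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
poisson_fit = sm.Poisson(y, X).fit(disp=0)

print("ZIP     AIC/BIC:", round(zip_fit.aic, 1), round(zip_fit.bic, 1))
print("Poisson AIC/BIC:", round(poisson_fit.aic, 1), round(poisson_fit.bic, 1))
```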
3468 Antecedent Factors of Ethical Ideologies in Moral Judgment: Evidence from the Mixed Method Study
Authors: N. Mustamil, M. Quaddus
Abstract:
This research investigates the factors that influence moral judgments when dealing with ethical dilemmas in the organizational context. It also investigates the antecedents of individual ethical ideology (idealism and relativism). A mixed method study, which combines qualitative (field study) and quantitative (survey) approaches, was used. An initial model was developed first and then fine-tuned based on the field studies. Data were collected from managers in large Malaysian organizations. The results of this study reveal that in-group collectivism culture, power distance culture, parental values, and religiosity were significant antecedents of ethical ideology. However, the direct effects of these variables on moral judgment were not significant. Furthermore, the results of this study confirm the significant effects of ethical ideology on moral judgment. This study provides valuable insight into evaluating the validity of existing theory as proposed in the literature and offers significant practical implications.
Keywords: Antecedent Factors, Ethical Ideology, Mixed Method, Moral Judgment.
3467 Analytical and Experimental Methods of Design for Supersonic Two-Stage Ejectors
Authors: S. Daneshmand, C. Aghanajafi, A. Bahrami
Abstract:
In this paper, supersonic ejectors are studied experimentally and analytically. An ejector is a device that uses the energy of one fluid to move another fluid. This device works like a vacuum pump without the use of a piston, rotor or any other moving component. An ejector contains an active nozzle, a passive nozzle, a mixing chamber and a diffuser. Since the fluid viscosity is large and the flow in the mixing chamber is turbulent and three-dimensional, numerical methods consume a long time and a high cost to analyze the flow in ejectors. Therefore, this paper presents a simple analytical method that is based on the precise governing equations of fluid mechanics. According to the derived analytical relations, a computer code has been prepared to analyze the flow in the different components of the ejector. An experiment has been performed in the supersonic regime 1.5
3466 A Retrievable Genetic Algorithm for Efficient Solving of Sudoku Puzzles
Authors: Seyed Mehran Kazemi, Bahare Fatemi
Abstract:
Sudoku is a logic-based combinatorial puzzle which is popular among people of different ages. Due to this popularity, computer programs are being developed to generate and solve Sudoku puzzles with different levels of difficulty. Several methods and algorithms have been proposed and used in different programs to efficiently solve Sudoku puzzles. Various search methods, such as stochastic local search, have been applied to this problem. The Genetic Algorithm (GA) is one of the algorithms which has been applied to this problem in different forms and in several works in the literature. In these works, chromosomes with little or no information were considered, and the obtained results were not promising. In this paper, we propose a new way of applying a GA to this problem which uses more informed chromosomes than other works in the literature. We optimize the parameters of our GA using puzzles with different levels of difficulty. Then we use the optimized values of the parameters to solve various puzzles and compare our results to another GA-based method for solving Sudoku puzzles.
Keywords: Genetic algorithm, optimization, solving Sudoku puzzles, stochastic local search.
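The sketch below shows one common GA encoding for Sudoku, not necessarily the paper's "more informed" chromosome design: each row is filled with a permutation that respects the given clues, fitness counts only column and box conflicts, and mutation swaps two non-clue cells within a row.

```python
# Sketch of the building blocks of a permutation-based GA for Sudoku.
import random

def fill_rows(puzzle):
    """Chromosome = 9x9 grid whose rows are valid permutations of 1..9 (0 = empty clue cell)."""
    grid = []
    for row in puzzle:
        missing = [v for v in range(1, 10) if v not in row]
        random.shuffle(missing)
        it = iter(missing)
        grid.append([v if v else next(it) for v in row])
    return grid

def fitness(grid):
    """Number of duplicate entries over all columns and 3x3 boxes (0 = solved)."""
    conflicts = 0
    for j in range(9):
        conflicts += 9 - len({grid[i][j] for i in range(9)})
    for bi in range(0, 9, 3):
        for bj in range(0, 9, 3):
            box = [grid[i][j] for i in range(bi, bi + 3) for j in range(bj, bj + 3)]
            conflicts += 9 - len(set(box))
    return conflicts

def mutate(grid, puzzle):
    """Swap two non-clue cells in a random row (keeps every row a valid permutation)."""
    i = random.randrange(9)
    free = [j for j in range(9) if puzzle[i][j] == 0]
    if len(free) >= 2:
        a, b = random.sample(free, 2)
        grid[i][a], grid[i][b] = grid[i][b], grid[i][a]
    return grid

# Usage: a GA loop would select, cross over and mutate candidates to drive fitness to 0.
empty = [[0] * 9 for _ in range(9)]
candidate = fill_rows(empty)
print(fitness(candidate))          # typically > 0 for a random candidate
```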
3465 Peakwise Smoothing of Data Models using Wavelets
Authors: D Sudheer Reddy, N Gopal Reddy, P V Radhadevi, J Saibaba, Geeta Varadan
Abstract:
Smoothing or filtering of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data; a generalization of this led to the development of the Savitzky-Golay filter. Many window smoothing methods were developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and Kaiser windows. Function approximation of the data by polynomial regression, Fourier expansion or wavelet expansion also gives smoothed data. Wavelets also smooth the data to a great extent by thresholding the wavelet coefficients. Almost all smoothing methods destroy the peaks and flatten them when the support of the window is increased. In certain applications it is desirable to retain peaks while smoothing the data as much as possible. In this paper we present a methodology called peakwise smoothing that will smooth the data to any desired level without losing the major peak features.
Keywords: smoothing, moving average, peakwise smoothing, spatial density models, planar shape models, wavelets.
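The peak-flattening effect the abstract discusses is easy to demonstrate; the sketch below (SciPy assumed available, with a synthetic noisy peak, and not reproducing the paper's peakwise-smoothing method itself) compares how a moving average and a Savitzky-Golay filter reduce the height of a narrow peak as the window widens relative to it.

```python
# Sketch: peak-height loss from a moving average vs. a Savitzky-Golay filter.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 501)
clean = np.exp(-x ** 2 / 0.08)                       # narrow peak, true height 1.0
noisy = clean + rng.normal(0, 0.05, x.size)

window = 51                                           # wide relative to the peak
moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")
savgol = savgol_filter(noisy, window_length=window, polyorder=3)

for name, y in [("noisy", noisy), ("moving average", moving_avg),
                ("Savitzky-Golay", savgol)]:
    print(f"{name:15s} peak height = {y.max():.3f}")
```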