Search results for: statistical approach.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5933

5873 Embedding the Dimensions of Sustainability into City Information Modelling

Authors: Ali M. Al-Shaery

Abstract:

The purpose of this paper is to address the functions of sustainability dimensions in city information modelling and to present the required sustainability criteria that support establishing a sustainable planning framework for enhancing existing cities and developing future smart cities. The paper is divided into two sections. The first section is based on the examination of a wide and extensive array of cross-disciplinary literature from the last decade and a half to conceptualize the terms ‘sustainable’ and ‘smart city’ and to map their associated criteria to city information modelling. The second section is based on analyzing two approaches relating to city information modelling, namely statistical and dynamic approaches, and their suitability in the development of cities’ action plans. The paper argues that the use of statistical approaches to embed sustainability dimensions in city information modelling has limited value. Despite the popularity of such approaches in addressing other dimensions, such as utility and service management, in the development and action plans of world cities, these approaches are unable to address the dynamics across various city sectors with regard to economic, environmental and social criteria. The paper suggests an integrative, dynamic and cross-disciplinary planning approach to embedding sustainability dimensions in city information modelling frameworks. Such an approach will pave the way towards optimal planning and implementation of priority actions of projects and investments. The approach can be used to achieve three main goals: (1) produce better development and action plans for world cities, (2) serve the development of an integrative, dynamic and cross-disciplinary framework that incorporates economic, environmental and social sustainability criteria, and (3) address areas that require further attention in the development of future sustainable and smart cities. The paper presents an innovative approach to city information modelling and a well-argued, balanced hierarchy of sustainability criteria that can contribute to an area of research which is still in its infancy in terms of development and management.

Keywords: Information modelling, smart city, sustainable city, sustainability dimensions, sustainability criteria, city development planning.

5872 A Thought on Exotic Statistical Distributions

Authors: R K Sinha

Abstract:

Statistical distributions are used to model the nature of various types of data sets. Although these distributions are mostly uni-modal, it is quite common to see multiple modes in the observed distribution of the underlying variables, which makes precise modeling unrealistic. The observed data may lack smoothness not necessarily due to randomness, but also due to non-randomness resulting in zigzag curves, oscillations, humps, etc. The present paper argues that trigonometric functions, which have not been used in probability functions of distributions so far, have the potential to take care of this if incorporated in the distribution appropriately. A simple distribution involving trigonometric functions (named the Sinoform Distribution) is illustrated in the paper with a data set. The paper demonstrates the capacity of trigonometric functions to make statistical distributions exotic: it becomes possible to have multiple modes, oscillations and zigzag curves in the density, which could be suitable for explaining the underlying nature of a given data set.
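
The abstract does not give the functional form of the Sinoform Distribution, so the following is only a hypothetical sketch of the general idea it describes: modulating a standard kernel with a trigonometric term and renormalizing, which is enough to produce multiple modes and oscillations in the density.

```python
import numpy as np

def sinoform_like_pdf(x, mu=0.0, sigma=1.0, a=0.8, omega=3.0):
    """Illustrative trig-modulated density (not the paper's exact form):
    a Gaussian kernel multiplied by (1 + a*cos(omega*x)), renormalized
    numerically so it integrates to one. |a| < 1 keeps it non-negative."""
    base = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    shaped = base * (1.0 + a * np.cos(omega * x))
    grid = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 20001)
    norm = np.trapz(np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
                    * (1.0 + a * np.cos(omega * grid)), grid)
    return shaped / norm

x = np.linspace(-4, 4, 801)
p = sinoform_like_pdf(x)
print("number of local modes:",
      np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])))  # > 1 => multimodal
```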

Keywords: Exotic Statistical Distributions, Kurtosis, Mixture Distributions, Multi-modal

5871 Identification of Seat Belt Wearing Compliance Associate Factors in Malaysia: Evidence-based Approach

Authors: L. Fauziana, M. F. Siti Atiqah, Z. A. Ahmad Noor Syukri

Abstract:

The aim of the study was to identify factors associated with seat belt wearing among road users in Malaysia. An evidence-based approach through in-depth crash investigation was utilised to meet this objective. The scope covers crashes investigated by the Malaysian Institute of Road Safety Research (MIROS) involving passenger vehicles between 2007 and 2010. Crash information on a total of 99 crash cases involving 240 vehicles and 864 occupants was obtained during the study period. Statistical tests and logistic regression analysis were performed. The results of the analysis revealed that gender, seat position and age were associated with seat belt wearing compliance in Malaysia. Males are 97.6% more likely to wear a seat belt compared to females (95% CI 1.317 to 2.964). By seat position, the findings indicate that front occupants were 82 times more likely to be wearing a seat belt (95% CI 30.199 to 225.342) compared to rear occupants. It is also important to note that the odds of seat belt wearing increased by about 2.64% (95% CI 1.0176 to 1.0353) for every one-year increase in age. This study is essential in understanding the tendency of Malaysian vehicle occupants to belt up. The factors highlighted in this study should be emphasized in road safety education in order to increase the seat belt wearing rate in this country and ultimately to prevent deaths due to road crashes.
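
The abstract reports its findings as odds ratios with 95% confidence intervals from logistic regression. As a minimal sketch of how such figures are typically obtained (using simulated occupant records rather than the MIROS data, with coefficients chosen only so the simulated odds ratios are of the same order as those reported), the model could be fitted with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative occupant-level data (not the MIROS dataset): 1 = belted.
rng = np.random.default_rng(0)
n = 864
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "front_seat": rng.integers(0, 2, n),
    "age": rng.integers(5, 80, n),
})
logit = -1.0 + 0.7 * df.male + 4.4 * df.front_seat + 0.026 * df.age
df["belted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["male", "front_seat", "age"]])
fit = sm.Logit(df["belted"], X).fit(disp=False)

# Odds ratios and 95% CIs, the form in which the abstract reports results.
odds = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([odds.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```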

Keywords: crash investigation, risk compensation, road safety, seat belt wearing, statistical analysis.

5870 Development of Sleep Quality Index Using Heart Rate

Authors: Dongjoo Kim, Chang-Sik Son, Won-Seok Kang

Abstract:

Adequate sleep affects various parts of one’s overall physical and mental life. As one of the methods for determining the appropriate amount of sleep, this research presents a heart-rate-based sleep quality index. In order to evaluate sleep quality using the heart rate, sleep data from 280 subjects collected over one month are used. The sleep data are categorized into three heart-rate ranges. After categorization, several features are extracted and their statistical significance is verified. The results show that some features of this sleep quality index model are statistically significant. Thus, this heart-rate-based sleep quality index may be a useful discriminator of sleep.
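
The abstract does not specify the three heart-rate ranges or the extracted features, so the sketch below only illustrates the general workflow under assumed cut points: bin each night's heart-rate samples into three ranges, use the fraction of time spent in each range as features, and test the features for significance between two simulated groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def hr_features(hr, edges=(55, 70)):
    """Fraction of the night spent in each of three heart-rate ranges.
    The cut points are illustrative, not the paper's ranges."""
    low = np.mean(hr < edges[0])
    mid = np.mean((hr >= edges[0]) & (hr < edges[1]))
    high = np.mean(hr >= edges[1])
    return low, mid, high

# Simulated nights: 'good' sleepers spend more time at lower heart rates.
good = np.array([hr_features(rng.normal(60, 8, 480)) for _ in range(140)])
poor = np.array([hr_features(rng.normal(66, 8, 480)) for _ in range(140)])

for i, name in enumerate(["low", "mid", "high"]):
    t, p = stats.ttest_ind(good[:, i], poor[:, i])
    print(f"{name}-range fraction: t = {t:.2f}, p = {p:.4g}")
```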

Keywords: Sleep, sleep quality, heart rate, statistical analysis.

5869 Application of Statistical Approach for Optimizing CMCase Production by Bacillus tequilensis S28 Strain via Submerged Fermentation Using Wheat Bran as Carbon Source

Authors: A. Sharma, R. Tewari, S. K. Soni

Abstract:

Biofuel production has come forth as a future technology to combat the problem of depleting fossil fuels. Bio-based ethanol production from enzymatic lignocellulosic biomass degradation serves as an efficient method and is catching the eye of the scientific community. The high cost of the enzyme is the major obstacle preventing the commercialization of this process. Thus, the main objective of the present study was to optimize the composition of medium components for enhancing cellulase production by a newly isolated strain of Bacillus tequilensis. Nineteen factors were taken into account using a statistical Plackett-Burman design. The significant variables influencing cellulase production were further employed in statistical Response Surface Methodology using a Central Composite Design to maximize cellulase production. The optimum medium composition for cellulase production was: peptone (4.94 g/L), ammonium chloride (4.99 g/L), yeast extract (2.00 g/L), Tween-20 (0.53 g/L), calcium chloride (0.20 g/L) and cobalt chloride (0.60 g/L) with pH 7, agitation speed 150 rpm and 72 h incubation at 37 °C. Analysis of variance (ANOVA) revealed a high coefficient of determination (R2) of 0.99. A maximum cellulase productivity of 11.5 IU/ml was observed against the model-predicted value of 13 IU/ml. The enzyme was optimally active at 60 °C and pH 5.5.
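
As a rough illustration of the Plackett-Burman screening step (not the study's actual 19-factor design or its measured responses), the sketch below builds the classical 12-run design for up to 11 two-level factors and ranks main effects on a simulated response:

```python
import numpy as np

def pb12():
    """12-run Plackett-Burman design for up to 11 two-level factors,
    built by cycling the classical generator row and appending a row of -1s."""
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

rng = np.random.default_rng(2)
X = pb12()
# Simulated CMCase response: only factors 0 and 3 truly matter (illustrative).
y = 5.0 + 1.2 * X[:, 0] + 0.9 * X[:, 3] + rng.normal(0, 0.3, 12)

# Main effect of each factor = mean(y at +1) - mean(y at -1).
effects = np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                    for j in range(11)])
for j in np.argsort(-np.abs(effects)):
    print(f"factor {j:2d}: effect = {effects[j]:+.2f}")
```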

Keywords: Bacillus tequilensis, CMCase, Submerged Fermentation, Optimization, Plackett-Burman Design, Response Surface Methodology.

5868 Comparing Transformational Leadership in Successful and Unsuccessful Companies

Authors: Gh. Jandaghi, H. Zarei Matin, A. Farjami

Abstract:

In this article, the problem and its importance are described, and transformational leadership is studied in light of leadership theories. Issues such as the definition of transformational leadership and its aspects are compared on the basis of the views of various experts, and transformational leadership is then examined in successful and unsuccessful companies. The methodology section covers the research method, hypotheses, population and statistical sample, and the research findings are analyzed using descriptive and inferential statistical methods in the framework of analytical tables. Finally, our conclusion is provided by considering the results of the statistical tests. The final result shows that transformational leadership is significantly higher in successful companies than in unsuccessful ones (P < 0.0001).

Keywords: Idealized influence, individualized considerations, inspirational motivation, intellectual stimulation, transformational leadership

5867 Estimation of PM2.5 Emissions and Source Apportionment Using Receptor and Dispersion Models

Authors: Swetha Priya Darshini Thammadi, Sateesh Kumar Pisini, Sanjay Kumar Shukla

Abstract:

Source apportionment using a dispersion model depends primarily on the quality of the emission inventory. In the present study, a CMB receptor model has been used to identify the sources of PM2.5, while the AERMOD dispersion model has been used to account for sources of PM2.5 missing from the emission inventory. A statistical approach has been developed to quantify the missing sources not considered in the emission inventory. The inventory of each grid was improved by adjusting emissions based on road lengths and on the deficit between measured and modelled concentrations. The results showed that in the CMB analyses, fugitive sources - soil and road dust - contribute significantly to ambient PM2.5 pollution. As a result, AERMOD significantly underestimated the ambient air concentration at most locations. The revised emission inventory showed a significant improvement in AERMOD performance, which is evident through statistical tests.

Keywords: CMB, GIS, AERMOD, PM2.5, fugitive, emission inventory.

5866 Statistical Models of Network Traffic

Authors: Barath Kumar, Oliver Niggemann, Juergen Jasperneite

Abstract:

Model-based approaches have been applied successfully to a wide range of tasks such as specification, simulation, testing, and diagnosis. But one bottleneck often prevents the introduction of these ideas: manual modeling is a non-trivial, time-consuming task. Automatically deriving models by observing and analyzing running systems is one possible way to remove this bottleneck. To derive a model automatically, some a priori knowledge about the model structure, i.e. about the system, must exist. Such a model formalism would be used as follows: (i) by observing the network traffic, a model of the long-term system behavior could be generated automatically; (ii) test vectors could be generated from the model; (iii) while the system is running, the model could be used to diagnose non-normal system behavior. The main contribution of this paper is the introduction of a model formalism called 'probabilistic regression automaton' suitable for the tasks mentioned above.

Keywords: Model-based approach, Probabilistic regression automata, Statistical models and Timed automata.

5865 sEMG Interface Design for Locomotion Identification

Authors: Rohit Gupta, Ravinder Agarwal

Abstract:

The surface electromyographic (sEMG) signal has the potential to identify human activities and intention. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. The paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, which is a front-end interface integrated circuit for ECG applications. Further, the sEMG signal was recorded from two lower-limb muscles for three locomotion modes, namely Plane Walk (PW), Stair Ascending (SA) and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work demonstrates the suitability of the proposed feature selection algorithm for locomotion recognition compared to the other existing feature vectors. The SVM classifier outperformed the other compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
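
The abstract does not state which class-dependent statistic is used, so the following sketch assumes a per-class Fisher-type separability score as a stand-in, followed by SVM classification on the selected features; the data are synthetic rather than recorded sEMG features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for sEMG feature vectors of three locomotion classes (PW, SA, SD).
X, y = make_classification(n_samples=600, n_features=24, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

def class_dependent_scores(X, y):
    """Illustrative class-dependent statistic: for each feature, the largest
    Fisher-type ratio of between-class separation to within-class spread,
    taken over one-vs-rest splits (not necessarily the paper's statistic)."""
    scores = np.zeros(X.shape[1])
    for c in np.unique(y):
        a, b = X[y == c], X[y != c]
        fisher = (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)
        scores = np.maximum(scores, fisher)
    return scores

top = np.argsort(-class_dependent_scores(X, y))[:8]   # keep the 8 best features
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
acc = cross_val_score(clf, X[:, top], y, cv=5).mean()
print(f"5-fold accuracy with selected features: {acc:.3f}")
```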

Keywords: Classifiers, feature selection, locomotion, sEMG.

5864 Statistical Analysis of Interferon-γ for the Effectiveness of an Anti-Tuberculous Treatment

Authors: Shishen Xie, Yingda L. Xie

Abstract:

Tuberculosis (TB) is a potentially serious infectious disease that remains a health concern. The Interferon Gamma Release Assay (IGRA) is a blood test to find out whether an individual is positive or negative for tuberculosis. This study applies statistical analysis to the clinical data on interferon-gamma levels of seventy-three subjects diagnosed with pulmonary TB and undergoing anti-tuberculous treatment. Data analysis is performed to determine whether there is a significant decline in interferon-gamma levels for the subjects over a period of six months, and to infer whether the anti-tuberculous treatment is effective.
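
A minimal sketch of the kind of analysis described, assuming simulated interferon-gamma values rather than the clinical data: a paired t-test (with a Wilcoxon signed-rank test as a nonparametric check) on baseline versus six-month levels.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 73
baseline = rng.lognormal(mean=1.0, sigma=0.6, size=n)            # IU/mL, illustrative
month6 = baseline * rng.lognormal(mean=-0.4, sigma=0.3, size=n)  # simulated decline

t, p_t = stats.ttest_rel(baseline, month6)     # paired t-test on the decline
w, p_w = stats.wilcoxon(baseline, month6)      # nonparametric check
print(f"paired t-test:        t = {t:.2f}, p = {p_t:.3g}")
print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p_w:.3g}")
```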

Keywords: Data analysis, interferon gamma release assay, statistical methods, tuberculosis infection.

5863 A Propagator Method like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

Authors: Sambit Prasad Kar, P. Palanisamy

Abstract:

In this paper, a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is postulated. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak search approach. The paper presents a variant of the Propagator Method (PM), in which a collaborative approach of SUMWE and the propagator method is applied in order to estimate the frequencies of multiple real-valued sine waves. A new data model is proposed in which the dimension of the signal subspace is equal to the number of frequencies present in the observation, whereas in the conventional MUSIC method for real-valued sinusoidal signals the signal subspace dimension is twice the number of frequencies. The statistical behaviour of the proposed method is analyzed, and an explicit expression for the asymptotic (large-sample) mean squared error (MSE), or variance, of the estimation error is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. The proposed method achieves sustained high estimation accuracy and frequency resolution at lower SNR, which is verified by simulation through comparison with the conventional MUSIC, ESPRIT and propagator methods.
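
The paper's SUMWE-based data model, which keeps the signal subspace dimension equal to the number of real frequencies, is not reproduced here. As a hedged illustration of the underlying ingredients only (a propagator obtained by least squares with no eigendecomposition, followed by a peak search over a pseudo-spectrum), the sketch below uses the conventional formulation in which each real sinusoid contributes a pair of complex exponentials:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(4)
f_true = np.array([0.12, 0.31])          # normalized frequencies of real sinusoids
N, M = 2000, 12                          # record length, sliding-window size
n = np.arange(N)
x = np.cos(2*np.pi*f_true[0]*n) + 0.8*np.cos(2*np.pi*f_true[1]*n) \
    + 0.05*rng.standard_normal(N)

# Snapshot matrix (M x K) from overlapping windows, then sample correlation.
K = N - M + 1
X = np.lib.stride_tricks.sliding_window_view(x, M).T[:, :K]
R = (X @ X.conj().T) / K

p = 2 * len(f_true)                      # each real sinusoid = 2 complex exponentials
R1, R2 = R[:, :p], R[:, p:]              # column partition of R
P = np.linalg.pinv(R1) @ R2              # propagator by least squares, no eigendecomposition
Q = np.hstack([P.conj().T, -np.eye(M - p)])  # rows approx. orthogonal to steering vectors

freqs = np.linspace(0.01, 0.49, 2000)
A = np.exp(1j * 2*np.pi * np.outer(np.arange(M), freqs))   # candidate steering vectors
spectrum = 1.0 / np.linalg.norm(Q @ A, axis=0) ** 2        # propagator pseudo-spectrum

peaks, _ = find_peaks(spectrum)
best = peaks[np.argsort(spectrum[peaks])[-len(f_true):]]   # simple peak search
print("estimated frequencies:", np.sort(freqs[best]))
```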

Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.

5862 Statistical Analysis of Different Configurations of Hybrid Doped Fiber Amplifiers

Authors: Inderpreet Kaur, Neena Gupta

Abstract:

Wavelength division multiplexing (WDM) technology, along with optical amplifiers, is used for optical communication systems in the S-band, C-band and L-band. To improve overall system performance, hybrid amplifiers consisting of cascaded TDFA and EDFA stages with different gain bandwidths are preferred for long-haul wavelength-multiplexed optical communication systems. This paper deals with the statistical analysis of different configurations of the hybrid amplifier, i.e. the TDFA-EDFA configuration and the EDFA-TDFA configuration. The one-way ANOVA method is used for the statistical analysis.
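
A minimal sketch of the one-way ANOVA step on two groups of simulated gain measurements (illustrative values, not the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Illustrative gain measurements (dB) for the two hybrid configurations.
tdfa_edfa = rng.normal(30.0, 1.2, 16)
edfa_tdfa = rng.normal(28.5, 1.2, 16)

f, p = stats.f_oneway(tdfa_edfa, edfa_tdfa)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4g}")
```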

Keywords: WDM, EDFA, TDFA, hybrid amplifier, one-way ANOVA.

5861 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses information on around 40,000 vehicles' specifications and operational environmental conditions, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
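
A compact sketch of the comparison procedure described, using synthetic regression data in place of the vehicle dataset: nested cross-validation (an inner loop for hyperparameter tuning, an outer loop for error estimation) for LR, KNN and an ANN, followed by a paired test on the outer-fold errors. The hyperparameter grids are illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for vehicle specification / operating-condition features and FC targets.
X, y = make_regression(n_samples=800, n_features=15, noise=10.0, random_state=0)

models = {
    "LR":  (make_pipeline(StandardScaler(), LinearRegression()), {}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 9]}),
    "ANN": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(32,), (64, 32)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=1)
scores = {}
for name, (pipe, grid) in models.items():
    inner = GridSearchCV(pipe, grid, cv=3) if grid else pipe   # inner loop tunes hyperparameters
    scores[name] = cross_val_score(inner, X, y, cv=outer,
                                   scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {-scores[name].mean():.2f}")

# Paired test on the outer-fold errors of two of the models.
t, p = stats.ttest_rel(scores["ANN"], scores["KNN"])
print(f"ANN vs KNN: t = {t:.2f}, p = {p:.3g}")
```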

Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.

5860 A Usability Testing Approach to Evaluate User-Interfaces in Business Administration

Authors: Salaheddin Odeh, Ibrahim O. Adwan

Abstract:

This interdisciplinary study is an investigation to evaluate user-interfaces in business administration. The study is implemented on two computerized business administration systems with two distinctive user-interfaces, so that differences between the two systems can be determined. Both systems, a commercial one and a prototype developed for the purpose of this study, deal with ordering of supplies, tendering procedures, issuing purchase orders, controlling the movement of stocks against their actual balances on the shelves, and editing them in their tabulations. In the second, suggested system, modern computer graphics and multimedia issues were taken into consideration to cover the drawbacks of the first system. To highlight differences between the two investigated systems regarding some chosen standard quality criteria, the study employs various statistical techniques and methods to evaluate the users' interaction with both systems. The study variables are divided into two groups: independent variables, representing the interfaces of the two systems, and dependent variables, embracing efficiency, effectiveness, satisfaction, error rate, etc.

Keywords: Evaluation and usability testing, software prototyping, statistical methods, user-interface design.

5859 Statistical Modeling for Permeabilization of a Novel Yeast Isolate for β-Galactosidase Activity Using Organic Solvents

Authors: Shweta Kumari, Parmjit S. Panesar, Manab B. Bera

Abstract:

The hydrolysis of lactose using β-galactosidase is one of the most promising biotechnological applications and has a wide range of potential applications in the food processing industries. However, due to the intracellular location of the yeast enzyme and expensive extraction methods, the industrial application of enzymatic hydrolysis processes is being hampered. The use of a permeabilization technique can help to overcome the problems associated with enzyme extraction and purification from yeast cells and to develop an economically viable process for the utilization of whole-cell biocatalysts in the food industries. In the present investigation, standardization of the permeabilization process of a novel yeast isolate was carried out using a statistical modelling approach known as Response Surface Methodology (RSM) to achieve maximal β-galactosidase activity. The optimum operating conditions of the permeabilization process for maximal β-galactosidase activity obtained by RSM were a 1:1 ratio of toluene (25%, v/v) and ethanol (50%, v/v), a temperature of 25.0 °C and a treatment time of 12 min, which gave an enzyme activity of 1.71 IU/mg DW.
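
A minimal sketch of the RSM workflow on simulated data (the factors, levels and response values are illustrative, not the permeabilization measurements): fit a second-order model to a central composite design in coded units and locate the stationary point.

```python
import numpy as np

# Central composite design in two coded factors (e.g. solvent ratio, temperature).
a = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-a, 0], [a, 0], [0, -a], [0, a],
                   [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]])

rng = np.random.default_rng(6)
x1, x2 = design.T
# Simulated enzyme-activity response with a maximum inside the region (illustrative).
y = 1.5 + 0.2*x1 + 0.1*x2 - 0.3*x1**2 - 0.25*x2**2 - 0.05*x1*x2 \
    + rng.normal(0, 0.02, len(design))

# Second-order response surface model fitted by least squares.
Xmat = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b = np.linalg.lstsq(Xmat, y, rcond=None)[0]
b0, b1, b2, b11, b22, b12 = b

# Stationary point: set the gradient of the fitted surface to zero.
H = np.array([[2*b11, b12], [b12, 2*b22]])
x_opt = np.linalg.solve(H, -np.array([b1, b2]))
fitted = Xmat @ b
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"stationary point (coded units): {x_opt.round(3)}, R^2 = {r2:.3f}")
```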

Keywords: β-galactosidase, optimization, permeabilization, response surface methodology, yeast.

5858 Process Capability Analysis by Using Statistical Process Control of Rice Polished Cylinder Turning Practice

Authors: S. Bangphan, P. Bangphan, T. Boonkang

Abstract:

Quality control helps industries improve product quality and productivity. Statistical Process Control (SPC) is one of the tools used to control the quality of products; here it is applied to bring a turning practice in a department of industrial engineering under control. In this research, the process controlled is the turning of rice polished cylinders manufactured on workshop machines. The varying measurements were recorded for a number of samples of rice polished cylinders obtained from a number of trials of the turning practice. By adopting the SPC technique, the process is finally brought under control and its process capability is improved.
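
A short sketch of the SPC computations involved, assuming subgroups of five diameter measurements and illustrative specification limits (not the paper's data): X-bar and R control limits from the standard Shewhart constants, and the capability indices Cp and Cpk.

```python
import numpy as np

rng = np.random.default_rng(7)
# 25 subgroups of 5 turned-cylinder diameter measurements (mm), illustrative.
data = rng.normal(30.00, 0.02, size=(25, 5))
LSL, USL = 29.94, 30.06          # assumed specification limits, not the paper's

xbar, ranges = data.mean(axis=1), np.ptp(data, axis=1)
xbarbar, rbar = xbar.mean(), ranges.mean()

# Shewhart constants for subgroup size n = 5.
A2, D3, D4, d2 = 0.577, 0.0, 2.114, 2.326
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

sigma_hat = rbar / d2            # within-subgroup sigma estimate
Cp = (USL - LSL) / (6 * sigma_hat)
Cpk = min(USL - xbarbar, xbarbar - LSL) / (3 * sigma_hat)

print(f"X-bar chart limits: {lcl_x:.4f} .. {ucl_x:.4f}")
print(f"R chart limits:     {lcl_r:.4f} .. {ucl_r:.4f}")
print(f"Cp = {Cp:.2f}, Cpk = {Cpk:.2f}")
print("points out of control:", int(np.sum((xbar > ucl_x) | (xbar < lcl_x))))
```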

Keywords: Rice polished cylinder, statistical process control, control charts, process capability.

5857 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral image data acquired over a long period of time, assuming there is no considerable change during that time period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the estimated probabilistic model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is to obtain a large enough number of datasets for multivariate distribution modelling, since a large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarm. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
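
A simplified sketch of the per-pixel detection step on simulated data (co-registration and the 8-neighborhood refinement are omitted): estimate a multivariate Gaussian for each pixel from the baseline series, then flag pixels in a later acquisition whose GLR statistic, which for a mean-shift alternative with known covariance reduces to the squared Mahalanobis distance, exceeds a chi-square threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
T, H, W, B = 24, 64, 64, 4            # time steps, image size, spectral bands

# Baseline series (assumed change-free) and a later acquisition.
series = rng.normal(0.3, 0.05, size=(T, H, W, B))
new = rng.normal(0.3, 0.05, size=(H, W, B))
new[20:30, 40:50] += 0.15             # simulated new construction in one patch

# Per-pixel Gaussian model estimated from the baseline series.
mu = series.mean(axis=0)                                   # (H, W, B)
diff = series - mu
cov = np.einsum('thwi,thwj->hwij', diff, diff) / (T - 1)   # (H, W, B, B)
cov += 1e-6 * np.eye(B)                                    # regularization

# GLR statistic for a mean shift with known covariance reduces to the
# squared Mahalanobis distance, chi-square with B degrees of freedom under H0.
d = new - mu
g = np.einsum('hwi,hwij,hwj->hw', d, np.linalg.inv(cov), d)
changed = g > stats.chi2.ppf(0.999, df=B)
print("changed pixels detected:", int(changed.sum()), "of", H * W)
```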

Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.

5856 The Recreation Technique Model from the Perspective of Environmental Quality Elements

Authors: G. Gradinaru, S. Olteanu

Abstract:

Improvements in the quality of environmental elements could increase the recreational opportunities in a certain area (destination). The need-for-recreation technique focuses on the choice of certain destinations for recreational purposes. The basic exchange taken into consideration is the one between the satisfaction gained after staying in that area and the value expressed in the money and time allocated. The number of tourists in the respective area, the duration of their stay and the money spent, including transportation, provide information on how individuals rank the place or certain aspects of the area (such as the quality of the environmental elements). For the statistical analysis of the environmental benefits offered by an area through the need-for-recreation technique, the following stages are suggested: characterization of the reference area based on the statistical variables considered, and estimation of the environmental benefit by comparing the reference area with other similar areas (having the same environmental characteristics) from the perspective of the statistical variables considered. The comparison model in the recreation technique faces a series of difficulties, which concern the choice of the reference area and the correct transformation of time into money.

Keywords: Comparison in recreation technique, the quality of the environmental elements, statistical analysis model.

5855 Screening of Process Variables for the Production of Extracellular Lipase from Palm Oil by Trichoderma Viride using Plackett-Burman Design

Authors: R. Rajendiran, S. Gayathri devi, B.T. SureshKumar, V. Arul Priya

Abstract:

Plackett-Burman statistical screening of media constituents and operational conditions for extracellular lipase production by the isolate Trichoderma viride was carried out in submerged fermentation. This statistical design is used in the early stages of experimentation to screen out unimportant factors from a large number of possible factors; it allows screening of up to 'n-1' variables in just 'n' experiments. Regression coefficients and t-values were calculated by subjecting the experimental data to statistical analysis using Minitab version 15. The effects of nine process variables were studied in twelve experimental trials. A maximum lipase activity of 7.83 μmol/ml/min was obtained in the 6th trial. A Pareto chart illustrates the order of significance of the variables affecting lipase production. The present study concludes that the most significant variables affecting lipase production were palm oil, yeast extract, K2HPO4, MgSO4 and CaCl2.

Keywords: lipase, submerged fermentation, statistical optimization, Trichoderma viride

5854 Series-Parallel Systems Reliability Optimization Using Genetic Algorithm and Statistical Analysis

Authors: Essa Abrahim Abdulgader Saleem, Thien-My Dao

Abstract:

The main objective of this paper is to optimize series-parallel system reliability using a Genetic Algorithm (GA) and statistical analysis, considering system reliability constraints which involve the redundancy levels of the selected components, total cost and total weight. To perform this work, firstly, the mathematical model which maximizes system reliability subject to maximum system cost and maximum system weight constraints is presented; secondly, a statistical analysis is used to optimize the GA parameters; and thirdly, the GA is used to optimize series-parallel system reliability. The objective is to determine the strategy for choosing the redundancy level of each subsystem so as to maximize the overall system reliability subject to total cost and total weight constraints. Finally, the reliability optimization results for the series-parallel system case study are shown, and comparisons with previous results are presented to demonstrate the performance of our GA.
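
The GA itself is not reproduced here; the sketch below shows, with illustrative component data and budgets, the objective such a GA would evaluate: the reliability of a series arrangement of parallel (redundant) subsystems, with the cost and weight constraints enforced by a penalty. A small brute-force search stands in for the GA.

```python
import numpy as np

# Illustrative subsystem data: component reliability, cost, weight.
r = np.array([0.90, 0.85, 0.92, 0.80])
c = np.array([4.0, 6.0, 5.0, 3.0])
w = np.array([2.0, 3.0, 2.5, 4.0])
C_MAX, W_MAX = 60.0, 45.0              # assumed budget constraints

def system_reliability(n):
    """Series system of parallel subsystems: subsystem i works if at least
    one of its n[i] identical components works."""
    return np.prod(1.0 - (1.0 - r) ** n)

def fitness(n):
    """Objective a GA would maximize: reliability, with a death penalty
    for candidates violating the cost or weight budget."""
    if (c @ n) > C_MAX or (w @ n) > W_MAX:
        return 0.0
    return system_reliability(n)

# Brute-force check of the penalized objective on a tiny search space
# (a GA would explore this space instead of enumerating it).
best = max(((fitness(np.array(n)), n)
            for n in np.ndindex(4, 4, 4, 4) if min(n) >= 1),
           key=lambda t: t[0])
print(f"best redundancy levels {best[1]} -> reliability {best[0]:.4f}")
```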

Keywords: Genetic algorithm, optimization, reliability, statistical analysis.

5853 N-Grams: A Tool for Repairing Word Order Errors in Ill-formed Texts

Authors: Theologos Athanaselis, Stelios Bakamidis, Ioannis Dologlou, Konstantinos Mamouras

Abstract:

This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. A possible way of reordering the words is to use all the permutations. The problem is that for a sentence of length N words the number of all permutations is N!. The novelty of this method concerns the use of an efficient confusion matrix technique for reordering the words. The confusion matrix technique has been designed in order to reduce the search space among permuted sentences. The reduction of the search space is achieved using the statistical inference of N-grams. The results of this technique are very interesting and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes, a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
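
The confusion matrix pruning is not reproduced here; the sketch below shows only the core scoring step on a toy example: enumerate candidate orderings of a short sentence and keep the one with the most trigram hits against a (stand-in) trigram language model.

```python
from itertools import permutations

# Toy trigram "language model": the set of trigrams seen in a training corpus.
known_trigrams = {
    ("she", "is", "reading"), ("is", "reading", "a"), ("reading", "a", "book"),
    ("he", "is", "writing"), ("is", "writing", "a"),
}

def trigram_hits(words):
    """Number of word trigrams of the candidate sentence found in the model."""
    return sum((a, b, c) in known_trigrams
               for a, b, c in zip(words, words[1:], words[2:]))

ill_formed = ["reading", "she", "a", "is", "book"]   # word-order errors

# For a short sentence all permutations are feasible; the paper's confusion
# matrix technique prunes this set before scoring.
best = max(permutations(ill_formed), key=trigram_hits)
print("repaired order:", " ".join(best), "| trigram hits:", trigram_hits(best))
```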

Keywords: Permutations filtering, Statistical language model N-grams, Word order errors, TOEFL

5852 A Mark-Up Approach to Add Value

Authors: Ivaylo I. Atanasov, Evelina N. Pencheva

Abstract:

This paper presents a mark-up approach to service creation in Next Generation Networks. The approach allows deriving added value from network functions exposed by Parlay/OSA (Open Service Access) interfaces. With OSA interfaces, service logic scripts may be executed on both call-related and call-unrelated events. To illustrate the approach, XML-based language constructions for data and method definitions, flow control, time measuring and supervision, and database access are given, and an example of an OSA application is considered.

Keywords: Service creation, mark-up approach.

5851 An Alternative Method for Generating Almost Infinite Sequence of Gaussian Variables

Authors: Nyah C. Temaneh, F. A. Phiri, E. Ruhunga

Abstract:

Most of the well-known methods for generating Gaussian variables require at least one standard uniformly distributed value for each Gaussian variable generated. The length of the random number generator therefore limits the number of independent Gaussian-distributed variables that can be generated, while the statistical analysis of complex systems requires a large number of random numbers. We propose an alternative simple method of generating an almost infinite number of Gaussian-distributed variables using a limited number of standard uniformly distributed random numbers.

Keywords: Gaussian variable, statistical analysis, simulation of communication networks, random numbers.

5850 A Fuzzy Tumor Volume Estimation Approach Based On Fuzzy Segmentation of MR Images

Authors: Sara A. Yones, Ahmed S. Moussa

Abstract:

Quantitative measurements of tumors in general, and of tumor volume in particular, become more realistic with the use of magnetic resonance imaging, especially when the morphological changes of the tumor become irregular and difficult to assess by clinical examination. However, tumor volume estimation strongly depends on the image segmentation, which is fuzzy by nature. In this paper, a fuzzy approach is presented for tumor volume segmentation based on the fuzzy connectedness algorithm. The fuzzy affinity matrix resulting from segmentation is then used to estimate a fuzzy volume based on a certainty parameter, an alpha cut, defined by the user. The proposed method was shown to strongly affect treatment decisions. A statistical analysis was performed in this study to validate the results against a manual method for volume estimation, and the importance of using the alpha cut is further explained.
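
The fuzzy connectedness computation itself is not shown here; the sketch below illustrates only the volume-estimation step under assumed voxel spacing: given a per-voxel membership map, the user-chosen alpha cut selects the voxels counted toward the volume, and a membership-weighted sum gives a fuzzy volume.

```python
import numpy as np

rng = np.random.default_rng(9)
# Stand-in for a fuzzy connectedness (membership) map in [0, 1] for one tumor.
membership = np.clip(rng.normal(0.2, 0.15, size=(64, 64, 24)), 0, 1)
membership[20:35, 25:40, 8:16] = np.clip(rng.normal(0.85, 0.1, (15, 15, 8)), 0, 1)

voxel_volume_mm3 = 1.0 * 1.0 * 3.0   # in-plane spacing x slice thickness (assumed)
alpha = 0.5                          # user-defined certainty parameter (alpha cut)

cut = membership >= alpha
crisp_volume = cut.sum() * voxel_volume_mm3
fuzzy_volume = membership[cut].sum() * voxel_volume_mm3   # membership-weighted volume

print(f"alpha = {alpha}: crisp volume = {crisp_volume:.0f} mm^3, "
      f"fuzzy volume = {fuzzy_volume:.0f} mm^3")
```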

Keywords: Alpha Cut, Fuzzy Connectedness, Magnetic Resonance Imaging, Tumor volume estimation.

5849 Brain MRI Segmentation and Lesions Detection by EM Algorithm

Authors: Mounira Rouaïnia, Mohamed Salah Medjram, Noureddine Doghmane

Abstract:

In Multiple Sclerosis, pathological changes in the brain result in deviations in signal intensity on Magnetic Resonance Images (MRI). Quantitative analysis of these changes and their correlation with clinical findings provides important information for diagnosis. This constitutes the objective of our work, and a new approach is developed. After enhancement of the image contrast and extraction of the brain by a mathematical morphology algorithm, we proceed to brain segmentation. Our approach is based on building a statistical model from the data itself for normal brain MRI, including clustering by tissue type. We then detect signal abnormalities (MS lesions) as a rejection class containing voxels that are not explained by the built model. We validate the method on MR images of Multiple Sclerosis patients by comparing its results with those of human expert segmentation.
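
A minimal sketch of the segmentation-with-rejection idea on simulated intensities (no preprocessing, single-channel data): fit a Gaussian mixture by EM for the normal tissue classes and place voxels with low likelihood under the fitted model into a rejection class of candidate lesions. The rejection threshold is an assumed quantile.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
# Simulated brain-tissue intensities: three normal classes plus a few
# hyperintense "lesion" voxels (illustrative, not real MRI data).
normal = np.concatenate([rng.normal(m, s, n) for m, s, n in
                         [(0.25, 0.04, 4000), (0.55, 0.05, 6000), (0.75, 0.04, 5000)]])
lesions = rng.normal(0.95, 0.03, 60)
intensities = np.concatenate([normal, lesions]).reshape(-1, 1)

# EM fit of a 3-component Gaussian mixture for the normal tissue classes.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)

# Rejection class: voxels poorly explained by the model (low log-likelihood).
loglik = gmm.score_samples(intensities)
threshold = np.percentile(loglik, 1.0)     # assumed rejection quantile
rejected = loglik < threshold
print("voxels assigned to rejection class:", int(rejected.sum()))
```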

Keywords: EM algorithm, Magnetic Resonance Imaging, Mathematical morphology, Markov random model.

5848 Pakistan Sign Language Recognition Using Statistical Template Matching

Authors: Aleem Khalid Alvi, M. Yousuf Bin Azhar, Mehmood Usman, Suleman Mumtaz, Sameer Rafiq, RaziUr Rehman, Israr Ahmed

Abstract:

Sign language recognition has been a topic of research since the first data glove was developed. Many researchers have attempted to recognize sign language through various techniques. However, none of them have ventured into the area of Pakistan Sign Language (PSL). The Boltay Haath project aims at recognizing PSL gestures using Statistical Template Matching. The primary input device is the DataGlove5 developed by 5DT. Alternative approaches use camera-based recognition which, being sensitive to environmental changes, is not always a good choice. This paper explains the use of Statistical Template Matching for gesture recognition in Boltay Haath. The system recognizes one-handed alphabet signs from PSL.
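
The Boltay Haath matching procedure is not detailed in the abstract, so the following is a generic sketch of statistical template matching on simulated glove readings: store each sign's per-sensor mean and variance as a template and assign a new reading to the template with the smallest variance-normalized distance. The sign names and sensor values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
SENSORS = 5                                     # flex sensors on a 5DT-style glove

# Simulated training readings for three one-handed alphabet signs.
true_means = {"alif": [0.9, 0.8, 0.2, 0.2, 0.1],
              "bay":  [0.2, 0.9, 0.9, 0.3, 0.2],
              "jeem": [0.1, 0.2, 0.8, 0.9, 0.9]}
train = {g: rng.normal(m, 0.05, size=(40, SENSORS)) for g, m in true_means.items()}

# Statistical templates: per-gesture mean and variance of each sensor.
templates = {g: (x.mean(axis=0), x.var(axis=0) + 1e-6) for g, x in train.items()}

def classify(reading):
    """Pick the template with the smallest variance-normalized distance."""
    return min(templates,
               key=lambda g: np.sum((reading - templates[g][0]) ** 2 / templates[g][1]))

sample = rng.normal(true_means["bay"], 0.05)
print("recognized sign:", classify(sample))
```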

Keywords: Gesture Recognition, Pakistan Sign Language, DataGlove, Human Computer Interaction, Template Matching, Boltay Haath

5847 Maximizer of the Posterior Marginal Estimate of Phase Unwrapping Based On Statistical Mechanics of the Q-Ising Model

Authors: Yohei Saika, Tatsuya Uezu

Abstract:

We constructed a method of phase unwrapping for a typical wave-front by utilizing the maximizer of the posterior marginal (MPM) estimate corresponding to equilibrium statistical mechanics of the three-state Ising model on a square lattice on the basis of an analogy between statistical mechanics and Bayesian inference. We investigated the static properties of an MPM estimate from a phase diagram using Monte Carlo simulation for a typical wave-front with synthetic aperture radar (SAR) interferometry. The simulations clarified that the surface-consistency conditions were useful for extending the phase where the MPM estimate was successful in phase unwrapping with a high degree of accuracy and that introducing prior information into the MPM estimate also made it possible to extend the phase under the constraint of the surface-consistency conditions with a high degree of accuracy. We also found that the MPM estimate could be used to reconstruct the original wave-fronts more smoothly, if we appropriately tuned hyper-parameters corresponding to temperature to utilize fluctuations around the MAP solution. Also, from the viewpoint of statistical mechanics of the Q-Ising model, we found that the MPM estimate was regarded as a method for searching the ground state by utilizing thermal fluctuations under the constraint of the surface-consistency condition.

Keywords: Bayesian inference, maximizer of the posterior marginal estimate, phase unwrapping, Monte Carlo simulation, statistical mechanics

5846 Model-Based Small Area Estimation with Application to Unemployment Estimates

Authors: Hichem Omrani, Philippe Gerber, Patrick Bousch

Abstract:

The problem of Small Area Estimation (SAE) is complex because of various information sources and insufficient data. In this paper, an approach to SAE is presented for decision-making at national, regional and local levels. We propose an Empirical Best Linear Unbiased Predictor (EBLUP) as an estimator in order to combine several information sources to evaluate various indicators. First, we present the urban audit project and its environmental, social and economic indicators. Second, we propose an approach for decision-making in order to estimate the indicators. An application is used to validate the theoretical proposal. Finally, a decision support system based on an open-source environment is presented.

Keywords: Small area estimation, statistical method, sampling, empirical best linear unbiased predictor (EBLUP), decision-making.

5845 Monitoring Patents Using the Statistical Process Control

Authors: Stephanie Russo Fabris, Edmara Thays Neres Menezes, Ruirogeres dos Santos Cruz, Lucio Leonardo Siqueira Santos, Suzana Leitao Russo

Abstract:

Statistical process control (SPC) is one of the most powerful tools developed to assist in the effective control of quality; it involves collecting, organizing and interpreting data during production. This article aims to show how industries can use SPC to control and continuously improve product quality through monitoring of production, which can detect deviations in the parameters representing the process, thereby reducing the amount of off-specification products and thus the costs of production. This study aimed to conduct a technological forecast in order to characterize the research being done related to SPC. The survey was conducted in the Spacenet, WIPO and National Institute of Industrial Property (INPI) databases. The largest numbers of filings come from United States applicants and from deposits via the PCT, and the classification section presented in greatest abundance was section F.

Keywords: Statistical Process Control, Industries

5844 Using the Simple Fixed Rate Approach to Solve Economic Lot Scheduling Problem under the Basic Period Approach

Authors: Yu-Jen Chang, Yun Chen, Hei-Lam Wong

Abstract:

The Economic Lot Scheduling Problem (ELSP) is a valuable mathematical model that can support decision-makers in making scheduling decisions. The basic period approach is effective for solving the ELSP. The assumption in applying the basic period approach is that a product must be produced at its maximum production rate. However, a product can lower its production rate to reduce the average total cost when a facility has extra idle time. Past research has discussed how a product adjusts its production rate under the common cycle approach. To the best of our knowledge, no studies have addressed how a product lowers its production rate under the basic period approach; this research is the first paper to discuss this topic. The research develops a simple fixed rate approach that adjusts the production rate of a product under the basic period approach to solve the ELSP. Our numerical example shows that our approach can find a better solution than the traditional basic period approach. Our mathematical model, which applies the fixed rate approach under the basic period approach, can serve as a reference for other related research.

Keywords: Economic Lot, Basic Period, Genetic Algorithm, Fixed Rate.
