Search results for: statistical methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4912


4162 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to predict what might happen under different conditions or decisions. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we use simulated data to analyse the computational problems associated with estimating the parameters, an issue of great importance for the application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler: given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
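
As a rough illustration of the maximum likelihood step, the sketch below fits the underlying two-parameter Weibull density to simulated data with SciPy. It is only a minimal stand-in for the paper's inference on the diffusion process itself, and the parameter values are assumed for illustration.

```python
# Sketch (not the authors' code): maximum likelihood estimation of the
# two-parameter Weibull density on simulated data, illustrating the ML
# methodology the abstract applies to the diffusion model's parameters.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
true_shape, true_scale = 1.8, 5.0   # hypothetical parameters
sample = weibull_min.rvs(true_shape, scale=true_scale, size=2000, random_state=rng)

# MLE with the location fixed at zero (two-parameter form)
shape_hat, loc_hat, scale_hat = weibull_min.fit(sample, floc=0)
print(f"shape: {shape_hat:.3f}  scale: {scale_hat:.3f}")
```
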

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trend functions, two-parameter Weibull density function.

4161 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks

Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian

Abstract:

Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including the recently developed artificial intelligence methods. In recent years, optimization algorithms such as colony algorithms, genetic algorithms, and imperialist competitive algorithms have been used to minimize artificial neural network errors. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils, with parameters such as pile diameter, pile buried length, eccentricity of load, and undrained shear resistance of soil, were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks compared to back-propagation artificial neural networks as well as the Broms and Hansen methods.

Keywords: Lateral bearing capacity, short pile, clayey soil, artificial neural network, Imperialist competition algorithm.

4160 A Comparison of Energy Calculations for a Single-Family Detached Home with Two Energy Simulation Methods

Authors: Amir Sattari

Abstract:

For newly produced houses and energy renovations, an energy calculation needs to be conducted. This is done to verify whether the energy consumption of the house is in line with the norms, i.e. the energy targets set for 2020 and 2050. The main purpose of this study is to confirm whether easy-to-use energy calculation software or hand calculations, as used by small companies or individuals, give reasonable results compared to the advanced energy simulation programs used by researchers or bigger companies. There are different methods for calculating energy consumption. In this paper, two energy calculation programs are used, and the relation of energy consumption to solar radiation is compared. A hand calculation is also done to validate whether hand calculations are still reasonable. The two computer programs are TMF Energi (the simple energy calculation tool used by small companies or individuals) and IDA ICE - Indoor Climate and Energy (the advanced energy simulation program used by researchers or larger companies). The calculations are done for a standard house from the Swedish house supplier Fiskarhedenvillan. The method is based on using the same conditions and inputs in the different calculation forms so that the results can be compared and verified. The house was oriented in different directions to see how the orientation affects energy consumption in the different methods. The results of the simulations are close to each other, and the hand calculation differs from the computer programs by only 5%. Even if the solar factors differ due to the orientation of the house, the energy calculation results from the different computer programs and even the hand calculation method are in line with each other.
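
As an illustration of the kind of hand calculation referred to above, the sketch below estimates annual heating demand from transmission and ventilation losses scaled by degree hours. Every input value is assumed for illustration and does not describe the Fiskarhedenvillan house or the TMF Energi / IDA ICE models.

```python
# Sketch of a simplified hand calculation (hypothetical inputs, not the
# studied house): annual heating demand from transmission and ventilation
# losses scaled by degree hours.
RHO_CP_AIR = 0.33          # Wh/(m3*K), volumetric heat capacity of air
UA_total = 150.0           # W/K, sum of U*A over the envelope (assumed)
vent_flow = 180.0          # m3/h, ventilation air flow (assumed)
heat_recovery = 0.8        # ventilation heat-recovery efficiency (assumed)
degree_hours = 90_000      # Kh/year for the assumed climate and indoor set point

loss_coeff = UA_total + RHO_CP_AIR * vent_flow * (1 - heat_recovery)   # W/K
annual_heating_kwh = loss_coeff * degree_hours / 1000                  # kWh/year
print(f"Estimated annual heating demand: {annual_heating_kwh:.0f} kWh")
```
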

Keywords: Energy calculation, energy consumption, energy simulation, IDA ICE, TMF Energi.

4159 3D Shape Knitting: Loop Alignment on a Surface with Positive Gaussian Curvature

Authors: C. T. Cheung, R. K. P. Ng, T. Y. Lo, Zhou Jinyun

Abstract:

This paper aims at manipulating loop alignment in knitting a three-dimensional (3D) shape according to its geometry. Two loop alignment methods are introduced to handle a surface with positive Gaussian curvature. As weft knitting is a two-dimensional (2D) knitting mechanism in which the knitting cam carrying the feeders moves in only two directions, left and right, the knitted fabric generated grows in width and length but not in depth. Therefore, a 3D shape has to be flattened onto a 2D plane with its surface area preserved before knitting. On this flattened plane, dimensional measurements are taken for loop alignment. The way these measurements are taken gives rise to two different loop alignment methods. In this paper, only the plain knitted structure was considered. Each knitted loop was taken as a basic unit for loop alignment in order to achieve the required geometric dimensions, without the inclusion of other stitches, which give textural dimensions to the fabric. The two loop alignment methods were tested and compared. Only one of them can successfully preserve the dimensions of the shape.

Keywords: 3D knitting, 3D shape, loop alignment, positive Gaussian curvature.

4158 Calculation of Voided Slabs Rigidities

Authors: Gee-Cheol Kim, Joo-Won Kang

Abstract:

A theoretical study of the rigidities of slabs with circular voids oriented in the longitudinal and in the transverse direction is discussed. Equations are presented for predicting the bending and torsional rigidities of voided slabs. This paper summarizes the results of an extensive literature search and an initial review of the current methods of analyzing voided slabs. The various methods of calculating the equivalent plate parameters, which are necessary for two-dimensional analysis, are also reviewed. Static deflections of voided slabs are shown to be in good agreement with the proposed equations.

Keywords: voided slab, bending rigidity, torsional rigidity, orthotropic plate

4157 Rapid Processing Techniques Applied to Sintered Nickel Battery Technologies for Utility Scale Applications

Authors: J. D. Marinaccio, I. Mabbett, C. Glover, D. Worsley

Abstract:

Through the use of novel, rapid processing techniques such as screen printing and near-infrared (NIR) radiative curing, the process time for sintering nickel plaques, applicable to alkaline nickel battery chemistries, has been drastically reduced from in excess of 200 minutes with conventional convection methods to below 2 minutes using NIR curing. Steps have also been taken to remove the need for forming gas as a reducing agent by implementing carbon as an in-situ reducing agent within the ink formulation.

Keywords: Batteries, energy, iron, nickel, storage.

4156 Body Composition Response to Lower Body Positive Pressure Training in Obese Children

Authors: Basant H. El-Refay, Nabeel T. Faiad

Abstract:

Background: The high prevalence of obesity in Egypt has a great impact on the health care system and on the economic and social situation. Evidence suggests that even a moderate amount of weight loss can be useful. Aim of the study: To analyze the effects of lower body positive pressure supported treadmill training, conducted with a hypocaloric diet, on the body composition of obese children. Methods: Thirty children aged between 8 and 14 years were randomly assigned into two groups: an intervention group (15 children) and a control group (15 children). All of them were evaluated using body composition analysis through bioelectric impedance. The following parameters were measured before and after the intervention: body mass, body fat mass, muscle mass, body mass index (BMI), percentage of body fat, and basal metabolic rate (BMR). The study group exercised with an antigravity treadmill three times a week for 2 months and participated in a hypocaloric diet program. The control group participated in a hypocaloric diet program only. Results: Both groups showed a significant reduction in body mass, body fat mass and BMI. Only the study group showed a significant reduction in percentage of body fat (p = 0.043). Changes in muscle mass and BMR did not reach statistical significance in either group. No significant differences were observed between the groups except for muscle mass (p = 0.049) and BMR (p = 0.042), favoring the study group. Conclusion: Both programs proved effective in the reduction of obesity indicators, but lower body positive pressure supported treadmill training was more effective in improving muscle mass and BMR.

Keywords: Children, Hypocaloric diet, Lower body positive pressure supported treadmill, obesity.

4155 Efficient Tools for Managing Uncertainties in Design and Operation of Engineering Structures

Authors: J. Menčík

Abstract:

Actual loads, material characteristics and other quantities often differ from the design values. This can cause a worse function, a shorter life or the failure of a civil engineering structure, a machine, a vehicle or another appliance. The paper shows the main causes of these uncertainties and deviations and presents a systematic approach and efficient tools for their elimination or for mitigation of their consequences. Emphasis is put on the design stage, which is the most important stage for ensuring reliability. Principles of robust design and important tools are explained, including FMEA, sensitivity analysis and probabilistic simulation methods. The lifetime prediction of long-life objects can be improved by long-term monitoring of the load response and damage accumulation in operation. The condition evaluation of engineering structures, such as bridges, is often based on visual inspection and verbal description. Here, methods based on fuzzy logic can reduce the subjective influences.
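
A minimal sketch of the probabilistic simulation idea mentioned above: a Monte Carlo estimate of the failure probability for an assumed load-resistance model. The distributions and their parameters are illustrative only and are not taken from the paper.

```python
# Sketch: Monte Carlo estimate of failure probability for an assumed
# load-resistance model (illustrative distributions, not from the paper).
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
resistance = rng.normal(loc=280.0, scale=25.0, size=n)         # kN, assumed
load = rng.lognormal(mean=np.log(180.0), sigma=0.15, size=n)    # kN, assumed

p_failure = np.mean(load > resistance)
print(f"Estimated probability of failure: {p_failure:.2e}")
```
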

Keywords: Design, fuzzy methods, Monte Carlo, reliability, robust design, sensitivity analysis, simulation, uncertainties.

4154 Urbanization and Income Inequality in Thailand

Authors: Acumsiri Tantiakrnpanit

Abstract:

This paper aims to examine the relationship between urbanization and income inequality in Thailand during the period 2002–2020, using a panel of data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night band (VIIRS-DNB) satellite for 19 selected years. This paper employs two different definitions to identify urban areas: 1) Urban areas defined by Thailand's National Statistical Office (LFS), and 2) Urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellite. The second method includes two sub-categories: 2.1) Determining urban areas by calculating nighttime light density with a population density of 300 people per square kilometer, and 2.2) Calculating urban areas based on nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis based on Ordinary Least Squares (OLS), fixed effects, and random effects models reveals a consistent U-shaped relationship between income inequality and urbanization. The findings from the econometric analysis demonstrate that urbanization or population density has a significant and negative impact on income inequality. Moreover, the square of urbanization shows a statistically significant positive impact on income inequality. Additionally, there is a negative association between logarithmically transformed income and income inequality. This paper also proposes the inclusion of satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analysis of spatial relationships.
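
A minimal sketch of the panel specification described above, estimated as OLS with province and year fixed effects and a quadratic urbanization term. The file and column names (thailand_panel.csv, gini, urban_share, income, province, year) are hypothetical and do not reproduce the authors' dataset or exact model.

```python
# Sketch of the fixed-effects specification with a quadratic urbanization
# term (hypothetical column names; not the authors' dataset or exact model).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("thailand_panel.csv")   # assumed columns: province, year, gini, urban_share, income

model = smf.ols(
    "gini ~ urban_share + I(urban_share**2) + np.log(income) + C(province) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["province"]})
print(model.summary())
```
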

Keywords: Income inequality, nighttime light, population density, Thailand, urbanization.

4153 Identification of Factors Influencing Company's Competitiveness

Authors: D. Ščeulovs, E. Gaile-Sarkane

Abstract:

Fast development of technologies, economic globalization and many other external circumstances stimulate a company's competitiveness. One of the major trends in today's business is the shift to the exploitation of the Internet and the electronic environment for entrepreneurial needs. The latest research confirms that the e-environment provides a range of possibilities and opportunities for companies, especially for micro-, small- and medium-sized companies, which have limited resources. The usage of e-tools raises the effectiveness and the profitability of an organization, as well as its competitiveness. In the electronic market, as in the classic one, there are factors, such as globalization, development of new technology, price-sensitive consumers, the Internet, and new distribution and communication channels, that influence entrepreneurship. As a result of e-environment development, e-commerce and e-marketing grow as well.

Objective of the paper: To describe and identify the factors influencing a company's competitiveness in the e-environment.

Research methodology: The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc. The theoretical and methodological background of the research is formed by scientific research and publications, such as those from mass media and professional literature, statistical information from legal institutions, as well as information collected by the authors during the surveying process. Research result: The authors detected and classified the factors influencing competitiveness in the e-environment.

In this paper, the authors present their findings based on theoretical, scientific, and field research. The authors conducted research on e-environment utilization among Latvian enterprises.

Keywords: Competitiveness, e-environment, factors, factor analysis.

4152 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization

Authors: V. H. Mankar, T. S. Das, S. K. Sarkar

Abstract:

In this paper, we propose a novel blind watermarking architecture oriented towards hardware implementation in VLSI. In order to facilitate this hardware realization, the cellular automata (CA) concept is introduced. CA have already been accepted as an attractive structure for VLSI implementation because of their modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are very few in number in spite of their strong resiliency against signal impairments, because of the computational cost and complexity associated with their filter banks and lifting techniques. The concept of cellular automata theory has been incorporated to form a new transform domain technique, i.e., the Cellular Automata Transform (CAT). Since CA provide spreading sequences with very low cross-correlation properties, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must also be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual CA-based blocks of the algorithm provide better results than some other methods, irrespective of the hardware or software technique. The Cellular Automata Transform, the CA-based PN sequence generator, and the CA-based ECC are the requisite blocks that are developed not only to meet the reliable hardware requirements but also to provide the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the existing allocated bandwidth in the data transmission channel in a more efficient manner.
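
To make the pseudorandom (PN) sequence idea concrete, the sketch below generates bit sequences from an elementary cellular automaton (rule 30) and checks their normalized zero-lag cross-correlation. It is only an illustration of a CA-based PN generator; the rule, seeding, and correlation check are assumptions, not the architecture proposed in the paper.

```python
# Sketch: an elementary cellular automaton (rule 30) used as a pseudorandom
# bit-sequence generator, with a normalized zero-lag cross-correlation check
# between two differently seeded sequences. Illustrative only.
import numpy as np

def ca_pn_sequence(seed_bits, length, rule=30):
    """Generate a bit sequence by sampling the centre cell of an elementary CA."""
    rule_table = [(rule >> i) & 1 for i in range(8)]
    state = np.array(seed_bits, dtype=np.uint8)
    centre = len(state) // 2
    out = np.empty(length, dtype=np.uint8)
    for t in range(length):
        out[t] = state[centre]
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = (left << 2) | (state << 1) | right        # neighbourhood code 0..7
        state = np.array([rule_table[i] for i in idx], dtype=np.uint8)
    return out

seq_a = ca_pn_sequence([0] * 40 + [1] + [0] * 40, 4096)
seq_b = ca_pn_sequence([0] * 35 + [1] + [0] * 45, 4096)
a, b = 2.0 * seq_a - 1, 2.0 * seq_b - 1                 # map bits to +/-1
print("normalized cross-correlation:", abs(np.dot(a, b)) / len(a))
```
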

Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.

4151 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study aims at investigating the suitability of the adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least squares method and the back-propagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the hydrological year 2002 with a sampling interval of 1 hour. The number of antecedent water levels that should be included in the input variables is determined by two statistical methods, i.e. the autocorrelation function and the partial autocorrelation function between the variables. Forecasting was done for 1 hour up to 12 hours ahead in order to compare the models' generalization at higher horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, the adaptive network-based fuzzy inference system provides accurate and reliable water level prediction for 1 hour ahead, where a MAPE of 1.15% and a correlation of 0.98 were achieved. Up to 12 hours ahead, the model still shows relatively good performance, where the prediction error was less than 9.65%. The information gathered from these preliminary results provides useful guidance or reference for the design of flood early warning systems in which the magnitude and the timing of a potential extreme flood are indicated.
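
The lag-selection step can be sketched with the two statistical functions mentioned above: the snippet below computes ACF/PACF values and keeps lags outside an approximate 95% confidence band. The file and column names are hypothetical, and the 24-lag window is an assumption.

```python
# Sketch: choosing how many antecedent water levels to feed the model by
# inspecting ACF/PACF values (hypothetical file and column names).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import acf, pacf

levels = pd.read_csv("water_levels_2002.csv")["level"].to_numpy()  # hourly series, assumed

acf_vals = acf(levels, nlags=24)
pacf_vals = pacf(levels, nlags=24)

# keep lags whose partial autocorrelation is outside an approximate 95% band
conf = 1.96 / np.sqrt(len(levels))
selected_lags = [lag for lag in range(1, 25) if abs(pacf_vals[lag]) > conf]
print("candidate antecedent lags:", selected_lags)
```
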

Keywords: Neural Network, Fuzzy, River, Forecasting

4150 Combining Bagging and Boosting

Authors: S. B. Kotsiantis, P. E. Pintelas

Abstract:

Bagging and boosting are among the most popular resampling ensemble methods that generate and combine a diversity of classifiers using the same learning algorithm for the base classifiers. Boosting algorithms are considered stronger than bagging on noise-free data. However, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble using a voting methodology over bagging and boosting ensembles with 10 sub-classifiers each. We performed a comparison with simple bagging and boosting ensembles with 25 sub-classifiers, as well as with other well-known combining methods, on standard benchmark datasets, and the proposed technique was the most accurate.
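
A minimal scikit-learn sketch of the combination scheme: a bagging ensemble and a boosting ensemble, each with 10 sub-classifiers, combined by voting. The base learner, dataset, and soft-voting choice are assumptions for illustration; the paper's base algorithm and benchmark datasets are not reproduced.

```python
# Sketch: voting over a bagging ensemble and a boosting ensemble, each with
# 10 sub-classifiers, using decision trees as base learners (illustrative
# choices; not the paper's exact setup).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0)
boosting = AdaBoostClassifier(n_estimators=10, random_state=0)
combined = VotingClassifier([("bag", bagging), ("boost", boosting)], voting="soft")

for name, clf in [("bagging", bagging), ("boosting", boosting), ("combined", combined)]:
    print(name, cross_val_score(clf, X, y, cv=10).mean())
```
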

Keywords: data mining, machine learning, pattern recognition.

4149 New Newton's Method with Third-order Convergence for Solving Nonlinear Equations

Authors: Osama Yusuf Ababneh

Abstract:

In recent years, variants of Newton's method with cubic convergence have become popular iterative methods for finding approximate solutions to the roots of non-linear equations. These methods enjoy cubic convergence at simple roots and do not require the evaluation of second-order derivatives. In this paper, we present a new Newton-type method based on the contraharmonic mean that is cubically convergent. Numerical examples show that the new method can compete with the classical Newton's method.
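
A short sketch of one plausible form of such an iteration, replacing the derivative in Newton's correction with the contraharmonic mean of derivative values at the current point and at a Newton predictor. This particular formula is an assumption consistent with the abstract, not necessarily the paper's exact scheme.

```python
# Sketch of a contraharmonic-mean Newton-type iteration (one plausible form;
# the paper's exact scheme may differ):
#   y_n     = x_n - f(x_n)/f'(x_n)                          (Newton predictor)
#   x_{n+1} = x_n - f(x_n)*(f'(x_n) + f'(y_n)) / (f'(x_n)**2 + f'(y_n)**2)
def contraharmonic_newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                      # classical Newton step
        dfy = df(y)
        x_new = x - fx * (dfx + dfy) / (dfx**2 + dfy**2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of f(x) = x**3 - 2x - 5 (root near 2.0946)
print(contraharmonic_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))
```
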

Keywords: Third-order convergence, non-linear equations, root finding, iterative method.

4148 Selecting an Advanced Creep Model or a Sophisticated Time-Integration? A New Approach by Means of Sensitivity Analysis

Authors: Holger Keitel

Abstract:

The prediction of long-term deformations of concrete and reinforced concrete structures has been a field of extensive research, and several different creep models have been developed so far. Most of the models were developed for constant concrete stresses; thus, in the case of varying stresses, a specific superposition principle or time-integration method is necessary. Nowadays, when modeling concrete creep, the engineering focus is rather on the application of sophisticated time-integration methods than on choosing the more appropriate creep model. For this reason, this paper presents a method to quantify the uncertainties of creep prediction originating from the selection of creep models or from the time-integration methods. By adapting variance-based global sensitivity analysis, a methodology is developed to quantify the influence of creep model selection or the choice of time-integration method. Applying the developed method, general recommendations on how to model creep behavior under varying stresses are given.
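
A minimal sketch of the variance-based idea: first-order sensitivity indices estimated as the share of output variance explained by each input, computed here by binning Monte Carlo samples. The toy creep surrogate and its inputs are purely illustrative and are not one of the creep models or time-integration methods discussed in the paper.

```python
# Sketch: a simple variance-based (Sobol-type) first-order sensitivity
# estimate for a toy creep-prediction surrogate (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
t = 1000.0                                          # days, fixed observation time

# hypothetical inputs: notional creep coefficient and a stress-level factor
phi = rng.uniform(1.5, 3.0, n)
stress_ratio = rng.uniform(0.2, 0.5, n)
y = phi * stress_ratio * (1 - np.exp(-t / 500.0))   # toy creep-strain surrogate

def first_order_index(x, y, bins=50):
    """S_i ~ Var_x(E[y | x]) / Var(y), estimated by binning x into quantiles."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

print("S_phi          ~", first_order_index(phi, y))
print("S_stress_ratio ~", first_order_index(stress_ratio, y))
```
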

Keywords: Concrete creep models, time-integration methods, sensitivity analysis, prediction uncertainty.

4147 Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196

Authors: Yanju Zhang, Joost M. Woltering, Fons J. Verbeek

Abstract:

It has been established that microRNAs (miRNAs) play an important role in gene expression by post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between microRNAs and their target genes, in terms of numbers, types and biological relevance, remain largely unclear. Dissecting the miRNA-target relationships will render more insights for miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction in Zebrafish. This algorithm is high-throughput but produces many false positives (noise). Since validation of a large set of targets through laboratory experiments is very time consuming, computational methods for miRNA target validation should be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the pool of miRanda-predicted targets. This is achieved by using techniques ranging from statistical tests to clustering and association rules. Our research focuses on Zebrafish. It was found that validated targets do not necessarily associate with the highest sequence matching. Besides, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have similar characteristics to validated target genes and therefore represent high-confidence target candidates.

Keywords: MicroRNA targets validation, microRNA-target relationships, dre-miR-10, dre-miR-196.

4146 Selecting Negative Examples for Protein-Protein Interaction

Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae

Abstract:

Proteomics is one of the largest areas of research for bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity and, at the same time, many challenges for the identification of these interactions. Many methods have already been proposed in this regard. In the case of in-silico identification, most of the methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases. But the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on some assumptions, which are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, the PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the original negative pair selection method.
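
The selection rule can be sketched as follows: build the interaction graph, compute all-pairs shortest path lengths, and keep non-interacting pairs whose shortest path exceeds a cut-off. The toy edge list and the threshold of 3 are assumptions for illustration, not the paper's data or tuning.

```python
# Sketch: selecting candidate negative protein pairs as those whose shortest
# path in the interaction graph is long (illustrative threshold and toy data).
import itertools
import networkx as nx

interactions = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
                ("P4", "P5"), ("P5", "P6"), ("P2", "P6")]   # toy positive pairs
G = nx.Graph(interactions)

MIN_PATH_LEN = 3   # assumed cut-off: longer shortest path -> less likely to interact
lengths = dict(nx.all_pairs_shortest_path_length(G))

negatives = [(u, v) for u, v in itertools.combinations(G.nodes, 2)
             if not G.has_edge(u, v) and lengths[u].get(v, float("inf")) >= MIN_PATH_LEN]
print(negatives)
```
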

Keywords: Interaction graph, Negative training data, Protein-Protein interaction, Support vector machine.

4145 Using Structural Equation Modeling in Causal Relationship Design for Balanced-Scorecards' Strategic Map

Authors: A. Saghaei, R. Ghasemi

Abstract:

Through the 1980s, management accounting researchers described the increasing irrelevance of traditional control and performance measurement systems. The Balanced Scorecard (BSC) is a critical business tool for many organizations. It is a performance measurement system which translates mission and strategy into objectives. The strategy map approach is a development of the BSC in which certain necessary causal relations must be established. To recognize these relations, experts usually rely on experience. It is also possible to utilize regression for the same purpose. Structural Equation Modeling (SEM), which is one of the most powerful methods of multivariate data analysis, obtains more appropriate results than traditional methods such as regression. In the present paper, we propose SEM for the first time to identify the relations between objectives in the strategy map, together with a test to measure the importance of the relations. In SEM, factor analysis and tests of hypotheses are done in the same analysis. SEM is known to be better than other techniques at supporting analysis and reporting. Our approach provides a framework which permits the experts to design the strategy map by applying a comprehensive and scientific method together with their experience. Therefore, this scheme is a more reliable method in comparison with the previously established methods.

Keywords: BSC, SEM, Strategy map.

4144 Select-Low and Select-High Methods for the Wheeled Robot Dynamic States Control

Authors: Bogusław Schreyer

Abstract:

The paper examines two methods of wheeled robot braking torque control. These two methods are applied when the adhesion coefficient under the left-side wheels is different from the adhesion coefficient under the right-side wheels. In the select-low (SL) method, the braking torque on both wheels is controlled by the signals originating from the wheels on the side of the lower adhesion. In the select-high (SH) method, the torque is controlled by the signals originating from the wheels on the side of the higher adhesion. The SL method ensures stable and safe robot behavior during the braking process. However, the efficiency of this method is relatively low. The SH method is more efficient in terms of time and braking distance but in some situations may cause wheel blocking. It is important to monitor the velocity of all wheels and then take a decision about the braking torque distribution accordingly. In the case of the SH method, the braking torque slope may require a significant decrease in order to avoid wheel blocking.
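
A minimal sketch of the two selection rules as a reference-signal choice; the signal model, units, and values are illustrative assumptions, not the robot's actual control variables.

```python
# Minimal sketch of the two selection rules (illustrative signal model only).
def braking_reference(left_wheel_signal, right_wheel_signal, method="SL"):
    """Pick the wheel signal that drives the braking-torque controller.

    SL (select-low):  follow the side with lower adhesion -> stable but less efficient.
    SH (select-high): follow the side with higher adhesion -> shorter braking distance,
                      but may require a gentler torque slope to avoid wheel blocking.
    """
    if method == "SL":
        return min(left_wheel_signal, right_wheel_signal)
    return max(left_wheel_signal, right_wheel_signal)

print(braking_reference(0.35, 0.7, "SL"), braking_reference(0.35, 0.7, "SH"))
```
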

Keywords: Select-high method, select-low method, torque distribution, wheeled robot.

4143 Various Speech Processing Techniques For Speech Compression And Recognition

Authors: Jalal Karam

Abstract:

Extensive research in the field of speech processing for compression and recognition over the last five decades has resulted in strong competition among the various methods and paradigms introduced. In this paper we review the different representations of speech in the time-frequency and time-scale domains for the purposes of compression and recognition, and examine these representations across a variety of related work. In particular, we emphasize methods related to Fourier analysis paradigms and wavelet-based ones, along with the advantages and disadvantages of both approaches.

Keywords: Time-Scale, Wavelets, Time-Frequency, Compression, Recognition.

4142 A Study of Various Numerical Turbulence Modeling Methods in Boundary Layer Excitation of a Square Ribbed Channel

Authors: Hojjat Saberinejad, Adel Hashiehbaf, Ehsan Afrasiabian

Abstract:

Among the various cooling processes in industrial applications, such as electronic devices, heat exchangers and gas turbines, the cooling of gas turbine blades is the most challenging one. One of the most common practices is the use of ribbed walls, which excite the boundary layer and thereby provide the ultimate cooling. Vortex formation between the rib and the channel wall results in a complicated flow regime. On the other hand, selecting the method that most efficiently captures results comparable to experimental work is an important issue. In this paper, four common turbulence modeling methods, namely the standard k-ε model, the rationalized k-ε model with enhanced wall boundary layer treatment, the k-ω model and the Reynolds stress model (RSM), are applied to a square ribbed channel to investigate the separation and thermal behavior of the flow in the channel. Finally, all results from the different methods used in this paper are compared with experimental data available in the literature to assess the accuracy of each numerical method.

Keywords: boundary layer, turbulence, numerical method, rib cooling

4141 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods

Authors: Cristina Vatamanu, Doina Cosovan, Dragoş Gavriluţ, Henri Luchian

Abstract:

In the past few years, the amount of malicious software has increased exponentially and, therefore, machine learning algorithms have become instrumental in identifying clean and malware files through (semi-)automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study between different machine learning techniques such as linear classifiers, ensembles, decision trees and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
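
The two headline metrics can be read directly off a confusion matrix; the sketch below does so for a simple linear classifier on a synthetic, clean-heavy dataset, which only stands in for the paper's (non-public) corpus of roughly 2 million clean and 200,000 infected files.

```python
# Sketch: computing detection rate (true positive rate) and false positive
# rate for a linear classifier on a synthetic, imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=40, weights=[0.9, 0.1],
                           random_state=0)          # 1 = malware (minority class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = Perceptron(random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"detection rate: {tp / (tp + fn):.3f}   false positive rate: {fp / (fp + tn):.3f}")
```
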

Keywords: Detection Rate, False Positives, Perceptron, One Side Class, Ensembles, Decision Tree, Hybrid methods, Feature Selection.

4140 Dynamic Safety-Stock Calculation

Authors: Julian Becker, Wiebke Hartmann, Sebastian Bertsch, Johannes Nywlt, Matthias Schmidt

Abstract:

In order to ensure a high service level, industrial enterprises have to maintain safety stock, which at the same time directly influences economic efficiency. This paper analyses established mathematical methods to calculate safety stock. The performance of these methods, measured in terms of stock and service level, is appraised, and their limits are depicted. Afterwards, a new dynamic approach is presented to obtain an extended method for calculating safety stock that also takes knowledge of future volatility into account.
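
As a point of reference, the sketch below computes a widely used static safety-stock formula combining demand and lead-time variability. It represents the kind of established method the paper appraises, not the authors' dynamic approach, and all figures are illustrative.

```python
# Sketch: a common static safety-stock formula combining demand and lead-time
# variability (all figures are illustrative assumptions).
from math import sqrt
from statistics import NormalDist

service_level = 0.95
z = NormalDist().inv_cdf(service_level)       # safety factor for the target service level

mean_demand, sd_demand = 120.0, 30.0          # units per day (assumed)
mean_lead_time, sd_lead_time = 5.0, 1.0       # days (assumed)

safety_stock = z * sqrt(mean_lead_time * sd_demand**2 + mean_demand**2 * sd_lead_time**2)
print(f"safety stock: {safety_stock:.0f} units")
```
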

Keywords: Inventory dimensioning, material requirement planning, safety-stock calculation.

4139 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is quite large. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services. Consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21 - Impairment of non-cash-generating assets - and IPSAS 26 - Impairment of cash-generating assets - have been designed considering this specificity. When measuring the impairment of assets, it is important to select the relevant methods. For the measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. The value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows to be received in the future. The article provides a classification of assets in the public sector as non-cash-generating assets and cash-generating assets and also deals with the factors which should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and the methods of their selection. The traditional and the expected cash flow approaches for calculation of the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring it, the presentation of realistic information regarding the value of the assets should be ensured in the financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used. The research was carried out with a systemic approach. The research process uses international accounting standards, theoretical research and publications of Georgian and foreign scientists.
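
A small numeric sketch of the value-in-use calculation mentioned above, contrasting a single most-likely cash flow projection (traditional approach) with a probability-weighted set of scenarios (expected cash flow approach). The amounts, probabilities, and discount rate are illustrative and are not taken from IPSAS 21/26.

```python
# Sketch: value in use as the present value of expected future cash flows,
# comparing the traditional single-scenario approach with an expected cash
# flow approach that probability-weights scenarios (illustrative figures).
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.06
single_scenario = [10_000, 10_000, 9_000, 8_000]          # most likely cash flows
scenarios = {                                             # probability -> cash flows
    0.6: [10_000, 10_000, 9_000, 8_000],
    0.4: [7_000, 7_000, 6_000, 5_000],
}

viu_traditional = present_value(single_scenario, rate)
viu_expected = sum(p * present_value(cfs, rate) for p, cfs in scenarios.items())
print(f"traditional: {viu_traditional:,.0f}   expected cash flow: {viu_expected:,.0f}")
```
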

Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.

4138 Methods of Geodesic Distance in Two-Dimensional Face Recognition

Authors: Rachid Ahdid, Said Safi, Bouzid Manaut

Abstract:

In this paper, we present a comparative study of three methods for 2D face recognition: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing the geodesic distance between points of the facial surface and between facial curves. In this study we represent the gray-level image as a 2D surface in a 3D space, with the third coordinate proportional to the intensity values of the pixels. In the classification step, we use: Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The images used in our experiments are from two well-known face image databases, ORL and YaleB. The ORL database was used to evaluate the performance of the methods under conditions where the pose and sample size are varied, and the YaleB database was used to examine the performance of the systems when the facial expressions and lighting are varied.

Keywords: 2D face recognition, Geodesic distance, Iso-Geodesic Curves, Geodesic-Intensity Histogram, facial surface, Neural Networks, K-Nearest Neighbor, Support Vector Machines.

4137 Design for Manufacturability and Concurrent Engineering for Product Development

Authors: Alemu Moges Belay

Abstract:

In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity and larger organizations. Companies were therefore forced to look for new product development methods. This paper focuses on two of these new product development methods, DFM and CE. The aim of this paper is to examine and analyze different product development methods, specifically Design for Manufacturability and Concurrent Engineering. Companies can benefit by minimizing the product life cycle and cost and by meeting the delivery schedule. This paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The methodology followed in this research is the case study approach. Two companies were selected and their product development processes analysed. Historical data were gathered and interviews were conducted at these companies; in addition, a survey of the literature and of previous research on similar topics was carried out. This paper also presents an implementation cost-benefit analysis and an estimate of the implementation time. From this research, it was found that the two companies did not achieve the delivery time to the customer. Some of the most frequently produced products were analyzed, and 50% to 80% of these products were not delivered on time to the customers. The companies follow the traditional way of product development, that is, a sequential design and production method, which strongly affects time to market. The case studies showed that by implementing these new methods and by forming multidisciplinary teams in design and quality inspection, the companies can reduce the workflow from 40 steps to 30.

Keywords: Design for manufacturability, Concurrent Engineering, Time-to-Market, Product development

4136 An Approach to Capture, Evaluate and Handle Complexity of Engineering Change Occurrences in New Product Development

Authors: Mohammad Rostami Mehr, Seyed Arya Mir Rashed, Arndt Lueder, Magdalena Mißler-Behr

Abstract:

This paper presents the conception that complex problems do not necessarily need similarly complex solutions in order to cope with the complexity; a simple solution based on established methods can provide a sufficient way of dealing with it. To verify this conception, the paper focuses on the field of change management as a part of the new product development process in the automotive sector. In the field of complexity management, dealing with increasing complexity is essential, while only inflexible, rigid processes that are not designed to handle complexity are available. The basic methodology of this paper can be divided into four main sections: 1) analyzing the complexity of change management, 2) a literature review in order to identify potential solutions and methods, 3) capturing and implementing the expertise of experts from the change management field of an automobile manufacturing company, and 4) a systematic comparison of the methods identified in the literature, connecting them with the defined requirements of the complexity of change management in order to develop a solution. As a practical outcome, this paper provides a method to capture the complexity of engineering changes (EC) and include it within the EC evaluation process, following case-related process guidance to cope with the complexity. Furthermore, this approach supports the conception that dealing with complexity is possible while utilizing rather simple and established methods by combining them into a powerful tool.

Keywords: complexity management, new product development, engineering change management, flexibility

4135 Revealing Nonlinear Couplings between Oscillators from Time Series

Authors: B.P. Bezruchko, D.A. Smirnov

Abstract:

Quantitative characterization of nonlinear directional couplings between stochastic oscillators from data is considered. We suggest coupling characteristics readily interpreted from a physical viewpoint and their estimators. An expression for a statistical significance level is derived analytically that allows reliable coupling detection from a relatively short time series. Performance of the technique is demonstrated in numerical experiments.

Keywords: Nonlinear time series analysis, directional couplings, coupled oscillators.

4134 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver and Pancreatic Grafts

Authors: C. S. Mammas, A. Lazaris, A. S. Mamma-Graham, G. Kostopanagiotou, C. Lemonidou, J. Mantas, E. Patsouris

Abstract:

Introduction: The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects of human factors engineering on remote microscopic diagnosis in surgery, and especially in organ transplantation for the remote evaluation of the grafts. It has been estimated that, even in well-organized transplant systems, an average of 8% to 14% of the grafts (G) that arrive at the recipient hospitals may be considered diseased, injured, damaged or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts in Organ Transplantation (OT) and may lead to a change in their management. Such a method will reduce the possibility that a diseased G will arrive at the recipient hospital for implantation. Aim: To study the ergonomics of Digital Microscopy (DM), based on virtual slides, on Telemedicine Systems (TS) for the Tele-Pathological Evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for the microscopic TPE of Renal Graft (RG), Liver Graft (LG) and Pancreatic Graft (PG) tissues is analyzed. In fact, this corresponds to the ergonomics of digital microscopy for TPE in OT by applying a Virtual Slide (VS) system for graft tissue image capture, for the remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included: a. development of an Experimental Telemedicine System (Exp.-TS) similar to an OTE-TS, and b. simulation of the integration of the TS with the VS-based microscopic TPE of RG, LG and PG applying DM. Simulation of the DM-based TPE was performed by two specialists on a total of 238 human Renal Graft (RG), 172 Liver Graft (LG) and 108 Pancreatic Graft (PG) tissue digital microscopic images, for inflammatory and neoplastic lesions, on the electronic spaces of the four TS used. Results: Statistical analysis of the specialists' answers about the ability to diagnose accurately the diseased RG, LG and PG tissues on the electronic space among the four TS (A, B, C, D) showed that DM on TS for TPE in OT is performed best on the electronic space (ES) of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile phone ES seem significantly risky for the application of DM in OT (p<.001). Conclusion: To make the largest reduction in errors and adverse events referring to the quality of the grafts, it will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for the remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital and for post-grafting and pre-transplant planning.

Keywords: Organ Transplantation, Tele-Pathology, Digital Microscopy, Virtual Slides.

4133 A Robust Method for Finding Nearest-Neighbor using Hexagon Cells

Authors: Ahmad Attiq Al-Ogaibi, Ahmad Sharieh, Moh’d Belal Al-Zoubi, R. Bremananth

Abstract:

In pattern clustering, nearest-neighbor point computation is a challenging issue for many applications in areas such as remote sensing, computer vision, pattern recognition and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among the volume of pixels (voxels) in order to localize the active regions of interest (AROI). Furthermore, it is needed to compute spatial metric relationships of diverse imaging areas in pattern recognition applications. In this paper, we propose a new methodology for finding the nearest-neighbor point, based on constructing a virtual grid of hexagonal cells and then locating every point beneath them. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest-neighbor query points Φ are fetched by searching the hexagons holistically. The search is repeated until an AROI Φ is found. If a point Υ is located, then searching continues in the nearest hexagons in a circular way. The first hexagon is considered level 0 (L0) and the surrounding hexagons are level 1 (L1). If Υ is located in L1, then the search proceeds to the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in terms of minimizing the time required for searching the neighbors; in turn, the efficiency of classification is improved accordingly.
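
A compact sketch of the ring-wise hexagon search described above, using axial hexagon coordinates: points are bucketed into hexagonal cells, the search expands level by level (L0, L1, L2, ...) until candidates are found, and one further level is checked before the closest candidate is returned, mirroring the rule in the abstract. The cell size, coordinate conventions, and toy data are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a ring-wise hexagon nearest-neighbor search (illustrative only).
import math
from collections import defaultdict

CELL = 1.0   # hexagon size (assumed)

def hex_of(p):
    """Map a 2D point to the axial coordinates of its (pointy-top) hexagon cell."""
    x, y = p
    q = (math.sqrt(3) / 3 * x - y / 3) / CELL
    r = (2 * y / 3) / CELL
    cx, cz = q, r                       # cube rounding
    cy = -cx - cz
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def ring(center, k):
    """All axial cells at hexagon distance k from center (k = 0 gives the center)."""
    if k == 0:
        return [center]
    dirs = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
    q, r = center[0] + dirs[4][0] * k, center[1] + dirs[4][1] * k
    cells = []
    for d in dirs:
        for _ in range(k):
            cells.append((q, r))
            q, r = q + d[0], r + d[1]
    return cells

def nearest_neighbor(query, points):
    buckets = defaultdict(list)
    for p in points:
        buckets[hex_of(p)].append(p)
    center, found, k = hex_of(query), [], 0
    while not found:                    # expand level by level: L0, L1, L2, ...
        found = [p for c in ring(center, k) for p in buckets.get(c, [])]
        k += 1
    found += [p for c in ring(center, k) for p in buckets.get(c, [])]  # check one more level
    return min(found, key=lambda p: math.dist(p, query))

pts = [(0.2, 0.4), (2.5, 1.1), (-1.0, 3.0), (4.2, -0.5)]
print(nearest_neighbor((2.0, 1.0), pts))
```
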

Keywords: Hexagon cells, k-nearest neighbors, Nearest Neighbor, Pattern recognition, Query pattern, Virtually grid
