Search results for: fitting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 340

310 Prediction of Permeability of Frozen Unsaturated Soil Using Van Genuchten Model and Fredlund-Xing Model in Soil Vision

Authors: Bhavita S. Dave, Jaimin Vaidya, Chandresh H. Solanki, Atul K.

Abstract:

To measure the permeability of a soil specimen, one of the basic assumptions of Darcy's law is that the soil sample is saturated. Unlike saturated soils, the permeability of unsaturated soils cannot be found using conventional methods, as they do not follow Darcy's law. Many empirical models, such as the Van Genuchten model and the Fredlund-Xing model, have been suggested to predict permeability values for unsaturated soil. Such models use data from the soil-freezing characteristic curve to find fitting parameters for frozen unsaturated soils. In this study, soil specimens were subjected to 0, 1, 3, and 5 freezing-thawing (F-T) cycles at different degrees of saturation to obtain a wide range of suction, and soil-freezing characteristic curves were formulated for all F-T cycles. Changes in fitting parameters and relative permeability with subsequent F-T cycles are presented in this paper for both models.
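
The abstract does not reproduce the model equations; for reference, a minimal sketch of the Van Genuchten-Mualem relative permeability function commonly used with such fitting parameters is given below. The parameter values are illustrative only and are not taken from the study.

```python
import numpy as np

def van_genuchten_kr(suction, alpha, n):
    """Van Genuchten-Mualem relative permeability.

    suction : matric (or cryogenic) suction, in units of 1/alpha
    alpha, n : fitting parameters from the soil-freezing (or soil-water)
               characteristic curve; m = 1 - 1/n.
    """
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * suction) ** n) ** (-m)      # effective saturation
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# illustrative parameters only; real values come from fitting the SFCC
suction = np.logspace(-1, 3, 50)     # kPa
kr = van_genuchten_kr(suction, alpha=0.05, n=1.8)
```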

Keywords: frozen unsaturated soil, Fredlund Xing model, soil-freezing characteristic curve, Van Genuchten model

Procedia PDF Downloads 163
309 Investigation of Garment Fit Using Virtual Try-On Technology

Authors: Kristina Ancutiene, Agne Lage, Ada Gulbiniene

Abstract:

Virtual garment fitting has recently received considerable attention from researchers. Virtual try-on technologies provide the opportunity to check garment fit using various fabrics and sizes. Differences in fabric mechanical properties produce differences in garment fit. This research aimed to investigate virtual garment fit with respect to fabric mechanical properties by determining the distance ease between the body and the garment. In this research, a virtual female mannequin was covered with a straight-fit virtual dress stitched in Modaris 3D (CAD Lectra). Garment fit was investigated using seven cotton/cotton-blended plain weave fabrics. The ease allowance value at the bust, waist, and hip girths in the 2D basic patterns was changed uniformly from 0 cm to 8 cm, and the values of distance ease in the 3D virtual garments at the three main girths were investigated, along with the distribution of distance ease across the virtual garment. It was found that as the ease allowance of the 2D patterns increases, the distance ease of the 3D garment changes proportionally, but to a different extent for different fabrics. Correlation analysis between 3D garment distance ease and mechanical properties showed that tensile strain in the weft direction had the strongest relationship.

Keywords: 3D CAD, distance ease, fabric, garment fit, virtual try-on

Procedia PDF Downloads 135
308 Techniques to Characterize Subpopulations among Hearing Impaired Patients and Its Impact for Hearing Aid Fitting

Authors: Vijaya K. Narne, Gerard Loquet, Tobias Piechowiak, Dorte Hammershoi, Jesper H. Schmidt

Abstract:

BEAR, which stands for Better Hearing Rehabilitation, is a large-scale project in Denmark designed and executed by three national universities, three hospitals, and the hearing aid industry with the aim of improving hearing aid fitting. A total of 1963 hearing-impaired people were included and segmented into subgroups based on hearing loss, demographics, audiological data, and questionnaires (i.e., the Speech, Spatial and Qualities of Hearing scale [SSQ-12] and the International Outcome Inventory for Hearing Aids [IOI-HA]). With the aim of providing a better hearing-aid fit to individual patients, we applied modern machine learning techniques alongside traditional audiogram rule-based systems. Results show that age, speech discrimination scores, and audiogram configurations emerged as important parameters in characterizing sub-populations in the dataset. The attempt to characterize sub-populations reveals a clearer picture of the individual hearing difficulties encountered and the benefits derived from more individualized hearing aids.
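
The abstract does not name the specific machine learning algorithm used for segmentation; as an illustration only, the sketch below clusters standardized audiological features with k-means in scikit-learn. The feature matrix is a synthetic placeholder, not the BEAR data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per patient (age, speech discrimination
# score, audiogram thresholds at selected frequencies, questionnaire scores).
rng = np.random.default_rng(0)
X = rng.normal(size=(1963, 8))          # placeholder for the real data

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Each label marks one candidate sub-population of hearing-impaired patients.
print(np.bincount(labels))
```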

Keywords: hearing loss, audiological data, machine learning, hearing aids

Procedia PDF Downloads 126
307 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa

Authors: Davies Obaromi, Qin Yongsong, James Ndege

Abstract:

To interpolate scattered or regularly distributed data, both approximate and exact methods are available; some of these methods are suited to data on a regular grid and others to an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize, and compare smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns by 3-D curve fitting techniques, especially biharmonic splines, which can easily suppress noise by seeking a least-squares fit rather than exact interpolation. The datasets are represented as XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become a conventional method owing to its high precision, simplicity, and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012-2015. From the results, the general outlook of all the fittings shows a systematic pattern in the distribution of TB cases in the province, which is consistent with spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the advantage that it can be evaluated at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
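
MATLAB's biharmonic ('v4') griddata method is not reproduced here; as a rough open-source analogue, the sketch below fits a smoothing thin-plate-spline radial basis function to scattered XYZ triplets with SciPy, seeking a least-squares fit rather than exact interpolation. The coordinates and counts are synthetic placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# xy: spatial coordinates of reporting units; z: TB counts (synthetic here)
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(60, 2))
z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.normal(size=60)

# smoothing > 0 gives a least-squares smoothing fit that suppresses noise
spline = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-2)

# evaluate on a regular grid for surface and contour plots
gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
grid = spline(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```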

Keywords: linear, biharmonic splines, tuberculosis, South Africa

Procedia PDF Downloads 214
306 Cultural Embeddedness of E-Participation Methods in Hungary

Authors: Hajnalka Szarvas

Abstract:

The research examines the effectiveness of e-participation tools and methods from the point of view of their cultural fit with Hungarian community traditions. Participation can have very different meanings depending on local cultural and historical traditions and the experiences of particular societies. When e-democracy or e-participation tools are discussed, most research deals with their technological aspects and novelties, but little is said about the cultural and social context of the different platforms. From the perspective of their success, however, it is essential to look at the human factor too: the actual users, and how a given DMS or online platform fits the way of thinking and functioning of the society in question. The paper therefore explores to what extent different online platforms, such as Loomio, Democracy OS, Your Priorities, EVoks, Populus, miutcank.hu, Liquid Democracy, and Brain Bar Budapest Lab, are compatible with Hungarian mental structures and community traditions, and with the contents of the collective mind about how a community should function. As a result, the influence of the cultural embeddedness of e-participation development tools on the success of these methods will be clearly seen. Furthermore, the most crucial factors determining the efficiency of e-participation development tools in Hungary will be demonstrated.

Keywords: cultural embeddedness, e-participation, local community traditions, mental structures

Procedia PDF Downloads 269
305 Modelling of Heat Generation in a 18650 Lithium-Ion Battery Cell under Varying Discharge Rates

Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery

Abstract:

Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15-35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases, and it can be quantified using several standard methods. The most common method of calculating a battery's heat generation is the addition of the joule heating effects and the entropic changes across the battery. Such values can be derived from the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T), and the rate of change of the open-circuit voltage with temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted using several non-linear mathematical functions, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3-7), exponential (n = 2), and power functions. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial results display low deviation between the experimental and CFD temperature plots. As such, the heat generation functions formulated could be utilized more easily for larger battery applications than other available methods.
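
The heat generation relation described above is commonly written in the Bernardi form Q = I(OCV - V) + I*T*(dOCV/dT), with sign conventions varying between authors. The sketch below shows that relation together with a polynomial fit of heat generation against depth of discharge, as a hedged illustration; all numbers are made up and are not measurements from the study.

```python
import numpy as np

def heat_generation(I, V, OCV, T, dOCV_dT):
    """Bernardi-type heat generation: joule (irreversible) + entropic (reversible) terms."""
    return I * (OCV - V) + I * T * dOCV_dT

Q_point = heat_generation(I=2.0, V=3.5, OCV=3.7, T=298.0, dOCV_dT=-1.0e-4)

# Polynomial heat-source function Q(DoD) for use in a CFD solver
# (illustrative data; the study fits orders n = 3..7 to measured discharge curves).
dod = np.linspace(0.0, 1.0, 50)                     # depth of discharge
Q_meas = 0.8 + 1.5 * dod**2 + 0.05 * np.random.default_rng(2).normal(size=50)
coeffs = np.polyfit(dod, Q_meas, deg=5)
Q_fit = np.polyval(coeffs, dod)
```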

Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop

Procedia PDF Downloads 65
304 Research on the Spatio-Temporal Evolution Pattern of Traffic Dominance in Shaanxi Province

Authors: Leng Jian-Wei, Wang Lai-Jun, Li Ye

Abstract:

In order to measure and analyze the transportation situation within the counties of Shaanxi province over a certain period of time and to promote the province's future transportation planning and development, this paper proposes a reasonable layout plan and compares model rationality. The study uses the entropy weight method to measure the transportation advantages of 107 counties in Shaanxi province in 2013 and 2021 along three dimensions: road network density, trunk line influence, and location advantage. It applies spatial autocorrelation analysis to study the spatial layout and development trend of county-level transportation, and conducts ordinary least squares (OLS) regression on the transportation impact factors and other influencing factors. The paper also compares the goodness of fit of the geographically weighted regression (GWR) model and the OLS model. The results show that, spatially, the transportation advantages of Shaanxi province generally decrease from the Weihe Plain toward the surrounding areas and mainly exhibit a high-high clustering phenomenon. Temporally, transportation advantages show an overall upward trend, and spatial imbalance gradually decreases. People's travel demands have changed to some extent, with an overall increase in the demand for rapid transportation. The goodness of fit of the GWR regression of transportation advantages is 0.74, higher than the 0.64 of the OLS regression model. Based on the evolution of transportation advantages, it is predicted that this trend will continue for some time in the future. Increasing the deployment of rapid transportation can effectively enhance the transportation advantages of Shaanxi province. When analyzing spatial heterogeneity, geographic factors should be considered to establish a more reliable model.
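
The entropy weight calculation referred to above follows a standard formula; a minimal sketch is given below, with a synthetic indicator matrix standing in for the real county data (indicators are assumed to be positive, benefit-type values, normally min-max normalized beforehand).

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (n_counties, n_indicators) matrix of positive values."""
    n = X.shape[0]
    P = X / X.sum(axis=0)                        # share of each county per indicator
    e = -(P * np.log(P)).sum(axis=0) / np.log(n) # entropy of each indicator
    d = 1.0 - e                                  # degree of diversification
    return d / d.sum()                           # indicator weights

# three indicators: road network density, trunk line influence, location advantage
X = np.random.default_rng(3).uniform(0.1, 1.0, size=(107, 3))
w = entropy_weights(X)
dominance = X @ w                                # composite traffic dominance score
```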

Keywords: traffic dominance, GWR model, spatial autocorrelation analysis, temporal and spatial evolution

Procedia PDF Downloads 62
303 Research on Sensing Performance of Polyimide-Based Composite Materials

Authors: Rui Zhao, Dongxu Zhang, Min Wan

Abstract:

Composite materials are widely used in the fields of aviation, aerospace, and transportation due to their light weight and high strength. Functionalization of composite structures is a hot topic in the future development of composite materials. This article proposes a polyimide-resin-based composite material with a sensing function. The material can serve as a sensor to monitor the deformation of metal sheets at room temperature. During the deformation of a metal sheet, the slope of the linear fit of the corresponding material resistance change rate differs between the elastic stage and the plastic strengthening stage; therefore, the slope of the resistance change rate can be used to characterize the deformation stage of the metal sheet. In addition, the resistance change rate of the material exhibits a good negative linear relationship with temperature in a high-temperature environment, and the determination coefficient of the linear fit of the resistance change rate in the range 520-650 °C was 0.99. These results indicate that the material has the potential to be applied in monitoring the mechanical properties of structural materials and the temperature of high-temperature environments.
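
The reported determination coefficient of 0.99 corresponds to an ordinary linear least-squares fit of resistance change rate against temperature; a minimal sketch with synthetic data (not the measured values) is shown below.

```python
import numpy as np

# Synthetic resistance-change-rate data over 520-650 degC (illustrative only)
T = np.linspace(520.0, 650.0, 27)
rate = -0.004 * T + 2.1 + 0.01 * np.random.default_rng(4).normal(size=T.size)

slope, intercept = np.polyfit(T, rate, deg=1)
pred = slope * T + intercept
r2 = 1.0 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
print(f"slope = {slope:.4e}, R^2 = {r2:.3f}")
```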

Keywords: polyimide, composite, sensing, resistance change rate

Procedia PDF Downloads 41
302 Comparison of Power Generation Status of Photovoltaic Systems under Different Weather Conditions

Authors: Zhaojun Wang, Zongdi Sun, Qinqin Cui, Xingwan Ren

Abstract:

Based on multivariate statistical analysis theory, this paper uses the principal component analysis method, the Mahalanobis distance analysis method, and curve fitting to establish a photovoltaic health model for evaluating the health of photovoltaic panels. First, according to weather conditions, the photovoltaic panel variable data are classified into five categories: sunny, cloudy, rainy, foggy, and overcast, and the health of photovoltaic panels under these five types of weather is studied. Secondly, scatterplots of the relationship between the amount of electricity produced under each kind of weather and the other variables were plotted. It was found that the amount of electricity generated by photovoltaic panels has a significant nonlinear relationship with time; curve fitting was used to fit the relationship between the amount of electricity generated and time, and a nonlinear equation was obtained. Then, principal component analysis was used to analyze the independent variables under the five weather conditions. According to the Kaiser-Meyer-Olkin test, three types of weather (overcast, foggy, and sunny) meet the conditions for factor analysis, while cloudy and rainy weather do not. Through principal component analysis, the main components of overcast weather are temperature, AQI, and PM2.5; the main component of foggy weather is temperature; and the main components of sunny weather are temperature, AQI, and PM2.5. Cloudy and rainy weather require analysis of all of their variables, namely temperature, AQI, PM2.5, solar radiation intensity, and time. Finally, taking the variable values in sunny weather as observed values and the main components of cloudy, foggy, overcast, and rainy weather as sample data, the Mahalanobis distances between the observed values and these sample values are obtained. A comparative analysis of the degree of deviation of the Mahalanobis distance was carried out to determine the health of the photovoltaic panels under different weather conditions. The weather conditions, ordered from small to large Mahalanobis distance fluctuations, were foggy, cloudy, overcast, and rainy.
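
The Mahalanobis distance step can be sketched as follows: the sunny-weather observations serve as the reference set, and the distance of each other-weather sample from the reference mean is computed with the reference covariance. The data below are random placeholders, not the photovoltaic measurements.

```python
import numpy as np

def mahalanobis_distances(reference, samples):
    """Distance of each sample row from the reference mean, using the reference covariance."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(reference, rowvar=False))
    diff = samples - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(5)
sunny = rng.normal(size=(200, 3))                 # observed values (temperature, AQI, PM2.5)
overcast = rng.normal(0.5, 1.2, size=(80, 3))     # sample values for another weather type
d = mahalanobis_distances(sunny, overcast)
```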

Keywords: fitting, principal component analysis, Mahalanobis distance, SPSS, MATLAB

Procedia PDF Downloads 108
301 Using T-Splines to Model Point Clouds from Terrestrial Laser Scanner

Authors: G. Kermarrec, J. Hartmann

Abstract:

Spline surfaces are a major representation of freeform surfaces in the computer-aided graphics industry and were recently introduced in the field of geodesy for processing point clouds from terrestrial laser scanners (TLS). Surface fitting consists of approximating a trustworthy mathematical surface to a large 3D point cloud. Standard B-spline surfaces lack local refinement owing to their tensor-product construction. The consequence is oscillating geometry, particularly in the transition from low- to high-curvature parts, for scattered point clouds with missing data. A more economical alternative in terms of the number of parameters for handling point clouds with a huge number of observations is the recently introduced T-splines: as long as the partition of unity is guaranteed, their computational complexity is low and they are flexible. T-splines are implemented in a commercial package called Rhino, a 3D modeler widely used in computer-aided design to create and animate NURBS objects. We have applied T-spline surface fitting to terrestrial laser scanner point clouds from a bridge under load and from a sheet pile wall with noisy observations. We highlight their potential for modelling details with high trustworthiness, paving the way for further applications in deformation analysis.
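
T-splines themselves are only available in packages such as Rhino, but the tensor-product B-spline baseline that they are contrasted with can be fitted to scattered points with SciPy; the sketch below is a least-squares smoothing fit on synthetic data, not the TLS point clouds of the study.

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

rng = np.random.default_rng(6)
x, y = rng.uniform(0, 10, 500), rng.uniform(0, 10, 500)
z = np.sin(x) * np.cos(y) + 0.02 * rng.normal(size=500)   # stand-in for a TLS point cloud

# tensor-product cubic B-spline; s controls the smoothing/approximation trade-off
tck = bisplrep(x, y, z, kx=3, ky=3, s=0.5)
xg = np.linspace(0, 10, 100)
yg = np.linspace(0, 10, 100)
surface = bisplev(xg, yg, tck)          # surface evaluated on the xg-by-yg grid
```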

Keywords: deformation analysis, surface modelling, terrestrial laser scanner, T-splines

Procedia PDF Downloads 113
300 Parametric Modeling for Survival Data with Competing Risks Using the Generalized Gompertz Distribution

Authors: Noora Al-Shanfari, M. Mazharul Islam

Abstract:

The cumulative incidence function (CIF) is a fundamental approach for analyzing survival data in the presence of competing risks; it estimates the marginal probability of each competing event. Parametric modeling of the CIF has the advantage of fitting various shapes of CIF and estimating the impact of covariates with maximum efficiency. To quantify the influence of covariates on the total CIF using a parametric model, it is essential to parametrize the baseline of the CIF. As the CIF is an improper function by nature, it is necessary to utilize an improper distribution when applying parametric models. The Gompertz distribution, which is an improper distribution, is limited in its applicability as it only accounts for monotone hazard shapes. The generalized Gompertz distribution, however, can adapt to a wider range of hazard shapes, including unimodal, bathtub, and monotonically increasing or decreasing hazards. In this paper, the generalized Gompertz distribution is used to parametrize the baseline of the CIF, and the parameters of the proposed model are estimated using the maximum likelihood approach. The proposed model is compared with the existing Gompertz model using the Akaike information criterion. Appropriate statistical test procedures and model-fitting criteria are used to test the adequacy of the model. Both models are applied to the 'colon' dataset, which is available in the 'biostat3' package in R.
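
As a hedged illustration of the fitting step, the sketch below writes one common parameterization of the generalized Gompertz distribution, with CDF (1 - exp(-(lambda/c)(e^(ct) - 1)))^theta, which becomes improper (plateaus below 1) when c < 0, and maximizes the likelihood with SciPy. Censoring and competing events are ignored here for brevity, whereas the actual analysis of the 'colon' data requires the full CIF likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def gg_logpdf(t, lam, c, theta):
    """Log-density of a generalized Gompertz distribution (one common parameterization)."""
    g = 1.0 - np.exp(-(lam / c) * np.expm1(c * t))          # baseline Gompertz CDF
    return (np.log(theta) + np.log(lam) + c * t
            - (lam / c) * np.expm1(c * t) + (theta - 1.0) * np.log(g))

def neg_loglik(params, t):
    lam, c, theta = params
    if lam <= 0 or theta <= 0 or c == 0:
        return np.inf
    return -np.sum(gg_logpdf(t, lam, c, theta))

# Illustrative uncensored event times only (not the 'colon' dataset).
t = np.random.default_rng(7).gamma(shape=2.0, scale=2.0, size=300)
fit = minimize(neg_loglik, x0=[0.2, 0.1, 1.0], args=(t,), method="Nelder-Mead")
lam_hat, c_hat, theta_hat = fit.x
```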

Keywords: competing risks, cumulative incidence function, improper distribution, parametric modeling, survival analysis

Procedia PDF Downloads 58
299 A Posteriori Trading-Inspired Model-Free Time Series Segmentation

Authors: Plessen Mogens Graf

Abstract:

Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free in that no model prescriptions for segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.

Keywords: time series segmentation, model-free, trading-inspired, multivariate data

Procedia PDF Downloads 107
298 Screening Deformed Red Blood Cells Irradiated by Ionizing Radiations Using Windowed Fourier Transform

Authors: Dahi Ghareab Abdelsalam Ibrahim, R. H. Bakr

Abstract:

Ionizing radiation, such as gamma radiation and X-rays, has many applications in medical diagnosis and cancer treatment. In this paper, we use the windowed Fourier transform to extract the complex image of deformed red blood cells. The real values of the complex image are used to extract the best fit of the deformed cell boundary. Male albino rats were irradiated with γ-rays from ⁶⁰Co. The rats were anesthetized with ether, and blood samples were collected from the eye vein with heparinized capillary tubes to study the radiation-damaging effect in vivo by the proposed windowed Fourier transform method. The peripheral blood films were prepared according to the Brown method and photographed using an Automatic Image Contour Analysis system (SAMICA) from ELBEK-Bildanalyse GmbH, Siegen, Germany. The SAMICA system is provided with an electronic camera connected to a computer through a built-in interface card, and the image can be magnified up to 1200 times and displayed by the computer. The images of the peripheral blood films were then analyzed by the windowed Fourier transform method to extract the precise deformation from the best fit. Based on accurate deformation evaluation of the red blood cells, diseases can be diagnosed in their primary stages.

Keywords: windowed Fourier transform, red blood cells, phase wrapping, image processing

Procedia PDF Downloads 54
297 Fluorescent Analysis of Gold Nanoclusters-Wool Keratin Addition to Copper Ions

Authors: Yao Xing, Hong Ling Liu, Wei Dong Yu

Abstract:

With the increase of the global population, a safe water supply is of great importance; however, a water-monitoring method that is rapid, low-cost, green, and robust is still lacking. In this paper, gold nanocluster-wool keratin is added to copper ion solutions and measured by a fluorescence method in order to probe copper ions in aqueous solution. The fluorescence results show that gold nanocluster-wool keratin exhibits high stability over a range of pH values, while it is sensitive to temperature and time. Based on Gaussian fitting, the pH response slope of gold nanocluster-wool keratin under acidic conditions is higher than under alkaline conditions. Moreover, upon addition of copper ions, gold nanocluster-wool keratin shows a fluorescence turn-off response, shifting from red to blue under UV light and leading to a dramatic decrease of the fluorescence intensity of the solution at 690 nm. The detection limit for copper ions with the gold nanocluster-wool keratin system is around 1 µmol/L, which meets detection standards. The fitted slope of the Stern-Volmer plot at low copper ion concentrations is larger than at high concentrations, which indicates that gold nanoclusters aggregate progressively, from small amounts to large numbers, with increasing copper ion concentration. This system is expected to provide a novel method and materials for copper ion testing with low cost, high efficiency, and easy operability.
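
The Stern-Volmer analysis mentioned at the end is the linear quenching relation F0/F = 1 + K_SV[Q]; a minimal fitting sketch with illustrative numbers (not the measured fluorescence data) follows.

```python
import numpy as np

# Illustrative quenching data: Cu2+ concentration (umol/L) vs fluorescence at 690 nm
cu = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
F0 = 1000.0
F = np.array([1000.0, 940.0, 890.0, 760.0, 610.0, 430.0])

# Stern-Volmer: F0/F = 1 + Ksv * [Q]; the slope of the plot gives Ksv
ksv, intercept = np.polyfit(cu, F0 / F, deg=1)

# Fitting the low- and high-concentration ranges separately, as in the abstract,
# would give different slopes if nanocluster aggregation changes with [Cu2+].
```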

Keywords: gold nanoclusters, copper ions, wool keratin, fluorescence

Procedia PDF Downloads 224
296 Review of Theories and Applications of Genetic Programing in Sediment Yield Modeling

Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo

Abstract:

Sediment yield can be considered to be the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better management of flood capacity in reservoirs and consequently help to control over-bank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and of alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper makes an extensive review of the literature relevant to the theories and applications of evolutionary algorithms, and most especially genetic programming. Successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Some fundamental issues, such as benchmarks, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP that need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers, and the general GP community enough research direction and valuable guidance, and to keep all stakeholders abreast of the issues which need attention during the next decade for the advancement of GP.

Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield

Procedia PDF Downloads 404
295 Influence of Hearing Aids on Non-medically Treatable Deafness

Authors: Donatien Niragira

Abstract:

The progress of technology creates new expectations for patients, and the world of deafness is no exception. In recent years, there have been considerable advances in the field of technologies aimed at assisting failing hearing. According to the usual medical vocabulary, hearing aids are actually orthotics: they do not replace an organ but compensate for a functional impairment. Hearing amplification is useful for a large number of people with hearing loss. Hearing aids restore speech audibility, but their benefits vary depending on the quality of residual hearing. The hearing aid is not a "cure" for deafness; it cannot correct all affected hearing abilities and should be considered as an aid to communication. The urge to judge from the audiogram alone should be resisted here, as audiometry only indicates the ability to detect non-verbal sounds. To prevent hearing aids from ending up in the drawer, it is important to ensure that the patient's disability situations justify the use of this type of orthosis. Pre-fitting counseling is crucial: the person with hearing loss must be informed of the advantages and disadvantages of amplification in his or her case. Their expectations must be realistic, and they need to be aware that the adaptation process requires a good deal of patience and perseverance. They should also be informed about the various models and types of hearing aids, including all the aesthetic, functional, and financial considerations. If the person's motivation "survives" pre-fitting counseling, we are in the presence of a good candidate for amplification. The results found in this study show that hearing aids significantly improve the quality of audibility in the patient, and this technology must therefore be made accessible everywhere in the world.

Keywords: auditory prostheses, hearing aids, non-medically treatable deafness

Procedia PDF Downloads 24
294 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data on electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour), while a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). We then perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values.
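
The deterministic component described (trend plus yearly, weekly, and daily periodicities) can be sketched as an ordinary least-squares fit on a design matrix of a trend term and Fourier harmonics; the residuals are then what an ARMA-GARCH model would be fitted to. The load series below is synthetic, not the Italian market data.

```python
import numpy as np

def fourier_design(hours, periods, n_harmonics=2):
    """Design matrix: constant, linear trend, and sine/cosine pairs for each period (hours)."""
    cols = [np.ones(hours.size), hours.astype(float)]
    for p in periods:
        for k in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * hours / p))
            cols.append(np.cos(2 * np.pi * k * hours / p))
    return np.column_stack(cols)

# Synthetic hourly load with daily, weekly, and yearly cycles plus noise
hours = np.arange(6 * 365 * 24)
rng = np.random.default_rng(8)
load = (30 + 0.0005 * hours
        + 5 * np.sin(2 * np.pi * hours / 24)
        + 3 * np.sin(2 * np.pi * hours / (24 * 7))
        + 8 * np.cos(2 * np.pi * hours / (24 * 365.25))
        + rng.normal(scale=1.5, size=hours.size))

X = fourier_design(hours, periods=[24, 24 * 7, 24 * 365.25])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
residuals = load - X @ beta   # stochastic component for econometric (ARMA-GARCH) modelling
```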

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression

Procedia PDF Downloads 375
293 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model is often not sufficiently close to the target, especially when dealing with the abnormal shapes seen in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, the ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, most of which show abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes in cardiac disease, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 313
292 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning

Authors: Guang Zou, Kian Banisoleiman, Arturo González

Abstract:

Crack initiation and propagation threaten the structural integrity of welded joints, and inspections are normally assigned based on crack propagation models. However, the approach based on crack propagation models may not be applicable to some high-quality welded joints, because the initial flaws in them may be so small that it may take a long time for the flaws to grow to a detectable size. This raises a concern regarding the inspection planning of high-quality welded joints, as there is no generally accepted approach for modeling the whole fatigue process, including the crack initiation period. In order to address the issue, this paper reviews treatments of the crack initiation period and of the initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches: 1) neglecting the crack initiation period and fitting a probabilistic distribution for the initial crack size based on statistical data; 2) extrapolating the crack propagation stage to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) assuming a fixed detectable initial crack size and fitting a probabilistic distribution for the crack initiation time based on specimen tests; and 4) modeling the crack initiation and propagation stages separately using small-crack growth theories and the Paris law or similar models. The conclusion is that, in view of the trade-off between accuracy and computational effort, calibration of a small fictitious initial crack size to S-N curves is the most efficient approach.
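
For context, the crack propagation models referred to are typically of the Paris law form da/dN = C (ΔK)^m with ΔK = Y Δσ sqrt(πa); the sketch below numerically integrates that law from a fictitious initial crack size to a detectable size. All parameter values are illustrative and are not calibrated to any S-N curve.

```python
import numpy as np

def cycles_to_grow(a0, af, C, m, dsigma, Y=1.0, steps=10000):
    """Integrate Paris law da/dN = C*(dK)**m with dK = Y*dsigma*sqrt(pi*a).

    a0, af : initial and final crack sizes (m); dsigma : stress range (MPa);
    C, m   : Paris constants for dK in MPa*sqrt(m) and da/dN in m/cycle.
    """
    a = np.linspace(a0, af, steps)
    dK = Y * dsigma * np.sqrt(np.pi * a)
    dN_da = 1.0 / (C * dK ** m)                 # cycles per unit crack growth
    return float(np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a)))   # trapezoid rule

# Illustrative: fictitious initial crack of 0.1 mm growing to a detectable 1 mm
N = cycles_to_grow(a0=0.1e-3, af=1.0e-3, C=1.0e-11, m=3.0, dsigma=80.0)
```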

Keywords: crack initiation, fatigue reliability, inspection planning, welded joints

Procedia PDF Downloads 329
291 Influence of Hearing Aids on Non-Medically Treatable Deafness

Authors: Niragira Donatien

Abstract:

The progress of technology creates new expectations for patients, and the world of deafness is no exception. In recent years, there have been considerable advances in the field of technologies aimed at assisting failing hearing. According to the usual medical vocabulary, hearing aids are actually orthotics: they do not replace an organ but compensate for a functional impairment. Hearing amplification is useful for a large number of people with hearing loss. Hearing aids restore speech audibility, but their benefits vary depending on the quality of residual hearing. The hearing aid is not a "cure" for deafness; it cannot correct all affected hearing abilities and should be considered as an aid to communication. The question, then, is who the best candidates for hearing aids are. The urge to judge from the audiogram alone should be resisted here, as audiometry only indicates the ability to detect non-verbal sounds. To prevent hearing aids from ending up in the drawer, it is important to ensure that the patient's disability situations justify the use of this type of orthosis. Pre-fitting counselling is crucial: the person with hearing loss must be informed of the advantages and disadvantages of amplification in his or her case. Their expectations must be realistic, and they need to be aware that the adaptation process requires a good deal of patience and perseverance. They should be informed about the various models and types of hearing aids, including all the aesthetic, functional, and financial considerations. If the person's motivation "survives" pre-fitting counselling, we are in the presence of a good candidate for amplification. Hearing aids also raise other questions, such as whether one or both ears should be fitted. The results found in this study show that hearing aids significantly improve the quality of audibility in the patient; this technology must therefore be made accessible everywhere in the world, and we want to progress with the technology.

Keywords: audiology, influence, hearing, non-medically treatable deafness

Procedia PDF Downloads 22
290 Development and Evaluation of a Psychological Adjustment and Adaptation Status Scale for Breast Cancer Survivors

Authors: Jing Chen, Jun-E Liu, Peng Yue

Abstract:

Objective: The objective of this study was to develop a psychological adjustment and adaptation status scale for breast cancer survivors and to examine the reliability and validity of the scale. Method: 37 breast cancer survivors were recruited for the qualitative research; a theoretical framework with five dimensions and an item pool of 150 items were derived from the interview data. In order to evaluate and select items and establish preliminary validity and reliability for the original scale, suggestions from study group members, experts, and breast cancer survivors were taken into account, and statistical methods were applied step by step in a sample of 457 breast cancer survivors. Results: An original 24-item scale was developed. The five dimensions ("domestic affections", "interpersonal relationship", "attitude of life", "health awareness", and "self-control/self-efficacy") explained 58.053% of the total variance. The content validity was assessed by experts; the CVI was 0.92. The construct validity was examined in a sample of 264 breast cancer survivors, and the fit indexes of the confirmatory factor analysis (CFA) showed a good fit of the five-dimension model. The criterion-related validity of the total scale against the PTGI was satisfactory (r=0.564, p<0.001). The internal consistency reliability and test-retest reliability were also tested: Cronbach's alpha (0.911) showed good internal consistency, and the intraclass correlation coefficient (ICC=0.925, p<0.001) showed satisfactory test-retest reliability. Conclusions: The scale is brief and easy to understand and is suitable for breast cancer patients whose physical strength and energy are limited.

Keywords: breast cancer survivors, rehabilitation, psychological adaption and adjustment, development of scale

Procedia PDF Downloads 490
289 A Variable Stiffness Approach to Vibration Control

Authors: S. A. Alotaibi, M. A. Al-Ajmi

Abstract:

This work introduces a new concept for controlling mechanical vibrations via a variable-stiffness coil spring. The concept relies on fitting a screw through the spring to change the number of active spring coils. A prototype has been built and tested with promising results, pointing toward an innovation in the field of vibration control.

Keywords: variable stiffness, coil spring, vibration control, computer science

Procedia PDF Downloads 377
288 The Effect of Foundation on the Earth Fill Dam Settlement

Authors: Masoud Ghaemi, Mohammadjafar Hedayati, Faezeh Yousefzadeh, Hoseinali Heydarzadeh

Abstract:

Careful monitoring of earth dams to measure deformation caused by settlement and movement has always been a concern for engineers in the field. In order to measure settlement and deformation of earth dams, precision instruments consisting of a settlement set combined with an inclinometer, commonly referred to as the IS instrument, are usually used. In some dams, because the alluvium is thick and its removal is not feasible (technically, economically, or in terms of performance), it is not possible to place the end of the IS instrument in the rock foundation. Inevitably, the pipes have to be installed in the weak and deformable alluvial foundation, which leads to errors in the calculation of the actual (absolute) settlement in different parts of the dam body. The purpose of this paper is to present new and refined criteria for predicting settlement and deformation in earth dams. The study is based on conditions in three dams with highly deformable alluvium (Agh Chai, Narmashir, and Gilan-e Gharb) in order to provide settlement criteria that account for the alluvial foundation. To achieve this goal, the settlement of the dams was simulated using the finite difference method in the FLAC3D software, and the modeling results were compared with the IS instrument readings. Finally, the model was calibrated and the results validated using regression analysis techniques, checking the modeling parameters against real conditions; then, using MATLAB and the Curve Fitting Toolbox, new settlement criteria based on the elasticity modulus, cohesion, friction angle, and density of the earth dam and the alluvial foundation were obtained. The results of these studies show that, by using the new criteria, the settlement and deformation of dams with an alluvial foundation can be corrected after the instrument readings, and the error in the IS instrument readings can be greatly reduced.

Keywords: earth-fill dam, foundation, settlement, finite difference, MATLAB, curve fitting

Procedia PDF Downloads 165
287 Determination of the Axial-Vector from an Extended Linear Sigma Model

Authors: Tarek Sayed Taha Ali

Abstract:

The dependence of the axial-vector coupling constant gA on the quark masses has been investigated in the framework of the extended linear sigma model. The field equations have been solved in the mean-field approximation. Our study shows a better fit to the experimental data compared with existing models.

Keywords: extended linear sigma model, nucleon properties, axial coupling constant, physics

Procedia PDF Downloads 418
286 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models

Authors: Ozan Kahraman, Hao Feng

Abstract:

Several models, such as the Weibull, modified Gompertz, biphasic linear, and log-logistic models, have been proposed to describe non-linear inactivation kinetics and used to fit non-linear inactivation data of several microorganisms for inactivation by heat, high-pressure processing, or pulsed electric fields. First-order kinetic parameters (D-values and z-values) have often been used to describe microbial inactivation by non-thermal processing methods such as ultrasound, and most ultrasonic inactivation studies have employed them to describe the reduction in microbial survival counts. This study was conducted to analyze E. coli O157:H7 inactivation data using five microbial survival models (first-order, Weibull, modified Gompertz, biphasic linear, and log-logistic). The residual sum of squares and the total sum of squares criteria were used to evaluate the models. The statistical indices of the kinetic models were used to fit inactivation data for E. coli O157:H7 treated by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and biphasic models fitted the MTS data best, as shown by high R² values. The other non-linear kinetic models (modified Gompertz and log-logistic) and the first-order model did not provide a better fit to the MTS data than the Weibull and biphasic models. The data found in this study did not follow first-order kinetics, possibly because cells sensitive to ultrasound treatment were inactivated first, resulting in a fast initial inactivation period, while those resistant to ultrasound were killed more slowly. The Weibull and biphasic models were found to be more flexible for determining the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
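
As an illustration of the non-linear fitting involved, the sketch below fits the Weibull (Mafart-type) survival model log10(N/N0) = -(t/δ)^p with SciPy; the treatment times and log reductions are made-up numbers, not the MTS data.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Mafart-type Weibull model: log10(N/N0) = -(t/delta)**p."""
    return -((t / delta) ** p)

# Illustrative treatment times (min) and log reductions
t = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
log_red = np.array([-0.4, -0.9, -1.6, -2.1, -2.9, -3.4, -4.0])

(delta, p), _ = curve_fit(weibull_log_survival, t, log_red, p0=[1.0, 1.0])
residual_ss = np.sum((log_red - weibull_log_survival(t, delta, p)) ** 2)
```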

Keywords: Weibull, Biphasic, MTS, kinetic models, E.coli O157:H7

Procedia PDF Downloads 340
285 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics, and scientific interest. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various perspectives, different methods have been used for its estimation, and several empirical equations have been proposed by experts. Although the results of these methods differ considerably from each other and from experimental observations, because sediment measurements have their own limitations, these equations can still be used to estimate sediment load. In the present study, two black-box models, an SRC (sediment rating curve) and an ANN (artificial neural network), are used to simulate the suspended sediment load. The study is carried out for the Seonath sub-basin. Seonath is the biggest tributary of the Mahanadi river and carries a vast amount of sediment. The data are collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department) and include discharge, sediment concentration, and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. The sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the ANN, the data are first normalised and then fed in various combinations to yield the sediment load. RMSE (root mean square error) and R² (coefficient of determination) between the observed and estimated loads are used as evaluation criteria; for an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black-box models, they do not carry an exact representation of the factors that cause sedimentation, so the model that gives the lowest RMSE and the highest R² is considered the best model in this study. The lowest RMSE values (based on normalised data) for the sediment rating curve, feed-forward back propagation, cascade-forward back propagation, and neural network fitting are 0.043425, 0.00679781, 0.0050089, and 0.0043727, respectively, and the corresponding R² values are 0.8258, 0.9941, 0.9968, and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not tolerable in sediment load estimation, and hence this model cannot be crowned the best model based on this study. Cascade-forward back propagation produces results much closer to the neural network fitting model and is hence the best model based on the present study.
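
A sediment rating curve is typically the power law C = a·Q^b fitted in log space, with the load obtained from the estimated concentration and the discharge; the sketch below shows that baseline on synthetic data (not the Jondhra station records).

```python
import numpy as np

rng = np.random.default_rng(9)
Q = rng.uniform(50, 2000, size=200)                                # discharge, m^3/s
C = 0.02 * Q ** 1.3 * np.exp(rng.normal(scale=0.3, size=200))      # sediment conc., mg/L

# Rating curve: log10(C) = log10(a) + b * log10(Q)
b, log_a = np.polyfit(np.log10(Q), np.log10(C), deg=1)
a = 10.0 ** log_a

C_hat = a * Q ** b
load = C_hat * Q * 0.0864     # tonnes/day when C is in mg/L and Q in m^3/s
```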

Keywords: artificial neural network, root mean squared error, sediment, sediment rating curve

Procedia PDF Downloads 298
284 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images

Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu

Abstract:

The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective way to track such changes. However, producing optimal water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models, the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model, for extracting lake contours. Experiments indicate that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF and can obtain desirable contour lines when there are "holes" in the water regions, such as islands in a lake. Therefore, the RSF model is applied to extracting lake contours from Landsat satellite images. Four temporal Landsat images from 2000, 2005, 2010, and 2014 are used in our study; all of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. Firstly, the near-infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information updating, and linear stretching is applied in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as initial values, the lake contours are updated for the current temporal image by means of the RSF model, and the changed and unchanged lakes are detected. The results show that great changes have taken place in two lakes, Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.

Keywords: level set model, multi-temporal image, lake contour extraction, contour update

Procedia PDF Downloads 336
283 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates

Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde

Abstract:

In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach is based on blending the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with rain rates from the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products requires two main steps. Firstly, the rainy pixels have to be identified. This is achieved with a classification using the MSG IR 10.8 channel and the water vapor channel WV 6.2, applying a threshold of less than 11 K on the temperature difference, which approximately identifies clouds with a high likelihood of precipitation. The second step consists of fitting the relation between the IR cloud-top temperature and the TMI rain rates. The correlation between these two variables has a negative tendency, meaning that rainfall intensity increases with decreasing temperature. The fitted equation is then applied to the full day of MSG images at 15-minute intervals, which are summed. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected, using a threshold criterion of large rainfall depth (> 50 mm/day) occurring at least at one rainfall station. The inverse distance interpolation method was applied to generate rainfall maps for the drier summer season (May to October) and the wet winter season (November to April). The evaluation results of the rainfall estimated by combining MSG and TMI were very encouraging: all the events were detected as rainy, and the correlation coefficients were much better than those of previously evaluated products over the study area, such as the MSGMPE and PERSIANN products. The combined product showed a better performance during the wet season. We also notice an overestimation of the maximum estimated rain for many events.

Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia

Procedia PDF Downloads 146
282 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The receiver operating characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results, and the ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and as it is a step function there can be different false positive rates for a given true positive rate and vice versa. Besides, since the true ROC curve is assumed to be smooth, the jagged empirical estimate underestimates the true ROC curve. Several smoothing methods have therefore been explored to smooth a ROC curve. These include using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study to compare the performance of the different methods for different scenarios with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than that of the binormal model when the underlying samples were in fact generated from a normal distribution.
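
The kernel approach mentioned replaces the empirical step-function distribution estimates of the diseased and non-diseased test results with smooth kernel CDF estimates; a minimal sketch with a plain Gaussian kernel and a rule-of-thumb bandwidth is shown below (the paper's boundary-corrected kernel is not reproduced, and the data are simulated).

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, sample, bandwidth):
    """Smooth kernel estimate of P(X <= x) with a Gaussian kernel."""
    return norm.cdf((x[:, None] - sample[None, :]) / bandwidth).mean(axis=1)

rng = np.random.default_rng(10)
healthy = rng.normal(0.0, 1.0, 100)       # test results, non-diseased
diseased = rng.normal(1.2, 1.0, 100)      # test results, diseased

thresholds = np.linspace(-4.0, 6.0, 400)
h = 0.9 * min(healthy.std(), diseased.std()) * 100 ** (-0.2)   # rule-of-thumb bandwidth
fpr = 1.0 - kernel_cdf(thresholds, healthy, h)     # 1 - specificity
tpr = 1.0 - kernel_cdf(thresholds, diseased, h)    # sensitivity
# (fpr, tpr) traced over the thresholds gives a smooth ROC curve
```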

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 123
281 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals

Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen

Abstract:

Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the development of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, which was grown by the Bridgman method, shares the properties of the group of ternary thallium chalcogenide semiconductors with layered structure; this compound can be obtained from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, some unintended defects, such as point defects, dislocations, and stacking faults, can occur during crystal growth. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices such as LEDs and field-effect transistors may act as non-radiative or scattering centers in electron transport; quick recombination of holes with electrons, without any energy transfer between charge carriers, can also occur due to the presence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is larger than the band gap of the studied sample, so that charge carriers in the valence band are excited to the delocalized bands. The charge carriers excited into the conduction band are then trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at recombination centers; in this way, luminescence is emitted from the sample. The emitted luminescence is converted to pulses by a computer-controlled experimental setup, and the TL spectrum is obtained. The defect characterization of Tl2Ga2Se3S single crystals has been performed by TL measurements at low temperatures between 10 and 300 K with heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to luminescence from trap centers revealed one glow peak with a maximum temperature of 36 K. Curve fitting and various heating rate methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by applying the curve fitting method; this practical method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various heating rate analysis gave a result (13 meV) compatible with the curve fitting when the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.
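
For reference, the various heating rate analysis mentioned above relies, for first-order peaks, on the condition βE/(kT_m²) = s·exp(-E/(kT_m)), so that ln(T_m²/β) plotted against 1/T_m is a straight line with slope E/k; the sketch below performs that fit on illustrative peak temperatures, not the measured Tl2Ga2Se3S data.

```python
import numpy as np

k_B = 8.617e-5                    # Boltzmann constant, eV/K

# Illustrative glow-peak maximum temperatures Tm (K) at heating rates beta (K/s)
beta = np.array([0.6, 0.7, 0.8, 0.9, 1.0])
Tm = np.array([35.5, 35.8, 36.0, 36.3, 36.6])

# ln(Tm^2 / beta) = E / (k_B * Tm) + ln(E / (s * k_B))
slope, intercept = np.polyfit(1.0 / Tm, np.log(Tm ** 2 / beta), deg=1)
E_trap = slope * k_B              # trap activation energy in eV
```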

Keywords: chalcogenides, defects, thermoluminescence, trap centers

Procedia PDF Downloads 259