Search results for: principal curve
1493 Active Part of the Burnishing Tool Effect on the Physico-Geometric Aspect of the Superficial Layer of 100C6 and 16NC6 Steels
Authors: Tarek Litim, Ouahiba Taamallah
Abstract:
Burnishing is a mechanical surface treatment that combines several beneficial effects on the two steel grades studied. Applied with either a ball or a tip, burnishing yields a better roughness than turning. In addition, it consolidates the surface layers through work hardening. The optimal effects are closely related to the treatment parameters and to the active part of the device. With a 78% improvement in roughness, burnishing can be regarded as a finishing operation in the machining range; with a 44% gain in consolidation rate, it is an effective process for material consolidation. These effects depend on several factors, among which V, f, P, r, and i have the most significant effects on both roughness and hardness. Ball or tip burnishing consolidates the surface layers of both 100C6 and 16NC6 steels by work hardening. For each steel grade and its mechanical treatment, the rational tensile curve was drawn, and Ludwik's law was used to fit the work hardening curve; a material hardening law was thus established for both grades. For 100C6 steel, the results show a work hardening coefficient of 0.513 and a consolidation rate of 44% relative to the surface layers processed by turning. For 16NC6 steel, the work hardening coefficient is about 0.29. Hardness tests characterize the burnished depth well; the layer affected by work hardening can reach up to 0.4 mm. Simulation of the tests is of great importance for providing details at the local scale of the material. Conventional tensile curves provide a satisfactory indication of the toughness of the 100C6 and 16NC6 materials, and simulation of the tensile curves revealed good agreement between experimental and simulated results for both steels.
Keywords: 100C6 steel, 16NC6 steel, burnishing, work hardening, roughness, hardness
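As an illustration of the hardening-law fit described above, the sketch below fits Ludwik's law, sigma = sigma0 + K*eps^n, to a rational tensile curve in Python; the strain and stress values are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def ludwik(eps, sigma0, K, n):
    # Ludwik hardening law: true stress as a function of true plastic strain.
    return sigma0 + K * eps**n

# Hypothetical rational tensile data for a burnished specimen (not the paper's data).
eps = np.array([0.002, 0.01, 0.02, 0.05, 0.08, 0.12])          # true plastic strain
sigma = np.array([420.0, 560.0, 640.0, 780.0, 860.0, 940.0])   # true stress, MPa

popt, _ = curve_fit(ludwik, eps, sigma, p0=(400.0, 1500.0, 0.5))
sigma0, K, n = popt
print(f"sigma0 = {sigma0:.0f} MPa, K = {K:.0f} MPa, n = {n:.3f}")
# The fitted exponent n plays the role of the work hardening coefficient
# (the paper reports ~0.513 for 100C6).
```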
Procedia PDF Downloads 168
1492 Transcriptome Analysis Reveals Role of Long Non-Coding RNA NEAT1 in Dengue Patients
Authors: Abhaydeep Pandey, Shweta Shukla, Saptamita Goswami, Bhaswati Bandyopadhyay, Vishnampettai Ramachandran, Sudhanshu Vrati, Arup Banerjee
Abstract:
Background: Long non-coding RNAs (lncRNAs) are important regulators of gene expression and play an important role in viral replication and disease progression. The role of lncRNAs in the pathogenesis of dengue virus infection is currently unknown. Methods: To gain additional insights, we used unbiased RNA sequencing followed by an in silico analysis approach to identify the differentially expressed lncRNAs and genes associated with dengue disease progression. We then focused on the lncRNA NEAT1 (Nuclear Paraspeckle Assembly Transcript 1), as it was found to be differentially expressed in the PBMCs of dengue-infected patients. Results: NEAT1 expression was significantly down-regulated, relative to dengue infection (DI), as patients developed complications. Moreover, pairwise analysis of follow-up patients confirmed that suppression of NEAT1 expression was associated with a rapid fall in platelet count in dengue-infected patients. Severe dengue (DS) patients (n=18; platelet count < 20K), once recovered from infection, showed NEAT1 expression as high as that observed in healthy donors. By co-expression network analysis and subsequent validation, we found that expression of the coding gene IFI27 was significantly up-regulated in severe dengue cases and negatively correlated with NEAT1 expression. To discriminate DI from severe dengue, a receiver operating characteristic (ROC) curve was calculated. It revealed a sensitivity and specificity of 100% (95% CI: 85.69-97.22) and an area under the curve (AUC) of 0.97 for NEAT1. Conclusions: Altogether, our first observations demonstrate that monitoring NEAT1 and IFI27 expression in dengue patients could be useful in understanding dengue virus-induced disease progression, and these transcripts may be involved in pathophysiological processes.
Keywords: dengue, lncRNA, NEAT1, transcriptome
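As a sketch of the ROC analysis reported for NEAT1, the following snippet computes an ROC curve and AUC with scikit-learn; the expression values and severity labels are hypothetical, and the score is negated so that it increases with the positive (severe) class, since NEAT1 is down-regulated in severe dengue.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical relative NEAT1 expression; lower values in severe dengue (DS).
neat1 = np.array([2.1, 1.8, 2.4, 0.3, 0.5, 0.2, 1.9, 0.4, 2.2, 0.6])
severe = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])  # 1 = severe dengue

# The score must rise with the positive class, so negate the expression.
auc = roc_auc_score(severe, -neat1)
fpr, tpr, thresholds = roc_curve(severe, -neat1)
print(f"AUC = {auc:.2f}")  # perfectly separated toy data; the study reports 0.97
```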
Procedia PDF Downloads 310
1491 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification
Authors: Hung-Sheng Lin, Cheng-Hsuan Li
Abstract:
Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) effectively reduces the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations; hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. According to experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, and NWFE, and of their kernel versions KPCA, GDA, and KNWFE.
Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction
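KDNP itself is not available in public libraries, but the kernelization pattern the abstract describes, replacing a linear extractor by its kernel version, can be sketched with scikit-learn; the data below are random stand-ins for hyperspectral pixel spectra, and the RBF kernel and its gamma are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # stand-in: 200 pixels x 50 spectral bands

# Linear PCA finds the directions of largest variance in the input space...
Z_lin = PCA(n_components=10).fit_transform(X)

# ...while its kernel version works in an implicit feature space defined by
# the chosen kernel, the same mechanism used to extend DNP to KDNP.
Z_ker = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3).fit_transform(X)
print(Z_lin.shape, Z_ker.shape)  # (200, 10) (200, 10)
```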
Procedia PDF Downloads 344
1490 Mathematical Model to Quantify the Phenomenon of Democracy
Authors: Mechlouch Ridha Fethi
Abstract:
This paper presents a recent mathematical model in political science concerning democracy. The model is represented by a logarithmic equation linking the Relative Index of Democracy (RID) to the Participation Ratio (PR). Firstly, the meanings of the different parameters of the model are presented, and the variation curve of the RID as a function of PR, with its different critical areas, is discussed. Secondly, the model is applied to a virtual group, where we show that it can be applied by gender. Thirdly, it is observed that the model can be extended to different models of democracy and could be used to assess the state of democracy by international organizations such as the UNO.
Keywords: democracy, mathematics, modelization, quantification
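A minimal sketch of the model's stated logarithmic form follows; the coefficients a and b are hypothetical placeholders, since the abstract does not give the fitted values.

```python
import numpy as np

# Hypothetical coefficients of the logarithmic model; the paper's values are not given.
a, b = 0.35, 0.10

def rid(pr):
    # Relative Index of Democracy as a logarithmic function of the Participation Ratio.
    return a * np.log(pr) + b

for pr in (0.2, 0.5, 0.8, 1.0):
    print(f"PR = {pr:.1f} -> RID = {rid(pr):+.3f}")
```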
Procedia PDF Downloads 368
1489 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, ultimately, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are performed on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software that enables fast and simple evaluation of CT QA parameters using the phantom provided with it; the recommendations, on the other hand, contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with those obtained at commissioning, otherwise the machine requires service; the image noise test result must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and the loaded patient table must not deviate vertically by more than 2 mm during longitudinal travel. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing recommended by international organizations will be implemented so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and, finally, the delivery of radiation treatment to the patient.
Keywords: CT simulator, radiotherapy, quality control, QA programme
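The tolerance limits listed in the results lend themselves to a simple automated check. The sketch below encodes those limits as a pass/fail function; the data layout and the example values are assumptions, not the institution's actual QC records.

```python
def ct_simulator_qc(baseline, measured):
    """Check periodic CT-simulator QC results against the tolerance limits
    stated above; returns a pass/fail flag per test."""
    return {
        # CT number accuracy: within +/-5 HU of the commissioning value
        "ct_number": abs(measured["water_hu"] - baseline["water_hu"]) <= 5.0,
        # Field uniformity: within +/-10 HU in the selected ROIs
        "uniformity": max(abs(d) for d in measured["roi_hu_diffs"]) <= 10.0,
        # CT-to-ED conversion: each point within 5% of the commissioning curve
        "ct_to_ed": all(abs(m - b) / abs(b) <= 0.05
                        for m, b in zip(measured["ed_curve"], baseline["ed_curve"])),
        # Image noise: within 20% of the baseline value
        "noise": abs(measured["noise"] - baseline["noise"]) / baseline["noise"] <= 0.20,
        # Loaded-table vertical deviation under longitudinal travel: <= 2 mm
        "table": measured["table_sag_mm"] <= 2.0,
    }

baseline = {"water_hu": 0.4, "ed_curve": [0.26, 0.52, 1.00, 1.28], "noise": 5.1}
measured = {"water_hu": 2.9, "roi_hu_diffs": [3.2, -4.8, 6.1],
            "ed_curve": [0.27, 0.51, 1.02, 1.29], "noise": 5.6, "table_sag_mm": 1.2}
print(ct_simulator_qc(baseline, measured))
```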
Procedia PDF Downloads 532
1488 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on the reliability analysis of onboard sensors to evaluate their location accuracy performance over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is used to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and the wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
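A minimal sketch of the two statistical steps described above, assuming hypothetical failure times: the Laplace trend test indicates which phase of the bathtub curve the data occupy, and a Weibull fit then yields the shape parameter beta that separates infant mortality (beta < 1) from wear-out (beta > 1).

```python
import numpy as np
from scipy.stats import norm, weibull_min

# Hypothetical failure times (days until a location-accuracy exceedance).
t = np.array([12.0, 35.0, 80.0, 140.0, 260.0, 410.0, 600.0, 850.0, 1100.0, 1400.0])
T = 1500.0  # total observation window, days
n = len(t)

# Laplace trend test: U < 0 suggests reliability growth (infant mortality fading),
# U > 0 suggests degradation (wear-out), U near 0 a roughly constant failure rate.
U = (t.mean() - T / 2) / (T * np.sqrt(1 / (12 * n)))
p = 2 * (1 - norm.cdf(abs(U)))
print(f"Laplace U = {U:.2f} (two-sided p = {p:.3f})")

# Weibull fit on the (assumed) operational-phase data: the shape parameter beta
# separates infant mortality (beta < 1), random failures (beta ~ 1), wear-out (beta > 1).
beta, _, eta = weibull_min.fit(t, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} days")
```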
Procedia PDF Downloads 65
1487 Using Artificial Neural Networks for Optical Imaging of Fluorescent Biomarkers
Authors: K. A. Laptinskiy, S. A. Burikov, A. M. Vervald, S. A. Dolenko, T. A. Dolenko
Abstract:
The article presents the results of applying artificial neural networks to separate the fluorescent contribution of nanodiamonds, used as biomarkers, adsorbents, and drug carriers in biomedicine, from the fluorescent background of intrinsic biological fluorophores. The fundamental possibility of solving this problem is demonstrated: the neural network architecture made it possible to detect the fluorescence of nanodiamonds against the background autofluorescence of egg white with high accuracy, better than 3 µg/ml.
Keywords: artificial neural networks, fluorescence, data aggregation, biomarkers
Procedia PDF Downloads 710
1486 Multidimensional Poverty and Its Correlates among Rural Households in Limpopo Province, South Africa
Authors: Tamunotonye Mayowa Braide, Isaac Oluwatayo
Abstract:
This study investigates multidimensional poverty and its correlates among rural households in the Sekhukhune and Capricorn District municipalities (SDM and CDM) in Limpopo Province, South Africa. Primary data were collected from 407 rural households selected through purposive and simple random sampling techniques. The analytical techniques employed include descriptive statistics, principal component analysis (PCA), and the Alkire-Foster (A-F) methodology. The descriptive statistics showed that there are more females (66%) than males (34%) in rural areas of Limpopo Province, with about 45% of respondents having secondary school education as the highest educational level attained and only about 3% having no formal education. In the deprivation analysis, eight dimensions of deprivation, constructed from 21 variables, were identified using PCA. These dimensions include type and condition of dwelling, water and sanitation, educational attainment and income, type of fuel for cooking and heating, access to clothing and cell phone, assets and fuel for light, health condition, crowding, and child health. In identifying the poor with a poverty cut-off of 0.13 across all indicators, about 75.9% of the rural households are deprived in 25% of the total dimensions, with an adjusted headcount ratio (M0) of 0.19. Multidimensional poverty estimates identified a higher share of poor rural households, 71%, compared with the 29% that fall below the income poverty line. The study also decomposed poverty by sub-groups within the area, examining regions and household characteristics: SDM has more multidimensionally poor households than CDM, and the water and sanitation dimension is the largest contributor to the multidimensional poverty index (MPI) in rural areas of Limpopo Province. The findings can therefore assist in the better design of welfare policy, the targeting of poverty alleviation programs, and efficient resource allocation at the provincial and local municipality levels.
Keywords: Alkire-Foster methodology, Limpopo province, multidimensional poverty, principal component analysis, South Africa
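The Alkire-Foster identification and aggregation steps can be sketched as follows, assuming an equal-weighted deprivation matrix and a 25% poverty cutoff; the matrix is randomly generated for illustration, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 0/1 deprivation matrix: 407 households x 8 dimensions.
g0 = (rng.random((407, 8)) < 0.3).astype(int)
weights = np.full(8, 1 / 8)  # equal weights over the eight dimensions

def alkire_foster(g0, weights, k=0.25):
    # Adjusted headcount ratio M0 = H * A for poverty cutoff k.
    scores = g0 @ weights                 # weighted deprivation score per household
    poor = scores >= k                    # identification: deprived in >= k of dimensions
    H = poor.mean()                       # incidence (multidimensional headcount ratio)
    A = scores[poor].mean() if poor.any() else 0.0  # intensity among the poor
    return H * A, H, A

M0, H, A = alkire_foster(g0, weights)
print(f"H = {H:.1%}, A = {A:.1%}, M0 = {M0:.3f}")  # the study reports H ~ 75.9%, M0 = 0.19
```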
Procedia PDF Downloads 164
1485 Urinary Exosome miR-30c-5p as a Biomarker for Early-Stage Clear Cell Renal Cell Carcinoma
Authors: Shangqing Song, Bin Xu, Yajun Cheng, Zhong Wang
Abstract:
miRNAs derived from exosomes in body fluids such as urine are regarded as potential biomarkers for the diagnosis and prognosis of various human cancers, as mature miRNAs are stably preserved by exosomes. However, their potential value in the diagnosis and prognosis of clear cell renal cell carcinoma (ccRCC) remains unclear. In the present study, differentially expressed miRNAs from urinary exosomes were identified by next-generation sequencing (NGS) technology, and 16 differentially expressed miRNAs were identified between ccRCC patients and healthy donors. To find a specific diagnostic biomarker of ccRCC, we validated these urinary exosomal miRNAs by qRT-PCR in 70 early-stage renal cancer patients, 30 healthy people, and patients with other urinary system cancers, including 30 early-stage prostate cancer patients and 30 early-stage bladder cancer patients. The results showed that urinary exosomal miR-30c-5p could be stably amplified and that its expression did not differ significantly between the other urinary system cancers and healthy controls; however, its level in the urinary exosomes of ccRCC patients was lower than in healthy people, and the receiver operating characteristic (ROC) curve showed an area under the curve (AUC) of 0.8192 (95% confidence interval 0.7388-0.8996, p < 0.0001). In addition, up-regulating miR-30c-5p expression could inhibit renal cell carcinoma cell growth. Lastly, HSPA5 was found to be a direct target gene of miR-30c-5p, and HSPA5 depletion reversed the ccRCC growth-promoting effect caused by the miR-30c-5p inhibitor. In conclusion, this study demonstrated that urinary exosomal miR-30c-5p is readily accessible as a diagnostic biomarker of early-stage ccRCC and that miR-30c-5p might modulate the expression of HSPA5, which correlates with the progression of ccRCC.
Keywords: clear cell renal cell carcinoma, exosome, HSPA5, miR-30c-5p
Procedia PDF Downloads 267
1484 Evaluation of Hepatic Metabolite Changes for Differentiation Between Non-Alcoholic Steatohepatitis and Simple Hepatic Steatosis Using Long Echo-Time Proton Magnetic Resonance Spectroscopy
Authors: Tae-Hoon Kim, Kwon-Ha Yoon, Hong Young Jun, Ki-Jong Kim, Young Hwan Lee, Myeung Su Lee, Keum Ha Choi, Ki Jung Yun, Eun Young Cho, Yong-Yeon Jeong, Chung-Hwan Jun
Abstract:
Purpose: To assess hepatic metabolite changes for the differentiation of non-alcoholic steatohepatitis (NASH) from simple steatosis using proton magnetic resonance spectroscopy (1H-MRS) in both humans and an animal model. Methods: The local institutional review board approved this study, and subjects gave written informed consent. 1H-MRS measurements were performed on a localized voxel of the liver using a point-resolved spectroscopy (PRESS) sequence, and the hepatic metabolites alanine (Ala), lactate/triglyceride (Lac/TG), and TG were analyzed in the NASH, simple steatosis, and control groups. Group differences were tested with ANOVA and Tukey's post-hoc tests, and diagnostic accuracy was tested by calculating the area under the receiver operating characteristic (ROC) curve. The associations between metabolite concentrations and pathologic grades or non-alcoholic fatty liver disease (NAFLD) activity scores were assessed by Pearson's correlation. Results: Patients with NASH showed elevated Ala (p < 0.001), Lac/TG (p < 0.001), and TG (p < 0.05) concentrations compared with patients who had simple steatosis and healthy controls. NASH patients had higher levels of Ala (mean ± SEM, 52.5 ± 8.3 vs. 2.0 ± 0.9; p < 0.001) and Lac/TG (824.0 ± 168.2 vs. 394.1 ± 89.8; p < 0.05) than those with simple steatosis. The area under the ROC curve for distinguishing NASH from simple steatosis was 1.00 (95% CI: 1.00, 1.00) with Ala and 0.782 (95% CI: 0.61, 0.96) with Lac/TG. The Ala and Lac/TG levels correlated well with steatosis grade, lobular inflammation, and NAFLD activity scores. The metabolic changes in humans were reproducible in a mouse model induced by streptozotocin injection and a high-fat diet. Conclusion: 1H-MRS would be useful for differentiating NASH from simple hepatic steatosis.
Keywords: non-alcoholic fatty liver disease, non-alcoholic steatohepatitis, 1H MR spectroscopy, hepatic metabolites
Procedia PDF Downloads 325
1483 Exploratory Factor Analysis of Natural Disaster Preparedness Awareness of Thai Citizens
Authors: Chaiyaset Promsri
Abstract:
Based on a synthesis of the related literature, this research identified thirteen dimensions involved in the development of natural disaster preparedness awareness, including hazard knowledge, hazard attitude, training for disaster preparedness, rehearsal and practice for disaster preparedness, cultural development for preparedness, public relations and communication, storytelling, disaster awareness games, simulation, past experience of natural disasters, information sharing with family members, and commitment to the community (time of living). A 40-item natural disaster preparedness awareness questionnaire was developed based on these dimensions. Data were collected from 595 participants in the Bangkok metropolitan area and its vicinity. Cronbach's alpha was used to examine the internal consistency of the instrument; the reliability coefficient was .97, which is highly acceptable. Exploratory factor analysis with principal axis factoring was employed. The Kaiser-Meyer-Olkin measure of sampling adequacy was .973, indicating that the data represented a homogeneous collection of variables suitable for factor analysis. Bartlett's test of sphericity was significant for the sample (Chi-square = 23168.657, df = 780, p < .0001), indicating that the correlation matrix differed significantly from an identity matrix and was acceptable for EFA. Factor extraction using principal component analysis with varimax rotation was performed to determine the number of factors. The results revealed four factors with eigenvalues greater than 1, accounting for more than 60% of the cumulative variance. Factor #1 had an eigenvalue of 22.270, with factor loadings ranging from 0.626 to 0.760; it was named "Knowledge and Attitude of Natural Disaster Preparedness". Factor #2 had an eigenvalue of 2.491, with factor loadings from 0.596 to 0.696; it was named "Training and Development". Factor #3 had an eigenvalue of 1.821, with factor loadings from 0.643 to 0.777; it was named "Building Experiences about Disaster Preparedness". Factor #4 had an eigenvalue of 1.365, with factor loadings from 0.657 to 0.760; it was named "Family and Community". The results of this study support the reliability and construct validity of the natural disaster preparedness awareness instrument for use with populations similar to the sample employed.
Keywords: natural disaster, disaster preparedness, disaster awareness, Thai citizens
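The Kaiser-Guttman extraction step can be illustrated on synthetic data built to contain four latent factors, as below; the loadings, noise level, and sample are assumptions, not the survey responses.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic responses from 595 respondents to 40 items driven by 4 latent factors.
F = rng.normal(size=(595, 4))                    # latent factor scores
L = rng.normal(scale=0.8, size=(40, 4))          # hypothetical loadings
X = F @ L.T + rng.normal(size=(595, 40))         # items = factors + noise

# Kaiser-Guttman rule: retain components of the item correlation matrix
# whose eigenvalues exceed 1.
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
n_factors = int((eigvals > 1).sum())
explained = eigvals[:n_factors].sum() / eigvals.sum()
print(f"factors with eigenvalue > 1: {n_factors}, variance explained: {explained:.1%}")
```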
Procedia PDF Downloads 378
1482 Aerodynamic Analysis of Vehicles
Authors: E. T. L. Cöuras Ford, V. A. C. Vale, J. U. L. Mendes
Abstract:
Two of the principal objectives in the study of vehicle aerodynamics are safety and performance. These objectives can be reached through the development of devices that modify the airflow around the vehicle and through alterations in the shape of the external surfaces. The lower front profile of the vehicle, for instance, has a great influence on the aerodynamic drag coefficient (Cx) and, downstream, on a large part of the pressure distribution along the vehicle's surface. The objective of this work was to analyze the aerodynamic behavior of several types of trucks, based on wind tunnel experimentation, seeking to determine the aerodynamic efficiency of each one.
Keywords: aerodynamics, vehicles, wind tunnel, safety, performance
Procedia PDF Downloads 499
1481 Architectural Framework to Preserve Information of Cardiac Valve Control
Authors: Lucia Carrion Gordon, Jaime Santiago Sanchez Reinoso
Abstract:
Taking the relation between digital preservation and the health field as a case study, the architectural model helps us explain these definitions. The principal goal of data preservation is to keep information for the long term. Regarding medical information, in order to perform a heart transplant, physicians need to preserve the organ in an adequate way. This approach between the two perspectives, the medical and the technological, allows checking the similarities between their concepts of preservation. Digital preservation and medical advances are related at the same level as knowledge improvement.
Keywords: medical management, digital, data, heritage, preservation
Procedia PDF Downloads 420
1480 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime), and it has an interpretation as a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world index returns by PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim to capture the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance: it computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
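The factor-extraction step can be sketched with scikit-learn's PCA applied to a matrix of index returns; the return series below are random placeholders, and the three retained components stand in for the exogenous factors fed to the GARCHX recursion.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Stand-in: 1000 days of returns for 12 world indices (random placeholders).
returns = rng.normal(scale=0.01, size=(1000, 12))

# PCA of the cross-asset covariance yields uncorrelated factors ordered by
# explained variance; the first three serve as the GARCHX exogenous inputs.
pca = PCA(n_components=3)
factors = pca.fit_transform(returns)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("factor series shape:", factors.shape)  # (1000, 3)
```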
Procedia PDF Downloads 297
1479 Biases in Numerically Invariant Joint Signatures
Authors: Reza Aghayan
Abstract:
This paper illustrates that numerically invariant joint signatures suffer from biases in the resulting signatures. We classify the arising biases as Bias Type 1 and Bias Type 2 and show how they can be removed.
Keywords: Euclidean and affine geometries, differential invariant signature curves, numerically invariant joint signatures, numerical analysis, numerical bias, curve analysis
Procedia PDF Downloads 597
1478 The Hierarchical Model of Fitness Services Quality Perception in Serbia
Authors: Mirjana Ilic, Dragan Zivotic, Aleksandra Perovic, Predrag Gavrilovic
Abstract:
The perception of service quality depends on many factors, such as the area in which the services are provided, socioeconomic status, educational status, experience, age, and gender of consumers, among many others. For this reason, an instrument for establishing service quality perception that was developed in other areas and for other populations cannot simply be applied. The aim of the research was to construct an instrument for assessing quality perception in the field of fitness in Serbia. After analyzing the available literature and conducting a pilot study, 15 areas were isolated in which the perception of service quality could be observed. The areas included: material and technical basis, secondary facilities, coaches, programs, reliability, credibility, security, rapid response, compassion, communication, prices, satisfaction, loyalty, quality outcomes, and motives. These areas were covered by a questionnaire consisting of 100 items, where the number of items varied from area to area from 3 up to 11. The questionnaire was administered to 350 subjects of both genders (174 men and 176 women) aged from 18 to 68 years, all of whom had been beneficiaries of fitness services for at least 1 year. In each of the areas, an exploratory factor analysis was conducted using the principal components method. The number of significant factors was determined in accordance with the Kaiser-Guttman criterion, and the initial factor solutions were simplified using varimax rotation. The per-area analyses produced from 1 to 4 factors. Afterward, a factor analysis of the factor scores on the first principal component of each respondent in each of the analyzed areas was performed, and a factor structure was obtained with four latent dimensions interpreted as offer, relationship with the coaches, experience of quality, and initial impression. This factor structure was analyzed by hierarchical analysis of oblique factors, which in the second-order space produced a single factor interpreted as a general factor of service quality perception. The resulting questionnaire is an instrument that can help managers in the field of fitness optimize the development of their centers, raising the quality of services in line with consumers' needs and expectations.
Keywords: fitness, hierarchical model, quality perception, factor analysis
Procedia PDF Downloads 311
1477 Canada's "Flattened Curve": A Geospatial Temporal Analysis of Canada's Amelioration of the Sars-COV-2 Pandemic Through Coordinated Government Intervention
Authors: John Ahluwalia
Abstract:
As an affluent first-world nation, Canada took swift and comprehensive action during the outbreak of the SARS-CoV-2 (COVID-19) pandemic compared to other countries in the same socio-economic cohort. The United States has stumbled in overcoming obstacles most developed nations have faced, which has led to significantly more per capita cases and deaths. The initial outbreaks of COVID-19 occurred in the US and Canada within days of each other and posed similarly catastrophic potential threats to public health, the economy, and governmental stability. On a macro level, events that take place in the US have a direct impact on Canada: the two countries tend to enter and exit economic recessions at approximately the same time, they are each other's largest trading partners, and their currencies are inextricably linked. Why, then, has Canada not shared the same fate as the US (and many other nations) that have seen much worse outcomes in the COVID-19 pandemic? Variables intrinsic to Canada's national infrastructure have been instrumental in the country's efforts to flatten the curve of COVID-19 cases and deaths. Canada's coordinated multi-level governmental effort has allowed it to create and enforce policies related to COVID-19 at both the national and provincial levels. Canada's policy of universal healthcare is another variable: health care and public health measures are enforced at the provincial level, and it is within each province's jurisdiction to dictate standards for public safety based on scientific evidence. Rather than introducing confusion and the possibility of competition for resources such as PPE and vaccines, Canada's multi-level chain of government authority has provided consistent policies supporting national public health and local delivery of medical care. This paper will demonstrate that coordinated efforts at the provincial and federal levels have been the linchpin of Canada's relative success in containing the deadly spread of the COVID-19 virus.
Keywords: COVID-19, Canada, GIS, temporal analysis, ESRI
Procedia PDF Downloads 147
1476 Use of Multivariate Statistical Techniques for Water Quality Monitoring Network Assessment, Case of Study: Jequetepeque River Basin
Authors: Jose Flores, Nadia Gamboa
Abstract:
A proper water quality management requires the establishment of a monitoring network. Evaluation of the efficiency of water quality monitoring networks is therefore needed to ensure high-quality data collection for critical chemical quality parameters. Unfortunately, in some Latin American countries water quality monitoring programs are not sustainable in terms of recording historical data or covering environmentally representative sites, wasting time, money, and valuable information. In this study, multivariate statistical techniques, such as principal component analysis (PCA) and hierarchical cluster analysis (HCA), are applied to identify the most significant monitoring sites as well as the critical water quality parameters in the monitoring network of the Jequetepeque River basin in northern Peru. The Jequetepeque River basin, like others in Peru, experiences socio-environmental conflicts due to the economic activities developed in the area: water pollution by trace elements in the upper part of the basin is mainly related to mining activity, while agricultural land loss due to salinization is caused by the extensive use of groundwater in the lower part of the basin. Since the 1980s, water quality in the basin has been assessed discontinuously by public and private organizations, and recently the National Water Authority has established permanent water quality networks in 45 basins in Peru. Although many countries use multivariate statistical techniques for assessing water quality monitoring networks, these instruments have never been applied for that purpose in Peru. The main contribution of this study is therefore to demonstrate that multivariate statistical techniques can serve as an instrument for optimizing monitoring networks, using the smallest number of monitoring sites and the most significant water quality parameters, which would reduce cost concerns and improve water quality management in Peru. The main socio-economic activities in the basin and the principal stakeholders related to water management are also identified. Finally, water quality management programs are discussed in terms of their efficiency and sustainability.
Keywords: PCA, HCA, Jequetepeque, multivariate statistical
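The HCA step for grouping redundant monitoring sites might look like the following sketch using SciPy's hierarchical clustering; the site-by-parameter matrix is a random stand-in for the Jequetepeque data, and the choice of Ward linkage and three clusters is arbitrary.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

rng = np.random.default_rng(4)
# Hypothetical matrix: 15 monitoring sites x 10 water quality parameters,
# standardized so that no parameter dominates the distance metric.
sites = zscore(rng.normal(size=(15, 10)), axis=0)

# Ward linkage on Euclidean distances groups sites with similar quality
# profiles; sites that always cluster together are candidates for removal.
Z = linkage(sites, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster label per site:", labels)
```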
Procedia PDF Downloads 355
1475 Coffee Consumption and Glucose Metabolism: a Systematic Review of Clinical Trials
Authors: Caio E. G. Reis, Jose G. Dórea, Teresa H. M. da Costa
Abstract:
Objective: Epidemiological data show an inverse association between coffee consumption and the risk of type 2 diabetes mellitus. However, the clinical effects of coffee consumption on glucose metabolism biomarkers remain controversial. This paper therefore reviews clinical trials that evaluated the effects of coffee consumption on glucose metabolism. Research Design and Methods: We identified studies published until December 2014 by searching electronic databases and reference lists. We included randomized clinical trials in which the intervention group received caffeinated and/or decaffeinated coffee, the control group received water or placebo treatments, and biomarkers of glucose metabolism were measured. The Jadad score was applied to evaluate the quality of the studies, and studies that scored ≥ 3 points were considered for the analyses. Results: Seven clinical trials (237 subjects in total) were analyzed, involving healthy, overweight, and diabetic adults. The studies were divided into short-term (1 to 3 h) and long-term (2 to 16 weeks) studies. The results for the short-term studies showed that caffeinated coffee consumption may increase the area under the curve of the glucose response, while in the long-term studies caffeinated coffee may improve glycemic metabolism by reducing the glucose curve and increasing the insulin response. These results suggest that the benefits of coffee consumption occur in the long term, as has been shown by the reduction of type 2 diabetes mellitus risk in epidemiological studies. Nevertheless, until the relationship between long-term coffee consumption and type 2 diabetes mellitus is better understood and any mechanism involved is identified, it is premature to make claims about coffee preventing type 2 diabetes mellitus. Conclusion: The findings suggest that caffeinated coffee may impair glucose metabolism in the short term, but in the long term the studies indicate a reduction of type 2 diabetes mellitus risk. More clinical trials with comparable methodology are needed to unravel this paradox.
Keywords: coffee, diabetes mellitus type 2, glucose, insulin
Procedia PDF Downloads 466
1474 A Single-Channel BSS-Based Method for Structural Health Monitoring of Civil Infrastructure under Environmental Variations
Authors: Yanjie Zhu, André Jesus, Irwanda Laory
Abstract:
Structural health monitoring (SHM), involving data acquisition, data interpretation, and decision-making systems, aims to continuously monitor the structural performance of civil infrastructure under various in-service circumstances. The main value and purpose of SHM is identifying damage through the data interpretation system. Research on SHM has expanded in recent decades, and a large volume of data is recorded every day owing to the dramatic development of sensor techniques and progress in signal processing. However, efficient and reliable data interpretation for damage detection under environmental variations is still a big challenge: structural damage might be masked because variations in the measured data can be the result of environmental variations. This research reports a novel method based on single-channel blind signal separation (BSS), which extracts environmental effects from measured data directly, without any prior knowledge of the structure's loading and environmental conditions. Despite successful applications in audio processing and biomedical research, BSS has never been used to detect damage under varying environmental conditions. The proposed method optimizes and combines ensemble empirical mode decomposition (EEMD), principal component analysis (PCA), and independent component analysis (ICA) to separate structural responses due to different loading conditions from a single-channel input signal; the ICA is applied to the dimension-reduced output of the EEMD. A numerical simulation of a truss bridge, inspired by the New Joban Line Arakawa Railway Bridge, is used to validate the method. All results demonstrate that the single-channel BSS-based method can recover temperature effects from a mixed structural response recorded by a single sensor with convincing accuracy. This will be the foundation of further research on direct damage detection under varying environments.
Keywords: damage detection, ensemble empirical mode decomposition (EEMD), environmental variations, independent component analysis (ICA), principal component analysis (PCA), structural health monitoring (SHM)
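A rough sketch of the three-stage pipeline (EEMD, then PCA, then ICA) is given below, assuming the PyEMD package (installed as EMD-signal) for the decomposition; the single-channel signal is synthetic, with a slow sinusoidal drift standing in for the temperature effect, and the component counts are arbitrary.

```python
import numpy as np
from PyEMD import EEMD                    # assumed: pip install EMD-signal
from sklearn.decomposition import PCA, FastICA

# Synthetic single-channel record: slow "temperature" drift + structural
# response + measurement noise.
t = np.linspace(0.0, 10.0, 2000)
rng = np.random.default_rng(5)
x = (0.5 * np.sin(2 * np.pi * 0.05 * t)      # slow environmental drift
     + np.sin(2 * np.pi * 3.0 * t)           # structural response
     + 0.1 * rng.normal(size=t.size))        # noise

# Step 1: EEMD turns the single channel into a multi-channel set of IMFs.
imfs = EEMD(trials=50).eemd(x, t)

# Step 2: PCA reduces the IMF set to its dominant components.
scores = PCA(n_components=3).fit_transform(imfs.T)

# Step 3: ICA looks for statistically independent sources, ideally isolating
# the environmental trend from the structural response.
sources = FastICA(n_components=3, random_state=0).fit_transform(scores)
print(imfs.shape, sources.shape)
```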
Procedia PDF Downloads 304
1473 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that use the large amount and variety of data generated during healthcare services every day; one of the significant advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to improve performance continuously. Healthcare systems and institutions can benefit significantly because the use of advanced technologies improves the efficiency and efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for one or more of the following specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into account the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct a device's approval; nevertheless, they are necessary to ensure performance, quality, and safety. At the same time, they can be a business opportunity if the manufacturer can define the appropriate regulatory strategy in advance. The abstract provides an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 88
1472 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match a target spectrum is a commonly used technique for estimating nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS), or the Conditional Spectrum (CS). Different sets of criteria exist among these methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and the corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on minimizing the error between the scaled median and the target spectrum, while the dispersion of the earthquake shaking is preserved over the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. To see the effect of different selection and scaling methodologies on fragility curve estimates, the results are compared with those obtained by the CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
Keywords: ground motion selection, scaling, uncertainty, fragility curve
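The scaling stage admits a simple closed form when the error is measured as squared differences of log spectral ordinates over the period interval; the sketch below uses that common formulation with hypothetical spectra, since the abstract does not give the paper's exact objective function.

```python
import numpy as np

def scale_factor(sa_record, sa_target):
    # Least-squares amplitude scale in log space: the factor minimizing the
    # sum of squared log differences between scaled record and target spectra.
    return np.exp(np.mean(np.log(sa_target) - np.log(sa_record)))

periods = np.array([0.2, 0.5, 1.0, 1.5, 2.0])          # period interval of interest (s)
sa_target = np.array([0.80, 0.60, 0.35, 0.22, 0.15])   # target median spectrum (g)
sa_record = np.array([0.50, 0.45, 0.20, 0.16, 0.10])   # record's 5%-damped spectrum (g)

s = scale_factor(sa_record, sa_target)
print(f"scale factor = {s:.2f}")
print("scaled record spectrum:", np.round(s * sa_record, 3))
```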
Procedia PDF Downloads 583
1471 Application of Raman Spectroscopy for Ovarian Cancer Detection: Comparative Analysis of Fresh, Formalin-Fixed, and Paraffin-Embedded Samples
Authors: Zeinab Farhat, Nicolas Errien, Romuald Wernert, Véronique Verriele, Frédéric Amiard, Philippe Daniel
Abstract:
Ovarian cancer, also known as the silent killer, is the fifth most common cancer among women worldwide, and its death rate is higher than that of other gynecological cancers. The low survival rate of women with high-grade serous ovarian carcinoma highlights the critical need for new methods for early detection and diagnosis of the disease. The aim of this study was to evaluate whether Raman spectroscopy combined with chemometric methods such as principal component analysis (PCA) could differentiate between cancerous and normal tissues across different sample types: paraffin-embedded, chemically deparaffinized, formalin-fixed, and fresh samples of the same normal and malignant ovarian tissue. The method was applied specifically to two critical spectral regions: the signature region (860-1000 cm⁻¹) and the high-frequency region (2800-3100 cm⁻¹). The mean spectra of paraffin-embedded normal and malignant tissues showed nearly identical intensities, whereas the mean spectra of normal and cancerous tissues from chemically deparaffinized, formalin-fixed, and fresh samples showed significant intensity differences. These spectral differences reflect variations in the molecular composition of the tissues, particularly in lipids and proteins. PCA, applied to distinguish cancerous from normal tissues, was performed on the whole spectra and on the selected regions. The PCA score plot of the paraffin-embedded samples shows considerable overlap between the two groups, while the PCA scores of the chemically deparaffinized, formalin-fixed, and fresh samples show good discrimination of tissue types. Our findings were validated by analyses of a set of samples whose status (normal or cancerous) was not previously known. The results of this study suggest that Raman spectroscopy combined with PCA has the capacity to provide clinically significant differentiation between normal and cancerous ovarian tissues.
Keywords: Raman spectroscopy, ovarian cancer, signal processing, Principal Component Analysis, classification
Procedia PDF Downloads 25
1470 Evaluation of the Antibacterial Effects of Turmeric Oleoresin, Capsicum Oleoresin and Garlic Essential Oil against Salmonella enterica Typhimurium
Authors: Jun Hyung Lee, Robin B. Guevarra, Jin Ho Cho, Bo-Ra Kim, Jiwon Shin, Doo Wan Kim, Young Hwa Kim, Minho Song, Hyeun Bum Kim
Abstract:
Salmonella is one of the most important swine pathogens, causing acute or chronic digestive diseases such as enteritis; the acute form of enteritis is common in young pigs of 2-4 months of age. Salmonellosis in swine imposes a huge economic burden on the swine industry by reducing production. Therefore, the swine industry should strive to decrease salmonellosis in pigs in order to reduce economic losses. We tested three types of natural plant extracts (PEs) to evaluate their antibacterial effects against Salmonella enterica Typhimurium isolated from a piglet with salmonellosis. The three PEs, turmeric oleoresin (containing 79-85% curcumin), capsicum oleoresin (containing 40-40.1% capsaicin), and garlic essential oil (100% natural garlic), were tested using the direct contact agar diffusion test, the minimum inhibitory concentration test, a growth curve assay, and a heat stability test. The tests were conducted with PEs at concentrations of 2.5%, 5%, and 10%. For the heat stability test, PEs at 10% concentration were incubated at 4, 20, 40, 60, 80, and 100 °C for 1 hour each; then the direct contact agar diffusion test was used. 0.5N HCl and 1X PBS were used as the positive and negative controls, and all experiments were performed in duplicate. In the direct contact agar diffusion test, garlic essential oil at 2.5%, 5%, and 10% concentrations showed inhibition zones of 1.5 cm, 2.7 cm, and 2.8 cm in diameter, compared to a 3.5 cm diameter for 0.5N HCl. The minimum inhibitory concentration of garlic essential oil was 2.5%. The growth curve assay showed that garlic essential oil significantly inhibited Salmonella growth after 4 hours, and the oil retained its ability to inhibit Salmonella growth after heat treatment at every temperature. Turmeric and capsicum oleoresins, however, did not significantly inhibit Salmonella growth in any of the tests. Although further in vivo tests will be needed to verify the effects of garlic essential oil for salmonellosis prevention in piglets, our results show that garlic essential oil could be used as a potential natural agent to prevent salmonellosis in swine.
Keywords: garlic essential oil, pig, salmonellosis, Salmonella enterica
Procedia PDF Downloads 173
1469 Impact of the Anthropic Action in the Desertification of Steppe in Algeria
Authors: Kadi-Hanifi Halima
Abstract:
Stipa tenacissima is a plant of great ecological value (against desertification) and economic importance (paper industry). It is also important for its pastoral value, owing to its inflorescence. It once occupied large areas between the Tellian Atlas and the Saharan Atlas; at present, these alfa areas have regressed considerably, with the regression estimated at 1% per year. The principal cause is human activity; drought is only an aggravating circumstance. The eradication of such a species would have serious consequences for the equilibrium of the whole steppe ecosystem. We have therefore considered it necessary and urgent to understand the alfa ecosystem in all its aspects (climatic, floristic, and edaphic); this diagnosis could guide actions in the fight against desertification.
Keywords: desertification, anthropic action, soils, Stipa tenacissima
Procedia PDF Downloads 312
1468 Evaluation of the Antibacterial Effects of Turmeric Oleoresin, Capsicum Oleoresin and Garlic Essential Oil against Shiga Toxin-Producing Escherichia coli
Authors: Jun Hyung Lee, Robin B. Guevarra, Jin Ho Cho, Bo-Ra Kim, Jiwon Shin, Doo Wan Kim, Young Hwa Kim, Minho Song, Hyeun Bum Kim
Abstract:
Colibacillosis is one of the major health problems in young piglets, ultimately resulting in their death, and it represents an important economic burden for the swine industry. Therefore, the swine industry needs to prevent colibacillosis in piglets in order to reduce economic losses. We tested three types of natural plant extracts (PEs) to evaluate their antibacterial effects against Shiga toxin-producing Escherichia coli (STEC) isolated from a piglet. The three PEs, turmeric oleoresin (containing 79-85% curcumin), capsicum oleoresin (containing 40-40.1% capsaicin), and garlic essential oil (100% natural garlic), were tested using the direct contact agar diffusion test, the minimum inhibitory concentration test, a growth curve assay, and a heat stability test. The tests were conducted with PEs at concentrations of 2.5%, 5%, and 10%. For the heat stability test, PEs at 10% concentration were incubated at 4, 20, 40, 60, 80, and 100 °C for 1 hour each, then the direct contact agar diffusion test was used. 0.5N HCl and 1X PBS were used as the positive and negative controls, and all experiments were performed in duplicate. In the direct contact agar diffusion test, garlic essential oil at 2.5%, 5%, and 10% concentrations showed inhibition zones of 1.1 cm, 3.0 cm, and 3.6 cm in diameter, compared to a 3.5 cm diameter for 0.5N HCl. The minimum inhibitory concentration of garlic essential oil was 2.5%. The growth curve assay showed that garlic essential oil significantly inhibited STEC growth after 4 hours, and the oil retained its ability to inhibit STEC growth after heat treatment at every temperature. Turmeric and capsicum oleoresins, however, did not significantly inhibit STEC growth in any of the tests. Although further tests using piglets will be required to evaluate the effects of garlic essential oil for colibacillosis prevention, our results show that garlic essential oil could be used as a potential natural agent to prevent colibacillosis in swine.
Keywords: garlic essential oil, pig, Colibacillosis, Escherichia coli
Procedia PDF Downloads 258
1467 The Factors of Supply Chain Collaboration
Authors: Ghada Soltane
Abstract:
The objective of this study was to identify factors impacting supply chain collaboration. A quantitative study was carried out on a sample of 84 Tunisian industrial companies. To verify the research hypotheses and test the direct effect of these factors on supply chain collaboration, multiple regression was performed using SPSS 26. The results show that four factors have meaningful, positive direct effects on supply chain collaboration: trust, engagement, information sharing, and information quality.
Keywords: supply chain collaboration, factors of collaboration, principal component analysis, multiple regression
Procedia PDF Downloads 49
1466 Identifying the Determinants of the Shariah Non-Compliance Risk via Principal Axis Factoring
Authors: Muhammad Arzim Naim, Saiful Azhar Rosly, Mohamad Sahari Nordin
Abstract:
The objective of this study is to investigate the factors behind the rise of Shariah non-compliance risk, which can cause Islamic banks to succumb to monetary loss. Prior literature has never analyzed such risk in detail, despite much of it arguing about the validity of some Shariah-compliant products. Shariah non-compliance risk in this context refers to the potential failure of a facility to withstand a court test, for example, when the banks bring it to court to seek compensation from defaulting clients. The risk may also arise if customers refuse to make the financing payments on the grounds of the validity of the contracts, for example, when a critical requirement of an Islamic contract, such as ownership, has been relinquished; this may lead the banks to suffer losses if the customer invalidates the contract through the court. The impact of Shariah non-compliance risk on Islamic banks is similar to that of the legal risks faced by conventional banks: both result in monetary losses. In the conventional banking environment, losses can take the form of payments to customers who win their cases, and these can be very large amounts. For Islamic banks, however, the subsequent impact can be even greater because it affects their reputation: if customers do not perceive them to be Shariah compliant, they will take their money and bank it elsewhere. This paper provides new insights into the risks faced by credit-intensive Islamic banks, extending knowledge of Shariah non-compliance risk by identifying the individual components that directly affect it, together with empirical evidence. Beyond the Islamic banking fraternity, regulators and policymakers should be able to use the findings of this paper to evaluate the components of Shariah non-compliance risk and take the necessary actions. The paper is based on Malaysia's Islamic banking practices, which may not be directly applicable to other jurisdictions. Even though this study focuses on the Bay Bithaman Ajil (BBA, i.e., sale with deferred payments) financing modality, its results may be applicable to other Islamic financing vehicles.
Keywords: Islamic banking, Islamic finance, Shariah non-compliance risk, Bay Bithaman Ajil (BBA), principal axis factoring
Procedia PDF Downloads 301
1465 Three-Dimensional Numerical Analysis of the Harmfulness of Defects in Oil Pipes
Authors: B. Medjadji, L. Aminallah, B. Serier, M. Benlebna
Abstract:
In this study, the finite element method in 3-D is used to calculate the J-integral for a semi-elliptical crack in a pipe subjected to internal pressure. The stress-strain curve of the pipe was determined experimentally. The J-integral was calculated at two points of the crack front (Φ = 0 and Φ = π/2), and the effect of the crack configuration on the J-integral is analysed. The results show that an external longitudinal crack in a pipe is the most dangerous. They also show that increasing the applied pressure causes a remarkable increase in the J-integral. The effect of crack depth becomes important when the ratio between the crack depth and the pipe thickness (a/t) tends to 1.
Keywords: J-integral, pipeline, crack, FEM
Procedia PDF Downloads 409
1464 Observation of Critical Sliding Velocity
Authors: Visar Baxhuku, Halil Demolli, Alishukri Shkodra
Abstract:
This paper presents the monitoring of vehicle movement, namely the speed developed by vehicles while traveling through a given curve. The basic geometric data of the curve are measured for the purpose of calculating the critical sliding velocity. During the research, the speeds developed by passenger vehicles were measured under real road surface conditions: a dry road with average damage. After establishing the values, the analysis was carried out with respect to the safety of movement in the curve.
Keywords: critical sliding velocity, moving velocity, curve, passenger vehicles
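The critical sliding velocity follows from balancing the centripetal demand against the available tire-road friction; the sketch below uses the standard curve formula with superelevation, and the radius and friction coefficient are hypothetical values.

```python
import math

def critical_sliding_velocity(radius_m, mu, superelevation=0.0):
    """Speed (m/s) at which lateral friction is exhausted in a curve:
    v = sqrt(g * R * (mu + e) / (1 - mu * e)); with e = 0 this reduces
    to the flat-curve form v = sqrt(mu * g * R)."""
    g = 9.81
    e = superelevation
    return math.sqrt(g * radius_m * (mu + e) / (1 - mu * e))

# Hypothetical curve: 120 m radius, dry asphalt with average damage (mu ~ 0.6).
v = critical_sliding_velocity(120.0, 0.6)
print(f"critical sliding velocity ~ {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```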
Procedia PDF Downloads 420