Search results for: multivariate adaptive regression splines pulmonary function test
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16869

16479 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage

Authors: Andrew Laming, John Hattie, Mark Wilson

Abstract:

Nine Queensland independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean=27) and any ATAR graduates without NAPLAN data (mean=20). Based on Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. An additional 173 students (14%) did not release their ATARs to their school, requiring schools to infer this data. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R2=0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R2=0.97). Schools achieved a mean cohort gain of two ATAR ranks, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R2 was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive for laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile-shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, this metric cannot discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data are available, this analysis offers useful analytics for educators and reasonable predictive power when counselling subsequent cohorts about their ATAR prospects.
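
The core of the method above is simple percentile arithmetic. A minimal Python sketch, assuming simulated statewide score distributions and hypothetical domain names; the correction for ATAR participation described in the abstract is omitted:

```python
import numpy as np
from scipy.stats import percentileofscore

def percentile_shift(domain_scores, final_atar, statewide_scores):
    """Gain for one student: the strongest NAPLAN domain converted to a
    statewide percentile, subtracted from the final ATAR (the paper's
    further correction for ATAR participation is omitted here)."""
    percentiles = [
        percentileofscore(statewide_scores[d], s)  # statewide percentile
        for d, s in domain_scores.items()
    ]
    return final_atar - max(percentiles)           # positive => gain

# Toy usage with simulated statewide distributions
rng = np.random.default_rng(0)
statewide = {"numeracy": rng.normal(500, 70, 10_000),
             "reading": rng.normal(500, 70, 10_000)}
student = {"numeracy": 620, "reading": 575}
print(round(percentile_shift(student, 92.0, statewide), 1))
```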

Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean

Procedia PDF Downloads 39
16478 Complete Ensemble Empirical Mode Decomposition with Adaptive Noise Temporal Convolutional Network for Remaining Useful Life Prediction of Lithium Ion Batteries

Authors: Jing Zhao, Dayong Liu, Shihao Wang, Xinghua Zhu, Delong Li

Abstract:

Unmanned Underwater Vehicles generally operate in the deep sea, which has its own unique working conditions. Lithium-ion power batteries should have the necessary stability and endurance for use as an underwater vehicle's power source. Therefore, it is essential to accurately forecast how long lithium-ion batteries will last in order to maintain the system's reliability and safety. In order to model and forecast lithium battery Remaining Useful Life (RUL), this research proposes a model based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and a Temporal Convolutional Network (CEEMDAN-TCN). In this study, two datasets, NASA and CALCE, which differ in the fluctuation of their capacity data, are used to verify the model and examine the experimental results in order to demonstrate the generalizability of the approach. The experiments demonstrate the network structure's strong universality and its ability to achieve good fitting outcomes on the test set for various battery dataset types. The evaluation metrics reveal that the prediction performance of CEEMDAN-TCN is 25% to 35% better than that of a single neural network, proving that feature expansion and modal decomposition can both enhance the model's generalizability and be extremely useful in industrial settings.
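
A minimal sketch of the pipeline's shape, assuming the PyEMD package (pip install EMD-signal) for CEEMDAN and PyTorch for the TCN; layer sizes, the random stand-in capacity signal and the one-step-ahead output are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn
from PyEMD import CEEMDAN  # pip install EMD-signal

class TCNBlock(nn.Module):
    """One causal, dilated 1-D convolution with a residual connection."""
    def __init__(self, ch, k=3, dilation=1):
        super().__init__()
        self.pad = (k - 1) * dilation              # left-pad => causal
        self.conv = nn.Conv1d(ch, ch, k, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return self.relu(y + x)

class CEEMDANTCN(nn.Module):
    """Each IMF becomes an input channel; the TCN maps the multichannel
    sequence to a one-step-ahead capacity (RUL proxy) prediction."""
    def __init__(self, n_imfs, hidden=32, levels=4):
        super().__init__()
        self.inp = nn.Conv1d(n_imfs, hidden, 1)
        self.tcn = nn.Sequential(*[TCNBlock(hidden, dilation=2 ** i)
                                   for i in range(levels)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, n_imfs, seq)
        h = self.tcn(self.inp(x))
        return self.out(h[:, :, -1])               # predict from last step

capacity = torch.randn(256).double().numpy()       # stand-in capacity curve
imfs = CEEMDAN()(capacity)                         # (n_imfs, 256) expansion
x = torch.tensor(imfs, dtype=torch.float32).unsqueeze(0)
model = CEEMDANTCN(n_imfs=imfs.shape[0])
print(model(x).shape)                              # torch.Size([1, 1])
```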

Keywords: lithium-ion battery, remaining useful life, complete EEMD with adaptive noise, temporal convolutional net

Procedia PDF Downloads 117
16477 Matrix Completion with Heterogeneous Cost

Authors: Ilqar Ramazanli

Abstract:

The matrix completion problem has been studied broadly under many underlying conditions. The problem has been explored under adaptive or non-adaptive, exact or estimation, single-phase or multi-phase, and many other categories. In most of these cases, the observation cost of each entry is uniform and the same across the columns. However, in many real-life scenarios, we could expect elements from distinct columns or distinct positions to have different costs. In this paper, we explore this generalization under adaptive conditions. We approach the problem under two different cost models. In the first, entries from different columns have different observation costs, but within the same column each entry has a uniform cost. In the second, any two entries may have different observation costs, whether they lie in the same column or in different columns. We provide a complexity analysis of our algorithms together with tightness guarantees.

Keywords: matroid optimization, matrix completion, linear algebra, algorithms

Procedia PDF Downloads 77
16476 A Subband BSS Structure with Reduced Complexity and Fast Convergence

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed that uses a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive subband schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of the adaptive algorithm for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of orders lower than those of the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each subband than the input signal at full bandwidth, which can promote better rates of convergence.
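
The complexity and convergence mechanism can be illustrated with a toy subband adaptive-filtering setup: a two-band analysis bank, decimation by two, and a short LMS filter per subband. This is a hedged sketch of the general idea only; the paper's non-uniform bank, novel normalisation and the separation stage itself are not reproduced:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def lms(x, d, taps, mu):
    """Plain LMS: adapt w so that w*x tracks d; returns the error signal."""
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]                   # most recent samples first
        e[n] = d[n] - w @ u
        w += mu * e[n] * u
    return e

rng = np.random.default_rng(1)
x = rng.standard_normal(8000)                     # input signal
d = lfilter([0.4, -0.3, 0.2, 0.1], [1.0], x)      # unknown system output

low = firwin(65, 0.5)                             # two-band analysis bank
high = firwin(65, 0.5, pass_zero=False)
for name, h in (("low", low), ("high", high)):
    xs = lfilter(h, 1.0, x)[::2]                  # analysis filter + decimate
    ds = lfilter(h, 1.0, d)[::2]
    # half-rate subband: shorter filter and fewer updates than full band
    e = lms(xs, ds, taps=8, mu=0.01)
    print(f"{name}-band residual power: {np.mean(e[100:] ** 2):.4f}")
```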

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 555
16475 The Relationship between Coping Styles and Internet Addiction among High School Students

Authors: Adil Kaval, Digdem Muge Siyez

Abstract:

With the negative effects of internet use on a person's life, the use of the Internet has become an issue. This subject has mostly been studied as internet addiction. In the literature, it is noteworthy that some theoretical models have been proposed to explain the causes of internet addiction. In addition to these theoretical models, the coping style for stressful events may be a predictor of internet addiction. This study aimed to test, with logistic regression, the effect of high school students' coping styles on their internet addiction levels. The sample consisted of 770 Turkish adolescents (471 girls, 299 boys) selected from high schools in the 2017-2018 academic year in İzmir province. The Internet Addiction Test, the Coping Scale for Children and Adolescents, and a demographic information form were used in this study. The results of the logistic regression analysis indicated that the model of coping styles provides a statistically significant prediction of internet addiction. Gender does not predict whether or not a student is addicted to the internet. The active coping style has no effect on internet addiction levels, while the avoidant and negative coping styles do. The model correctly classifies 79.1% of internet addiction cases in this high school sample. The Nagelkerke pseudo-R2 indicated that the model accounted for 35% of the total variance. The results of this study on Turkish adolescents are similar to the results of other studies in the literature. It can be argued that avoidant and negative coping styles are important risk factors in the development of internet addiction.
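
For readers unfamiliar with the reported statistics, a small sketch of this kind of analysis using simulated data and statsmodels; the variable names, coefficients and the hand-rolled Nagelkerke pseudo-R2 are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 770                                   # sample size, as in the abstract
X = np.column_stack([
    rng.integers(0, 2, n),                # gender (0/1)
    rng.normal(0, 1, n),                  # active coping score
    rng.normal(0, 1, n),                  # avoidant coping score
    rng.normal(0, 1, n),                  # negative coping score
])
# Simulated outcome: only avoidant and negative styles carry signal
logit_true = -1.5 + 0.9 * X[:, 2] + 1.1 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary(xname=["const", "gender", "active",
                           "avoidant", "negative"]))

# Nagelkerke pseudo-R^2 from the fitted and null log-likelihoods
ll1, ll0 = model.llf, model.llnull
cox_snell = 1 - np.exp((2 / n) * (ll0 - ll1))
nagelkerke = cox_snell / (1 - np.exp((2 / n) * ll0))
print(f"Nagelkerke R^2 = {nagelkerke:.3f}")
```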

Keywords: adolescents, coping, internet addiction, regression analysis

Procedia PDF Downloads 149
16474 Science of Social Work: Recognizing Its Existence as a Scientific Discipline by a Method Triangulation

Authors: Sandra Mendes

Abstract:

Social Work has, over time, encountered varied demands in its field of action, providing frameworks of knowledge and praxis. Over the years, we have observed a transformation of society and, consequently, of the public with whom social work practitioners deal. Both training and the profession have needed to adapt and readapt their ways of doing, tying theory to action, while action unfolds the emancipation of new theories. The theoretical questioning of this subject draws on classical authors from the social sciences and contemporary authors of Social Work. In fact, both emphasize, in the design of social work, an integration and social cohesion function, creating a culture of action and theory and attributing to its method a relevant function, which shall be a promoter of social change in various dimensions of both individual and collective life, as well as of scientific knowledge. On the other hand, it is assumed that Social Work, through its professionalism and through the academy, is now closer to distinguishing itself from the other Social Sciences as an autonomous scientific field, being, however, at the center of power struggles. This paper seeks to fill the gap in the social work literature about the study of the scientific field of this area of knowledge.

Keywords: field theory, knowledge, science, social work

Procedia PDF Downloads 323
16473 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging findings or procedure details: AI in chest radiology detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, nodule two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from nontension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. It automatically localises vertebral segments, labels ribs and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation, providing percentages of tissues within defined attenuation (HU) ranges besides furnishing automated lung segmentation and lung volume information. It improves the quality of noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, and hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is assumed that the continuing progress of computerized systems and improvements in software algorithms will render AI the radiologist's second hand.

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 44
16472 South African Multiple Deprivation-Concentration Index Quantiles Differentiated by Components of Success and Impediment to Tuberculosis Control Programme Using Mathematical Modelling in Rural O. R. Tambo District Health Facilities

Authors: Ntandazo Dlatu, Benjamin Longo-Mbenza, Andre Renzaho, Ruffin Appalata, Yolande Yvonne Valeria Matoumona Mavoungou, Mbenza Ben Longo, Kenneth Ekoru, Blaise Makoso, Gedeon Longo Longo

Abstract:

Background: The gap between the complexities related to the integration of tuberculosis/HIV control and evidence-based knowledge motivated the initiation of the study. The objective of this study was therefore to explore correlations between national TB management guidelines, multiple deprivation indexes, quantiles, components and levels of the tuberculosis control programme using mathematical modelling in rural O.R. Tambo District health facilities, South Africa. Methods: The study design used mixed secondary data analysis and cross-sectional analysis between 2009 and 2013 across the O.R. Tambo District, Eastern Cape, South Africa, using univariate/bivariate analysis, linear multiple regression models, and multivariate discriminant analysis. Health inequality indicators and components of impediment to the tuberculosis control programme were evaluated. Results: In total, 62,400 records of TB notification were analyzed for the period 2009-2013. There was a significant but negative correlation between Financial Year Expenditure (r= -0.894; P= 0.041), seropositive HIV status (r= -0.979; P= 0.004), population density (r= -0.881; P= 0.048) and the number of TB defaulters among all TB cases. Unsuccessful TB programme control was shown through correlations between the numbers of new PTB smear-positive cases, TB defaulters among new smear-positive cases, TB failures among all TB cases, the pulmonary tuberculosis case-finding index and the deprivation-concentration-dispersion index. Successful TB programme control was shown through significant and negative associations between declining numbers of deaths from HIV/TB co-infection, TB deaths among all TB cases and the SMIAD gradient/deprivation-concentration-dispersion index. The multivariate linear model was summarized by an unadjusted r of 96%, an adjusted R2 of 95%, a standard error of estimate of 0.110, an R2 change of 0.959 and a significant variance change (P=0.004), predicting TB defaulters among all TB cases with the equation y = 8.558 - 0.979 x (number of HIV seropositive), after adjusting for confounding factors (PTB case-finding index, TB defaulters among new smear-positive cases, TB deaths among all TB cases, TB defaulters among all TB cases, and TB failures among all TB cases). HIV and TB deaths, as well as new PTB smear-positive cases, were identified by discriminant analysis as the most important, significant, and independent indicators discriminating the most deprived deprivation quintile from the other deprivation quintiles 2-5. Conclusion: Eliminating poverty factors such as overcrowding, lack of sanitation and an environment with the highest burden of HIV might end the TB threat in the O.R. Tambo District, Eastern Cape, South Africa. Furthermore, an ongoing, adequately budgeted, comprehensive, holistic and collaborative initiative towards the Sustainable Development Goals (SDGs) is necessary for the complete elimination of TB in the poor O.R. Tambo District.

Keywords: tuberculosis, HIV/AIDS, success, failure, control program, health inequalities, South Africa

Procedia PDF Downloads 138
16471 Monotone Rational Trigonometric Interpolation

Authors: Uzma Bashir, Jamaludin Md. Ali

Abstract:

This study is concerned with the visualization of monotone data using a piecewise C1 rational trigonometric interpolating scheme. Four positive shape parameters are incorporated in the structure of the rational trigonometric spline. Conditions on two of these parameters are derived to attain the monotonicity of monotone data, while the other two are left free. Figures are used widely to exhibit that the proposed scheme produces graphically smooth monotone curves.

Keywords: trigonometric splines, monotone data, shape preserving, C1 monotone interpolant

Procedia PDF Downloads 246
16470 Semi-Automatic Segmentation of Mitochondria on Transmission Electron Microscopy Images Using Live-Wire and Surface Dragging Methods

Authors: Mahdieh Farzin Asanjan, Erkan Unal Mumcuoglu

Abstract:

Mitochondria are cytoplasmic organelles of the cell which have a significant role in a variety of cellular metabolic functions. Mitochondria act as the power plants of the cell and are surrounded by two membranes. Significant morphological alterations are often due to changes in mitochondrial functions. A powerful technique for studying the three-dimensional (3D) structure of mitochondria and its alterations in disease states is electron microscope tomography. Detection of mitochondria in electron microscopy images is a challenging problem due to the presence of various subcellular structures and imaging artifacts. Another challenge is that each image typically contains more than one mitochondrion. Hand segmentation of mitochondria is tedious and time-consuming and also requires special knowledge about the mitochondria. Fully automatic segmentation methods lead to over-segmentation, and mitochondria are not segmented properly. Therefore, semi-automatic segmentation methods with minimal manual effort are required to edit the results of fully automatic segmentation methods. Here, two editing tools were implemented by applying spline-based surface dragging and interactive live-wire segmentation. These editing tools were applied separately to the results of fully automatic segmentation. 3D extensions of these tools were also studied and tested. Dice coefficients in 2D and 3D for surface dragging using splines were 0.93 and 0.92; for the live-wire method, they were 0.94 and 0.91, respectively. The root mean square symmetric surface distance values in 2D and 3D for surface dragging were 0.69 and 0.93; the same metrics for the live-wire tool were 0.60 and 2.11. Comparing the results of these editing tools with the results of the automatic segmentation method shows that the editing tools led to better results that were more similar to the ground truth image, but the required time was higher than the hand-segmentation time.
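
The two evaluation metrics used above can be stated compactly; a minimal NumPy/SciPy sketch, assuming binary masks and a simple distance-transform approach for the symmetric surface distance:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_points(mask):
    """Boundary pixels: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def rms_symmetric_surface_distance(a, b):
    """Root-mean-square of distances between the two mask surfaces."""
    sa, sb = surface_points(a.astype(bool)), surface_points(b.astype(bool))
    da = ndimage.distance_transform_edt(~sb)[sa]   # a-surface -> b-surface
    db = ndimage.distance_transform_edt(~sa)[sb]   # b-surface -> a-surface
    d = np.concatenate([da, db])
    return np.sqrt(np.mean(d ** 2))

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), bool); truth[22:42, 21:41] = True
print(f"Dice = {dice(pred, truth):.3f}, RMS-SSD = "
      f"{rms_symmetric_surface_distance(pred, truth):.2f} px")
```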

Keywords: medical image segmentation, semi-automatic methods, transmission electron microscopy, surface dragging using splines, live-wire

Procedia PDF Downloads 142
16469 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decrease of global fossil fuel deposits and the environmental air pollution caused by these fuels have compounded the challenges the world faces due to its need for energy. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have proven reliable but are more demanding, costly and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module from the iC IR 7.0 software. Fifteen samples of known concentrations were used for the modelling, taken in duplicate for model calibration and cross-validation; data were pre-processed using mean centering and variance scaling, a square-root spectrum transformation and solvent subtraction. These pre-processing methods improved the performance indexes from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999 for RMSEC, RMSECV, RMSEP and R2Cum, respectively. The R2 values of 1 (training), 0.9918 (test) and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two were quite close at concentrations above 18%. The software eliminated the complexity of Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and lab scale.
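
A minimal chemometrics sketch of the multivariate-calibration step, approximating the vendor's iC Quant workflow with scikit-learn's PLS regression on synthetic spectra; the spectra, component count and figures of merit here are assumptions for illustration only:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(3)
n_samples, n_wavenumbers = 15, 300        # 15 standards, as in the abstract
conc = np.linspace(0, 30, n_samples)      # known concentrations, %

# Synthetic spectra: one concentration-dependent Gaussian band plus noise
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 120) / 8) ** 2)
spectra = (np.outer(conc, peak)
           + 0.05 * rng.standard_normal((n_samples, n_wavenumbers)))

pls = PLSRegression(n_components=3, scale=True)   # centering/scaling built in
pls.fit(spectra, conc)
pred_cv = cross_val_predict(pls, spectra, conc, cv=5).ravel()

rmsec = np.sqrt(mean_squared_error(conc, pls.predict(spectra).ravel()))
rmsecv = np.sqrt(mean_squared_error(conc, pred_cv))
print(f"RMSEC={rmsec:.3f}  RMSECV={rmsecv:.3f}  "
      f"R2(CV)={r2_score(conc, pred_cv):.4f}")
```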

Keywords: biodiesel, calibration, chemometrics, methanolysis, multivariate analysis, transesterification, FTIR

Procedia PDF Downloads 127
16468 Physical Function and Physical Activity Preferences of Elderly Individuals Admitted for Elective Abdominal Surgery: A Pilot Study

Authors: Rozelle Labuschagne, Ronel Roos

Abstract:

Individuals often experience a reduction in physical function, quality of life and basic activities of daily living after surgery. This is especially true for high-risk patients, particularly elderly and frail individuals. Not much is known about the physical function, physical activity preferences and factors associated with the six-minute walk test of elderly individuals undergoing elective abdominal surgery in South Africa. Such information is important for designing effective prehabilitation physiotherapy programs prior to elective surgery. The purpose of the study was to describe the demographic profile and physical function of elderly patients who would undergo elective surgery and to determine factors associated with their six-minute walk test distance findings. A cross-sectional descriptive study of elderly patients older than 60 years of age who would undergo elective abdominal surgery was conducted with consecutive sampling at a private hospital in Pretoria, South Africa. Participants' demographics were collected and physical function was assessed with the Functional Comorbidity Index (FCI), De Morton Mobility Index (DEMMI), Lawton-Brody Instrumental Activities of Daily Living Scale (IADL) and six-minute walk test (6MWT). Descriptive and inferential statistics were used for data analysis with IBM SPSS 25. A p-value ≤ 0.05 was deemed statistically significant. The pilot study consisted of 12 participants (female: n=11, 91.7%; male: n=1, 8.3%) with a mean age of 65.8 (±4.5) years and a body mass index of 28 (±4.2) kg/m2, with one (8.3%) participant being a current smoker and four (33.3%) participants having a smoking history. Nine (75%) participants lived independently at home and three (25%) had caregivers. Participants reported walking (n=6, 50%), stretching exercises (n=1, 8.3%), household chores and gardening (n=2, 16.7%), and biking/swimming/running (n=1, 8.3%) as physical activity preferences. Physical function findings of the sample were: mean FCI score 3 (±1.1), DEMMI score 81.1 (±14.9), IADL 95 (±17.3), 6MWT 435.50 (IQR 364.75-458.50), with percentage 6MWT distance achieved 81.8% (IQR 64.4%-87.5%). A strong negative correlation was observed between 6MWT distance walked and FCI (r = -0.729, p=0.007). The majority of study participants reported incorporating some form of physical activity into their daily life as exercise. Most participants did not achieve their predicted 6MWT distance, indicating less than optimal levels of physical function capacity. The number of comorbidities, as determined by the FCI, was associated with the distance that participants could walk in the 6MWT. The results of this pilot study could be used to indicate which elderly individuals would benefit most from a pre-surgical rehabilitation program. The main goal of such a program would be to improve physical function capacity as measured by the 6MWT. Surgeons could refer patients based on age and number of comorbidities, as determined by the FCI, to potentially improve surgical outcomes.

Keywords: abdominal surgery, elderly, physical function, six-minute walk test

Procedia PDF Downloads 173
16467 Multivariate Assessment of Mathematics Test Scores of Students in Qatar

Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski

Abstract:

Data on various aspects of education are collected regularly at the institutional and government level. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is the PISA study, which collects data from several countries when students are approximately 15 years of age and enables comparisons of performance in science, mathematics and English between countries, as well as ranking of countries based on performance in these standardised tests. As well as student and school outcomes based on the tests taken as part of the PISA study, a wealth of other data is collected, including parental demographics and data related to teaching strategies used by educators. Overall, an abundance of educational data is available which has the potential to be used to help improve educational attainment and the teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and is used in the present study to help develop profiles of students based on performance in mathematics, using data obtained from the PISA study.

Keywords: cluster analysis, education, mathematics, profiles

Procedia PDF Downloads 103
16466 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results which aim to predict the presence or absence probability of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and since it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Besides, since the estimated ROC curve is jagged while the true ROC curve is smooth, it underestimates the true ROC curve. Because the true ROC curve is assumed to be smooth, several smoothing methods have been explored. These include using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum likelihood, or using smooth versions of the empirical distribution function. In the present paper, we propose a smooth ROC curve estimation based on a boundary-corrected kernel function and compare the performances of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the performances of the different methods under different scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
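
A small sketch contrasting the empirical ROC curve with a kernel-smoothed one, assuming Gaussian kernels and a rule-of-thumb bandwidth; the paper's boundary-corrected kernel is not reproduced:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
healthy = rng.normal(0.0, 1.0, 60)        # non-diseased test scores
diseased = rng.normal(1.2, 1.0, 60)       # diseased test scores

def smooth_cdf(points, data, h):
    """Gaussian-kernel-smoothed CDF evaluated at `points`."""
    return norm.cdf((points[:, None] - data[None, :]) / h).mean(axis=1)

bw = lambda x: 1.06 * x.std() * len(x) ** (-1 / 5)   # rule-of-thumb bandwidth
thresholds = np.linspace(-4, 6, 400)
fpr_s = 1 - smooth_cdf(thresholds, healthy, bw(healthy))
tpr_s = 1 - smooth_cdf(thresholds, diseased, bw(diseased))

# Empirical (step-function) ROC over the same threshold grid
fpr_e = np.array([(healthy >= t).mean() for t in thresholds])
tpr_e = np.array([(diseased >= t).mean() for t in thresholds])

auc = lambda fpr, tpr: np.trapz(tpr[::-1], fpr[::-1])  # integrate over FPR
print(f"empirical AUC ~ {auc(fpr_e, tpr_e):.3f}, "
      f"smoothed AUC ~ {auc(fpr_s, tpr_s):.3f}")
```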

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 129
16465 Climate-Smart Agriculture Technologies and Determinants of Farmers’ Adoption Decisions in the Great Rift Valley of Ethiopia

Authors: Theodrose Sisay, Kindie Tesfaye, Mengistu Ketema, Nigussie Dechassa, Mezegebu Getnet

Abstract:

Agriculture is a sector that is very vulnerable to the effects of climate change and contributes to anthropogenic greenhouse gas (GHG) emissions in the atmosphere. By lowering emissions and adjusting to the change, it can also help mitigate climate change. Utilizing Climate-Smart Agriculture (CSA) technologies that can sustainably boost productivity, improve resilience, and lower GHG emissions is therefore crucial. This study sought to identify the CSA technologies used by farmers and to assess adoption levels and the factors that influence them. A cross-sectional survey was carried out to gather information from 384 smallholder farmers in the Great Rift Valley (GRV) of Ethiopia. Data were analysed using percentages, chi-square tests, t-tests, and a multivariate probit model. Results showed that crop diversification, agroforestry, and integrated soil fertility management were the most widely practiced technologies. The chi-square and t-test results confirmed significant differences between adopters and non-adopters across various attributes: households whose heads were older, had higher incomes, greater credit access, knowledge of the climate, better training, better education, larger farms, and more frequent interactions with extension specialists were positively and significantly associated with the adoption of CSA technologies. The model results showed that age, sex, and education of the head, farmland size, livestock ownership, income, access to credit, climate information, training, and extension contact influenced the selection of CSA technologies. Therefore, effective action must be taken to remove barriers to the adoption of CSA technologies, and taking these adoption factors into account in policy and practice is anticipated to support smallholder farmers in adapting to climate change while lowering emissions.

Keywords: climate change, climate-smart agriculture, smallholder farmers, multivariate probit model

Procedia PDF Downloads 94
16464 Effect of Cardio-Specific Overexpression of MUL1, a Mitochondrial Protein on Myocardial Function

Authors: Ximena Calle, Plinio Cantero-López, Felipe Muñoz-Córdova, Mayarling-Francisca Troncoso, Sergio Lavandero, Valentina Parra

Abstract:

MUL1, a mitochondrial E3 ubiquitin ligase anchored to the outer mitochondrial membrane, is highly expressed in the heart. MUL1 is involved in multiple biological pathways associated with mitochondrial dynamics. Increased MUL1 affects the balance between fission and fusion, thereby affecting mitochondrial function, which plays a crucial role in myocardial function. It is therefore of interest to evaluate the effect of cardiac-specific overexpression of MUL1 on myocardial function. Aim: To determine heart functionality in a mouse model with cardio-specific overexpression of the MUL1 protein. Methods and Results: Male C57BL/Tg transgenic mice with cardiomyocyte-specific overexpression of MUL1 (n=10) and controls (n=4) were evaluated at 12, 27, and 35 weeks of age. A glucose tolerance curve was determined after a 6-hour fast to assess metabolic capacity, a treadmill test was performed, and systolic and diastolic pressures were evaluated with mouse tail-cuff blood pressure equipment. The glucose tolerance curve and the treadmill test demonstrated no significant changes between groups. However, substantial changes in diastolic function were observed by ultrasound and by determination of cardiac hypertrophy proteins by western blot. Conclusions: Cardio-specific overexpression of MUL1 in mice without any treatment affects diastolic cardiac function, showing the important role played by MUL1 in the heart. Future research should evaluate the effect of cardiomyocyte-specific overexpression of MUL1 under pathological conditions such as a high-fat diet, one of the main risk factors for cardiovascular disease.

Keywords: diastolic dysfunction, cardiac hypertrophy, mitochondrial E3 ubiquitin ligase 1, MUL1

Procedia PDF Downloads 52
16463 Implementation of Model Reference Adaptive Control in Tuning of Controller Gains for Following-Vehicle System with Fixed Time Headway

Authors: Fatemeh Behbahani, Rubiyah Yusof

Abstract:

To avoid collisions between a following vehicle and the vehicle in front, it is vital to keep an appropriate, safe spacing between both vehicles at all speeds. Therefore, the following vehicle needs exact information regarding the speed of, and spacing to, the preceding vehicle. This project simulates the tuning of controller gains for a vehicle-following system through the selected control strategy, spacing control policy and fixed-time headway policy. In addition, the paper simulates and designs an adaptive gain controller for a road-vehicle-following system which uses information on the spacing, velocity and acceleration of a preceding vehicle in the proposed one-vehicle look-ahead strategy. The mathematical model is derived using Kirchhoff's and Newton's laws, and its stability is simulated. A trial-and-error method was used to obtain a suitable value of controller gain; however, the adaptive controller system was able to optimize the gain value automatically. Model Reference Adaptive Control (MRAC) is designed and utilized based firstly on the Gradient approach and secondly on the Lyapunov approach, which accounts for stability. The Gradient approach was found to improve the best value of gain in the controller system with fixed-time headway.
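
The Gradient (MIT-rule) mechanism can be shown on a first-order toy plant standing in for the vehicle-following loop; the plant, gains and signals below are illustrative assumptions, not the paper's vehicle model:

```python
import numpy as np

dt, T = 0.01, 40.0
t = np.arange(0, T, dt)
r = np.sign(np.sin(0.2 * t))                 # square-wave speed command

a, k = 1.0, 2.0                              # plant: dy/dt = -a*y + k*u
k0 = 1.0                                     # model: dym/dt = -a*ym + k0*r
gamma = 0.5                                  # adaptation gain

y = ym = theta = 0.0
theta_hist = np.zeros_like(t)
for i, ri in enumerate(r):
    u = theta * ri                           # adjustable feedforward gain
    y += dt * (-a * y + k * u)               # Euler step, plant
    ym += dt * (-a * ym + k0 * ri)           # Euler step, reference model
    e = y - ym                               # tracking error
    theta += dt * (-gamma * e * ym)          # MIT rule: dtheta/dt = -g*e*ym
    theta_hist[i] = theta

print(f"final gain {theta_hist[-1]:.3f} (ideal k0/k = {k0 / k:.3f})")
```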

Keywords: one-vehicle look-ahead, model reference adaptive, stability, tuning gain controller, MRAC

Procedia PDF Downloads 214
16462 Statistical Analysis of Rainfall Change over the Blue Nile Basin

Authors: Hany Mustafa, Mahmoud Roushdi, Khaled Kheireldin

Abstract:

Rainfall variability is an important feature of semi-arid climates. Climate change is very likely to increase the frequency, magnitude, and variability of extreme weather events such as droughts, floods, and storms. The Blue Nile Basin is facing extreme climate-change-related events such as floods and droughts, with possible impacts expected on ecosystems, livelihoods, agriculture, livestock, and biodiversity. Rainfall variability is a threat to food production in the Blue Nile Basin countries. This study investigates the long-term variations and trends of seasonal and annual precipitation over the Blue Nile Basin for a 102-year period (1901-2002). Statistical trend analyses of six precipitation series were performed with the nonparametric Mann-Kendall test and Sen's slope estimator. In addition, four statistical absolute homogeneity tests were applied to test the homogeneity of the rainfall data: the Standard Normal Homogeneity Test, the Buishand range test, the Pettitt test and the von Neumann ratio test, using XLSTAT software; results with p-values less than alpha = 0.05 were considered significant. The percentages of significant trends obtained for each parameter in the different seasons are presented. The study recommends that adaptation strategies be streamlined into relevant policies, enhancing local farmers' adaptive capacity to face future climate change effects.
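
Both trend statistics are short enough to state directly; a minimal NumPy/SciPy implementation (Mann-Kendall without tie correction, plus Sen's slope) on a synthetic rainfall series standing in for the basin data:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """MK statistic S, normal-approximation Z and two-sided p (no ties)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes."""
    n = len(x)
    return np.median([(x[j] - x[i]) / (j - i)
                      for i in range(n - 1) for j in range(i + 1, n)])

rng = np.random.default_rng(11)
years = np.arange(1901, 2003)                     # the 102-year window
rain = 1200 - 0.8 * (years - 1901) + rng.normal(0, 60, len(years))

s, z, p = mann_kendall(rain)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.4f}, "
      f"Sen's slope={sens_slope(rain):.2f} mm/yr")
```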

Keywords: Blue Nile basin, climate change, Mann-Kendall test, trend analysis

Procedia PDF Downloads 510
16461 Topical Nonsteroidal Anti-Inflammatory Eye Drops and Oral Acetazolamide for Macular Edema after Uncomplicated Phacoemulsification: Outcome and Predictors of Non-Response

Authors: Wissam Aljundi, Loay Daas, Yaser Abu Dail, Barbara Käsmann-Kellner, Berthold Seitz, Alaa Din Abdin

Abstract:

Purpose: To investigate the effectiveness of nonsteroidal anti-inflammatory eye drops (NSAIDs) combined with oral acetazolamide for postoperative macular edema (PME) after uncomplicated phacoemulsification (PE) and to identify predictors of non-response. Methods: We analyzed data from uncomplicated PE procedures and identified eyes with PME. First-line therapy included topical NSAIDs combined with oral acetazolamide. In cases of non-response, triamcinolone was administered subtenonally. Outcome measures included best-corrected visual acuity (BCVA) and central macular thickness (CMT). Results: 94 eyes out of 9750 uncomplicated PE procedures developed PME, of which 60 eyes were included. Follow-ups occurred 6.4±1.8, 12.5±3.7, and 18.6±6.0 weeks after diagnosis. BCVA and CMT improved significantly at all follow-ups. 40 eyes responded to first-line therapy at the first follow-up (G1). The remaining 20 eyes showed no response and required subtenon triamcinolone (G2); of these, 11 eyes showed complete regression at the second follow-up and 4 eyes at the third follow-up. 5 eyes showed no response and required intravitreal injection. A multivariate linear regression model showed that diabetes mellitus (DM) and increased cumulative dissipated energy (CDE) are predictors of non-response. Conclusion: Topical NSAIDs with acetazolamide resulted in complete regression of PME in 67% of all cases. DM and increased CDE might be considered predictors of non-response to this treatment.

Keywords: postoperative macular edema, intravitreal injection, cumulative dissipated energy, Irvine-Gass syndrome, pseudophakia

Procedia PDF Downloads 94
16460 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which increases exponentially, with traditional technologies. Hadoop is a new technology that makes that possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop technology. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis with different sizes of actual data. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop approach with the lm function and the biglm package based on bigmemory. The results showed that our RHadoop approach was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the size of the data increases.

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 412
16459 UML Model for Double-Loop Control Self-Adaptive Braking System

Authors: Heung Sun Yoon, Jong Tae Kim

Abstract:

In this paper, we present an activity diagram model for a double-loop control self-adaptive braking system. Since activity diagrams help to improve the visibility of self-adaptation, we can easily find where improvement is needed in the double-loop control. Double-loop control is adopted since design conditions and actual conditions can differ; the system is reconfigured at runtime by using double-loop control. We simulated to verify and validate our model using MATLAB, comparing a single-loop control model with a double-loop control model. Simulation results show that double-loop control provides more consistent brake power control than single-loop control.

Keywords: activity diagram, automotive, braking system, double-loop, self-adaptive, UML, vehicle

Procedia PDF Downloads 391
16458 A Comprehensive Review of Adaptive Building Energy Management Systems Based on Users’ Feedback

Authors: P. Nafisi Poor, P. Javid

Abstract:

Over the past few years, the idea of adaptive buildings and, specifically, adaptive building energy management systems (ABEMS) has become popular. Well-performed energy management creates a balance between energy consumption and user comfort; therefore, in new energy management models, efficient energy consumption is not the sole factor and the users' comfort is also considered in the calculations. One of the main ways of measuring this factor is by analyzing user feedback on the conditions to understand whether users are satisfied or not. This paper provides a comprehensive review of recent approaches to energy management systems based on users' feedback and subsequently compares them premised upon their efficiency and accuracy, to understand which approaches were more accurate and which resulted in a more efficient way of minimizing energy consumption while maintaining users' comfort. It was concluded that the highest accuracy rate among the presented works was 95% in determining satisfaction, and up to 51.08% energy savings can be achieved without disturbing users' comfort. Considering the growing interest in designing and developing adaptive buildings, these studies can support diverse inquiries about this subject and can be used as a resource to support studies and research towards efficient energy consumption while maintaining the comfort of users.

Keywords: adaptive buildings, energy efficiency, intelligent buildings, user comfortability

Procedia PDF Downloads 111
16457 Optimization of Slider Crank Mechanism Using Design of Experiments and Multi-Linear Regression

Authors: Galal Elkobrosy, Amr M. Abdelrazek, Bassuny M. Elsouhily, Mohamed E. Khidr

Abstract:

Crankshaft length, connecting rod length, crank angle, engine rpm, cylinder bore, mass of piston and compression ratio are the inputs that control the performance of the slider-crank mechanism and hence its efficiency. Several combinations of these seven inputs are used and compared. The engine torque predicted by the simulation is analyzed through two different regression models, with and without interaction terms, developed according to multi-linear regression using LU decomposition to solve the system of algebraic equations. These models are validated. A regression model in the seven inputs including their interaction terms lowered the polynomial degree from 3rd to 1st and produced valid predictions and stable explanations.
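
A small sketch of the described fitting step: a first-degree model in seven coded factors with all two-way interactions, solved through the normal equations with an LU decomposition; the data below are random placeholders, not the engine simulation:

```python
import numpy as np
from itertools import combinations
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n, p = 60, 7                                  # 60 runs, 7 factors
X = rng.uniform(-1, 1, (n, p))                # coded factor levels
torque = (X @ rng.normal(size=p)              # placeholder response
          + 0.5 * X[:, 0] * X[:, 3]
          + rng.normal(0, 0.1, n))

# Design matrix: intercept + main effects + all two-way interactions
inter = np.column_stack([X[:, i] * X[:, j]
                         for i, j in combinations(range(p), 2)])
D = np.column_stack([np.ones(n), X, inter])

# Normal equations (D'D) beta = D'y solved via LU decomposition
beta = lu_solve(lu_factor(D.T @ D), D.T @ torque)
resid = torque - D @ beta
print(f"{D.shape[1]} coefficients, residual SD = {resid.std():.4f}")
```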

Keywords: design of experiments, regression analysis, SI engine, statistical modeling

Procedia PDF Downloads 159
16456 A Comparison of Outcomes of Endoscopic Retrograde Cholangiopancreatography vs. Percutaneous Transhepatic Biliary Drainage in the Management of Obstructive Jaundice from Hepatobiliary Tuberculosis: The Philippine General Hospital Experience

Authors: Margaret Elaine J. Villamayor, Lobert A. Padua, Neil S. Bacaltos, Virgilio P. Bañez

Abstract:

Significance: This study aimed to determine the prevalence of Hepatobiliary Tuberculosis (HBTB) with biliary obstruction and to compare the outcomes of ERCP versus PTBD in these patients. Methodology: This is a cross-sectional study involving patients from PGH who underwent biliary drainage for HBTB from January 2009 to June 2014. HBTB was defined as having evidence of TB (culture, smear, PCR, histology) or a clinical diagnosis with the triad of jaundice, fever, and calcifications on imaging, with other causes of jaundice excluded. The primary outcome was successful drainage; secondary outcomes were mean hospital stay and complications. Simple logistic regression was used to identify factors associated with success of drainage, the z-test for two proportions to compare outcomes of ERCP versus PTBD, and the t-test to compare mean hospital stay post-procedure. Results: A total of 441 patients underwent ERCP or PTBD; 19 fulfilled the inclusion criteria. 11 underwent ERCP while 8 had PTBD. There were more successful cases with PTBD than with ERCP, but the difference was not statistically significant (p-value 0.3615). Factors such as age, gender, location and nature of obstruction, vices, coexisting pulmonary or other extrapulmonary TB, and presence of portal hypertension did not affect success rates in these patients. The PTBD group had a longer mean hospital stay, but this was not significant (p-value 0.1880). There were no complications reported in either group. Conclusion: HBTB accounts for 4.3% of the patients undergoing biliary drainage in PGH. Both ERCP and PTBD are equally safe and effective in the management of biliary obstruction from HBTB.

Keywords: cross-sectional, hepatobiliary tuberculosis, obstructive jaundice, endoscopic retrograde cholangiopancreatography, percutaneous transhepatic biliary drainage

Procedia PDF Downloads 421
16455 An Epsilon Hierarchical Fuzzy Twin Support Vector Regression

Authors: Arindam Chaudhuri

Abstract:

The research presents epsilon-hierarchical fuzzy twin support vector regression (epsilon-HFTSVR) based on epsilon-fuzzy twin support vector regression (epsilon-FTSVR) and epsilon-twin support vector regression (epsilon-TSVR). Epsilon-FTSVR is achieved by incorporating trapezoidal fuzzy numbers into epsilon-TSVR, which takes care of the uncertainty existing in forecasting problems. Epsilon-FTSVR determines a pair of epsilon-insensitive proximal functions by solving two related quadratic programming problems. The structural risk minimization principle is implemented by introducing a regularization term in the primal problems of epsilon-FTSVR. This yields stable, positive-definite dual problems, which improves regression performance. Epsilon-FTSVR is then reformulated as epsilon-HFTSVR, consisting of a set of hierarchical layers each containing epsilon-FTSVR. Experimental results on both synthetic and real datasets reveal that epsilon-HFTSVR has remarkable generalization performance with minimum training time.

Keywords: regression, epsilon-TSVR, epsilon-FTSVR, epsilon-HFTSVR

Procedia PDF Downloads 339
16454 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data

Authors: Wanhyun Cho, Soonja Kang, Sanggoon Kim, Soonyoung Park

Abstract:

We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. Here, we have considered an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigated the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and next we derived the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.

Keywords: multinomial dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence

Procedia PDF Downloads 412
16453 Evolved Bat Algorithm Based Adaptive Fuzzy Sliding Mode Control with LMI Criterion

Authors: P.-W. Tsai, C.-Y. Chen, C.-W. Chen

Abstract:

In this paper, the stability analysis of an EBA-based adaptive fuzzy sliding mode controller for a nonlinear system is discussed. First, a nonlinear plant is well-approximated and described with a reference model and a fuzzy model, both involving FLC rules. Then, the FLC rules and the consequent parameters are decided via an Evolved Bat Algorithm (EBA). After this, we guarantee a new tracking performance inequality for the control system. The tracking problem is characterized as solving an eigenvalue problem (EVP). Next, an adaptive fuzzy sliding mode controller (AFSMC) is proposed to stabilize the system so as to achieve good control performance. Lyapunov's direct method can be used to ensure the stability of the nonlinear system. It is shown that the stability analysis can reduce nonlinear systems to a linear matrix inequality (LMI) problem. Finally, a numerical simulation is provided to demonstrate the control methodology.
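
The kind of LMI feasibility problem such a stability analysis reduces to can be illustrated with CVXPY; the single Lyapunov inequality below is a simplified stand-in, since the paper's LMIs involve the fuzzy-model structure, which is omitted here:

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                  # a stable test matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility
prob.solve()

print(prob.status)                            # 'optimal' => LMI feasible
print(np.linalg.eigvals(P.value))             # all eigenvalues positive
```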

Keywords: adaptive fuzzy sliding mode control, Lyapunov direct method, swarm intelligence, evolved bat algorithm

Procedia PDF Downloads 417
16452 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis

Authors: Jui-Teng Liao

Abstract:

The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., the more test items), the higher the reliability and validity the test is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which such a question format can reliably measure learners' L2 reading comprehension. The present study therefore adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English-as-a-second-language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format. The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of the G theory analyses, the findings indicated that the number of test items has a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions require varying numbers of test items for reliable assessment of learners' L2 reading proficiency. Implications for teaching practice and classroom-based assessment are discussed.
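
The "how many items" question is, in G-theory terms, a decision study; a hedged sketch for a simple persons-by-items design with simulated scores (the actual study used multivariate designs in mGENOVA, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
n_p, n_i = 108, 20                       # persons x items (simulated)
person = rng.normal(0, 0.8, (n_p, 1))    # person ability effects
item = rng.normal(0, 0.4, (1, n_i))      # item difficulty effects
scores = person + item + rng.normal(0, 1.0, (n_p, n_i))  # + residual

# Mean squares for the crossed persons-x-items design
gm = scores.mean()
ms_p = n_i * np.sum((scores.mean(axis=1) - gm) ** 2) / (n_p - 1)
ss_res = np.sum((scores - scores.mean(1, keepdims=True)
                 - scores.mean(0, keepdims=True) + gm) ** 2)
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_res) / n_i            # person variance component
var_res = ms_res                         # interaction + error component

# Decision study: generalizability coefficient for alternative item counts
for n_items in (5, 10, 20, 40):
    g = var_p / (var_p + var_res / n_items)
    print(f"{n_items:>3} items -> G = {g:.3f}")
```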

Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format

Procedia PDF Downloads 50
16451 An Adaptive Distributed Incremental Association Rule Mining System

Authors: Adewale O. Ogunde, Olusegun Folorunso, Adesina S. Sodiya

Abstract:

Most existing Distributed Association Rule Mining (DARM) systems still face several challenges. One such challenge that has not received the attention of many researchers is the inability of existing systems to adapt to constantly changing databases and mining environments. In this work, an Adaptive Incremental Mining Algorithm (AIMA) is therefore proposed to address these problems. AIMA employs multiple mobile agents for the entire mining process. AIMA was designed to adapt to changes in the distributed databases by mining only the incremental database updates and using these to update the existing rules, in order to improve the overall response time of the DARM system. In AIMA, global association rules are integrated incrementally from one data site to another through Results Integration Coordinating Agents. The mining agents in AIMA were made adaptive by defining mining goals with reasoning and behavioral capabilities and protocols that enabled them to either maintain or change their goals. AIMA employed the Java Agent Development Environment Extension for designing the internal agent architecture. Results from experiments conducted on real datasets showed that the adaptive system AIMA performed better than the non-adaptive systems, with lower communication costs and higher task completion rates.

Keywords: adaptivity, data mining, distributed association rule mining, incremental mining, mobile agents

Procedia PDF Downloads 370
16450 Comparative Analysis of Islamic Bank in Indonesia and Malaysia with Risk Profile, Good Corporate Governance, Earnings, and Capital Method: Performance of Business Function and Social Function Perspective

Authors: Achsania Hendratmi, Nisful Laila, Fatin Fadhilah Hasib, Puji Sucia Sukmaningrum

Abstract:

This study aims to compare Islamic banks in Indonesia with Islamic banks in Malaysia and identify the differences between them using the RGEC method (Risk Profile, Good Corporate Governance, Earnings, and Capital). The study examines the comparison in business and social performance of eleven Islamic banks in Indonesia and fifteen Islamic banks in Malaysia. This research used a quantitative approach, and data were collected from the annual reports of the sampled banks over the period 2011-2015. The results of the independent-samples t-test and Mann-Whitney test showed that there were differences in the business performance of Islamic banks in Indonesia and Malaysia as seen from the aspects of risk profile (FDR), GCG, and earnings (ROA). There were also differences in business and social performance as seen from the earnings (ROE), capital (CAR), and sharia conformity indicator (PSR and ZR) aspects.

Keywords: business performance, Islamic banks, RGEC, social performance

Procedia PDF Downloads 268