Search results for: psychometric validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1409

1169 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

In patient treatment planning quality assurance for 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually using the standard ESTRO formalism. However, due to the complex field shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc in 3D-CRT and RapidArc for treatment planning dosimetry quality assurance at the Antoine Lacassagne center (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPS), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs to RadCalc and to the linac via Mosaiq (version 2.5) using the DICOM RT protocol. Measurements were performed in a water phantom using a PTW cylindrical semiflex ionisation chamber (0.3 cm³, 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created with patients' CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the CT scan of the Octavius II phantom (PTW) for RapidArc. Next, we measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs. Moreover, RadCalc calculations were compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ± 1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use for both the 3D-CRT and RapidArc techniques. A perspective of this project is the validation of RadCalc for the Tomotherapy machine installed at the centre Antoine Lacassagne.
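As an illustration of the agreement metric quoted above (mean percent deviation with its standard deviation), the comparison between independently calculated and measured point doses can be sketched as follows; the dose values are invented for illustration, not the study's measurements.

```python
# Hypothetical point doses (Gy): measured in phantom vs. independently
# recalculated (e.g., by an MUVC program). Values are illustrative only.
measured = [2.00, 1.95, 2.10, 1.88]
calculated = [2.03, 1.93, 2.12, 1.90]

# Absolute percent deviation of each calculation from its measurement
devs = [abs(c - m) / m * 100 for m, c in zip(measured, calculated)]
mean_dev = sum(devs) / len(devs)
std_dev = (sum((d - mean_dev) ** 2 for d in devs) / (len(devs) - 1)) ** 0.5
print(f"{mean_dev:.1f}% ± {std_dev:.1f}%")
```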

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 187
1168 The Study of Power as a Pertinent Motive among Tribal College Students of Assam

Authors: K. P. Gogoi

Abstract:

The current study investigates the motivational pattern, viz. power motivation, among the tribal college students of Assam. The sample consisted of 240 college students (120 tribal and 120 non-tribal) ranging from 18 to 24 years, with 60 males and 60 females in both the tribal and non-tribal groups. Attempts were made to include all the prominent tribes of Assam. The Thematic Apperception Test (TAT), the Power Motive Scale, and a semi-structured interview schedule were used to gather information about family types, parental deprivation, parental relations, and social and political belongingness. Mean, standard deviation, and t-test were the statistical measures adopted in this 2x2 factorial design study. In addition, discriminant analysis was worked out to strengthen the predictive validity of the obtained data. TAT scores reveal a significant difference between tribals and non-tribals on power motivation. However, results on gender differences indicate similar scores in both cultures. Cross-validation of the TAT results was done using the Power Motive Scale by T. S. Dapola, which confirms the TAT results on the need for power. Power motivation was studied in three directions, i.e., coercion, inducement, and restraint. An interesting finding is that on coercion, tribals scored high, showing a significant difference, whereas on inducement (or seduction) non-tribals scored high, again with a significant difference. On restraint, no difference exists between the two cultures. Discriminant analysis was worked out between the variables n-power, coercion, inducement, and restraint. Results indicated that inducement or seduction (.502) is the dependent measure with the most discriminating power between these two cultures.

Keywords: power motivation, tribal, social, political, predictive validity, cross validation, coercion, inducement, restraint

Procedia PDF Downloads 459
1167 The Development of Liquid Chromatography Tandem Mass Spectrometry Method for Citrinin Determination in Dry-Fermented Meat Products

Authors: Ana Vulic, Tina Lesic, Nina Kudumija, Maja Kis, Manuela Zadravec, Nada Vahcic, Tomaz Polak, Jelka Pleadin

Abstract:

Mycotoxins are toxic secondary metabolites produced by numerous types of molds. They can contaminate both food and feed, so they represent a serious public health concern. The production of dry-fermented meat products involves ripening, during which molds can overgrow the product surface, produce mycotoxins, and consequently contaminate the final product. Citrinin is a mycotoxin produced mainly by Penicillium citrinum. Data on citrinin occurrence in both food and feed are limited. Therefore, there is a need for research on citrinin occurrence in these types of meat products. An LC-MS/MS method for citrinin determination was developed and validated. Sample preparation was performed using immunoaffinity columns, which resulted in clean sample extracts. Method validation included the determination of the limit of detection (LOD), the limit of quantification (LOQ), recovery, linearity, and matrix effect in accordance with the latest validation guidance. The determined LOD and LOQ were 0.60 µg/kg and 1.98 µg/kg, respectively, showing good method sensitivity. The method was tested for linearity in the calibration range of 1 µg/L to 10 µg/L. The recovery was 100.9%, while the matrix effect was 0.7%. This method was employed in the analysis of 47 samples of dry-fermented sausages collected from local households. Citrinin was not detected in any of these samples, probably because of the short ripening period of the tested sausages, which lasts three months at most. The developed method will be used to test other types of traditional dry-cured products, such as prosciuttos, whose surfaces are usually more heavily overgrown by molds due to the longer ripening period.
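For context, LOD and LOQ figures of the kind reported above are commonly estimated from the calibration curve as 3.3σ/S and 10σ/S (the ICH convention), where σ is the residual standard deviation of the regression and S its slope. A minimal sketch; the σ and slope values below are hypothetical, not the study's data.

```python
# ICH-style LOD/LOQ estimation from calibration-curve statistics.
# sigma and slope are hypothetical stand-ins, not the paper's values.
sigma = 0.018   # residual standard deviation of the regression (response units)
slope = 0.099   # calibration slope (response per µg/kg)

lod = 3.3 * sigma / slope   # limit of detection, µg/kg
loq = 10 * sigma / slope    # limit of quantification, µg/kg
print(f"LOD = {lod:.2f} µg/kg, LOQ = {loq:.2f} µg/kg")
```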

Keywords: citrinin, dry-fermented meat products, LC-MS/MS, mycotoxins

Procedia PDF Downloads 90
1166 Predicting High-Risk Endometrioid Endometrial Carcinomas Using Protein Markers

Authors: Yuexin Liu, Gordon B. Mills, Russell R. Broaddus, John N. Weinstein

Abstract:

The lethality of endometrioid endometrial cancer (EEC) is primarily attributable to high-stage disease. However, there are no available biomarkers that predict EEC patient staging at the time of diagnosis. We aim to develop a predictive scheme to help in this regard. Using reverse-phase protein array expression profiles for 210 EEC cases from The Cancer Genome Atlas (TCGA), we constructed a Protein Scoring of EEC Staging (PSES) scheme for surgical stage prediction. We validated and evaluated its diagnostic potential in an independent cohort of 184 EEC cases obtained at MD Anderson Cancer Center (MDACC) using receiver operating characteristic (ROC) curve analyses. Kaplan-Meier survival analysis was used to examine the association of the PSES score with patient outcome, and Ingenuity pathway analysis was used to identify relevant signaling pathways. Two-sided statistical tests were used. PSES robustly distinguished high- from low-stage tumors in the TCGA cohort (area under the ROC curve [AUC] = 0.74; 95% confidence interval [CI], 0.68 to 0.82) and in the validation cohort (AUC = 0.67; 95% CI, 0.58 to 0.76). Even among grade 1 or 2 tumors, PSES was significantly higher in high- than in low-stage tumors in both the TCGA (P = 0.005) and MDACC (P = 0.006) cohorts. Patients with a positive PSES score had significantly shorter progression-free survival than those with a negative PSES in the TCGA (hazard ratio [HR], 2.033; 95% CI, 1.031 to 3.809; P = 0.04) and validation (HR, 3.306; 95% CI, 1.836 to 9.436; P = 0.0007) cohorts. The ErbB signaling pathway was the most significantly enriched in the PSES proteins and downregulated in high-stage tumors. PSES may provide clinically useful prediction of high-risk tumors and offer new insights into tumor biology in EEC.
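The AUC figures reported above have a simple probabilistic reading: the chance that a randomly chosen high-stage case scores above a randomly chosen low-stage case (ties counted half). A minimal sketch of that computation; the scores are invented, not PSES values.

```python
# AUC as the probability that a positive outscores a negative (ties = 0.5).
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

high_stage = [0.9, 0.8, 0.7]    # hypothetical scores, high-stage tumors
low_stage = [0.6, 0.75, 0.2]    # hypothetical scores, low-stage tumors
print(auc(high_stage, low_stage))
```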

Keywords: endometrial carcinoma, protein, protein scoring of EEC staging (PSES), stage

Procedia PDF Downloads 195
1165 Validation of an Acuity Measurement Tool for Maternity Services

Authors: Cherrie Lowe

Abstract:

The TrendCare Patient Dependency System is currently utilised by a large number of maternity services across Australia, New Zealand, and Singapore. In 2012, 2013, and 2014, validation studies were initiated in all three countries to validate the acuity tools used for women in labour and for postnatal mothers and babies. This paper presents the findings of the validation study. Aim: The aims of this study were to (1) identify whether the care hours provided by the TrendCare acuity system were an accurate reflection of the care required by women and babies, and (2) obtain evidence of changes required to acuity indicators and/or category timings to ensure the TrendCare acuity system remains reliable and valid across a range of maternity care models in three countries. Method: A non-experimental action research methodology was used across four District Health Boards in New Zealand, two large public Australian maternity services, and a large tertiary maternity service in Singapore. Standardised data collection forms and timing devices were used to collect midwife contact times with the women and babies included in the study. Rejection processes excluded samples where care was not completed or was rationed. The variances between actual timed midwife/mother/baby contact and the TrendCare acuity times were identified and investigated. Results: 87.5% (18) of the TrendCare acuity category timings matched the actual timings recorded for midwifery care; 12.5% (3) of the TrendCare night duty categories provided fewer minutes of care than the actual timings. 100% of labour ward TrendCare categories matched the actual timings for midwifery care. The actual times recorded for assistance given to New Zealand independent midwives in the labour ward showed a significant deviation from previous studies, demonstrating the need for additional time allocations in TrendCare. Conclusion: The results demonstrated the importance of regularly validating the TrendCare category timings against the care hours required, as variances in models of care and length of stay in maternity units have increased midwifery workloads on the night shift. The level of assistance provided by the core labour ward staff to the independent midwife has increased substantially. Outcomes: As a consequence of this study, changes were made to the night duty TrendCare maternity categories, additional acuity indicators were developed, and times for assisting independent midwives were increased. The updated TrendCare version was delivered to maternity services in 2014.

Keywords: maternity, acuity, research, nursing workloads

Procedia PDF Downloads 342
1164 Utility of Geospatial Techniques in Delineating Groundwater-Dependent Ecosystems in Arid Environments

Authors: Mangana B. Rampheri, Timothy Dube, Farai Dondofema, Tatenda Dalu

Abstract:

Identifying and delineating groundwater-dependent ecosystems (GDEs) is critical to a proper understanding of their spatial distribution as well as groundwater allocation. However, this information is inadequately understood due to the limited data available for most areas of concern. Thus, this study aims to address this gap using remotely sensed data, the analytical hierarchy process (AHP), and in-situ data to identify and delineate GDEs in the Khakea-Bray Transboundary Aquifer. Our study developed a GDE index that integrates seven explanatory variables, namely, the Normalized Difference Vegetation Index (NDVI), the Modified Normalized Difference Water Index (MNDWI), land use and land cover (LULC), slope, the Topographic Wetness Index (TWI), flow accumulation, and curvature. The GDE map was delineated using the weighted overlay tool in the ArcGIS environment and spatially classified into two classes, namely, GDEs and non-GDEs. The results showed that only 1.34% (721.91 km²) of the area is characterised by GDEs. Finally, groundwater level (GWL) data were used for validation through correlation analysis. Our results indicated that: 1) GDEs are concentrated in the northern, central, and south-western parts of the study area, and 2) the validation showed that the GDE classes do not overlap with the GWLs measured in the 22 boreholes found in the area. Nevertheless, the results show a possible delineation of GDEs in the study area using remote sensing and GIS techniques along with AHP. The results of this study further contribute to identifying and delineating priority areas where appropriate water conservation programs, as well as strategies for sustainable groundwater development, can be implemented.
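The weighted-overlay step described above combines reclassified explanatory layers with AHP-derived weights into a single index per cell. A one-cell sketch; the layer scores, weights, and cut-off are invented, not the study's values.

```python
# Weighted overlay of reclassified layers with AHP weights (one cell).
layers = {                      # reclassified cell value per layer (1-5 scale)
    "NDVI": 4, "MNDWI": 3, "LULC": 2, "slope": 5,
    "TWI": 4, "flow_acc": 3, "curvature": 2,
}
weights = {                     # hypothetical AHP weights; must sum to 1
    "NDVI": 0.25, "MNDWI": 0.20, "LULC": 0.10, "slope": 0.15,
    "TWI": 0.15, "flow_acc": 0.10, "curvature": 0.05,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

gde_index = sum(layers[k] * weights[k] for k in layers)
label = "GDE" if gde_index >= 3.0 else "Non-GDE"   # hypothetical cut-off
print(round(gde_index, 2), label)
```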

Keywords: analytical hierarchy process (AHP), explanatory variables, groundwater-dependent ecosystems (GDEs), Khakea-Bray transboundary aquifer, Sentinel-2

Procedia PDF Downloads 77
1163 The Effectiveness of Water Indices in Detecting Soil Moisture as an Indicator of Mudflow in Arid Regions

Authors: Zahraa Al Ali, Ammar Abulibdeh, Talal Al-Awadhi, Midhun Mohan, Mohammed Al-Barwani, Sara Al Nabbi, Meshal Abdullah

Abstract:

This study aims to evaluate the performance and effectiveness of six spectral water indices, derived from multispectral Sentinel-2 data, for detecting soil moisture and inundated areas in arid regions, to be used as an indicator of the mudflow phenomenon and to predict high-risk areas. Herein, the validation of the performance of the spectral indices was conducted using the threshold method, spectral curve performance, and the soil-line method. These indirect validation techniques play a key role in saving time, effort, and cost, particularly for large-scale and inaccessible areas. It was observed that the Normalized Difference Water Index (NDWI), the Modified Normalized Difference Water Index (mNDWI), and the RSWIR index have the potential to detect soil moisture and inundated areas in arid regions. According to the temporal spectral curve performance, the spectral characteristics of water and soil moisture were distinct in the near-infrared (NIR) and short-wave infrared (SWIR1, SWIR2) bands. However, the rate and degree differed between these bands, depending on the amount of water in the soil. Furthermore, the soil-line method supported the appropriate selection of threshold values to detect soil moisture; however, the threshold values varied with location, time, season, and between indices. We conclude that considering the factors influencing the behaviour of water and soil reflectivity could support decision-makers in identifying high-risk mudflow locations.
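Two of the indices evaluated above have simple closed forms from band reflectances: NDWI contrasts green and NIR, while mNDWI substitutes SWIR for NIR. A sketch with invented reflectance values:

```python
# Per-pixel water indices from Sentinel-2 band reflectances.
# The reflectances below are invented for illustration.
green, nir, swir1 = 0.12, 0.30, 0.25   # hypothetical dry-soil pixel

ndwi = (green - nir) / (green + nir)        # McFeeters NDWI
mndwi = (green - swir1) / (green + swir1)   # modified NDWI (Xu)
print(round(ndwi, 3), round(mndwi, 3))
```

Negative values like these indicate a non-water pixel; thresholding such indices per scene is the delineation step the abstract validates.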

Keywords: spectral reflectance curve, soil-line method, spectral indices, Shaheen cyclone

Procedia PDF Downloads 38
1162 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test

Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati

Abstract:

Rifampicin is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei. Rifampicin (RIF) has been used worldwide as a first-line drug prescribed throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma and its application to a bioequivalence study. The chromatographic separation was achieved on an RP-C18 column (LaChrom Hitachi, 250 x 4.6 mm, 5 μm), utilizing a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out spectrophotometrically at 337 nm. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision, and specificity. The specificity of the method was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The standard regression curve was linear (r > 0.999) over the concentration range of 20.0-100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed good reproducibility (R.S.D. 2.98% and 1.13%, respectively). Therefore, the method can be used for routine analysis of rifampicin in human plasma and in bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of rifampicin tablets in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24), and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 hours, 29.16 ± 4.05 µg/mL·h, and 29.41 ± 4.07 µg/mL·h, respectively. For the reference formulation, the values were 5.04 ± 0.54 µg/mL, 1.31 hours, 27.20 ± 3.98 µg/mL·h, and 27.49 ± 4.01 µg/mL·h. From the bioequivalence study, the 90% CIs for the test/reference formulation ratio of the logarithmic transformations of Cmax and AUC(0-24) were 97.96-129.48% and 99.13-120.02%, respectively. According to the bioequivalence test guidelines of the European Commission-European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
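The bioequivalence decision above rests on the 90% CI of the test/reference geometric-mean ratio, computed on the log scale and checked against the conventional 80-125% acceptance range. A minimal sketch; the ratio and CI half-width below are hypothetical, not this study's statistics.

```python
# Two-one-sided-tests style bioequivalence check on the log scale.
import math

log_ratio = math.log(1.08)      # hypothetical mean log(test/reference)
half_width = math.log(1.10)     # hypothetical t * SE of the log ratio

lower = math.exp(log_ratio - half_width) * 100   # 90% CI bounds as percentages
upper = math.exp(log_ratio + half_width) * 100
bioequivalent = 80.0 <= lower and upper <= 125.0
print(round(lower, 1), round(upper, 1), bioequivalent)
```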

Keywords: validation, HPLC, plasma, bioequivalence

Procedia PDF Downloads 269
1161 Neural Network-Based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, learning disabilities that impact reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnosis and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using GridSearchCV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results, with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
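The grid-search tuning step described above reduces to scoring every combination in a parameter grid and keeping the best. A dependency-free stand-in for that selection logic; the grid values and the scoring function are invented (the study used scikit-learn's GridSearchCV with cross-validated MLP accuracy):

```python
# Exhaustive grid search: score every parameter combination, keep the best.
from itertools import product

grid = {"hidden_units": [32, 64], "learning_rate": [0.01, 0.001]}

def cv_score(params):
    # Stand-in for a cross-validated accuracy; purely illustrative.
    return 0.9 + 0.001 * params["hidden_units"] - params["learning_rate"]

best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=cv_score,
)
print(best)
```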

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 31
1160 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, limiting the coverage of risk detection and making the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project developed a hybrid model that utilizes various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using GridSearchCV, which allowed the optimal values for the model to be identified. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 54
1159 The Prognostic Prediction Value of Positive Lymph Nodes Numbers for the Hypopharyngeal Squamous Cell Carcinoma

Authors: Wendu Pang, Yaxin Luo, Junhong Li, Yu Zhao, Danni Cheng, Yufang Rao, Minzi Mao, Ke Qiu, Yijun Dong, Fei Chen, Jun Liu, Jian Zou, Haiyang Wang, Wei Xu, Jianjun Ren

Abstract:

We aimed to compare the prognostic prediction value of the positive lymph node number (PLNN) with that of the American Joint Committee on Cancer (AJCC) tumor, lymph node, and metastasis (TNM) staging system for patients with hypopharyngeal squamous cell carcinoma (HPSCC). A total of 826 patients with HPSCC from the Surveillance, Epidemiology, and End Results database (2004-2015) were identified and split into two independent cohorts: training (n=461) and validation (n=365). Univariate and multivariate Cox regression analyses were used to evaluate the prognostic effects of PLNN in patients with HPSCC. We further applied six Cox regression models to compare the survival predictive values of PLNN and the AJCC TNM staging system. PLNN showed a significant association with overall survival (OS) and cancer-specific survival (CSS) (P < 0.001) in both univariate and multivariable analyses and was divided into three groups (PLNN 0, PLNN 1-5, and PLNN > 5). In the training cohort, multivariate analysis revealed that an increased PLNN in HPSCC gave rise to significantly poorer OS and CSS after adjusting for age, sex, tumor size, and cancer stage; this trend was also verified in the validation cohort. Additionally, the survival model incorporating a composite of PLNN and TNM classification (C-index: 0.705, 0.734) performed better than the PLNN and AJCC TNM models alone. PLNN can serve as a powerful survival predictor for patients with HPSCC and is a surrogate supplement for cancer staging systems.

Keywords: hypopharyngeal squamous cell carcinoma, positive lymph nodes number, prognosis, prediction models, survival predictive values

Procedia PDF Downloads 111
1158 Analytical Method for Seismic Analysis of Shaft-Tunnel Junction under Longitudinal Excitations

Authors: Jinghua Zhang

Abstract:

A shaft-tunnel junction is a typical case of structural nonuniformity in underground structures. The shaft and the tunnel possess greatly different structural features; even under uniform excitations, they tend to behave discrepantly. Studies on shaft-tunnel junctions are mainly performed numerically, and shaking table tests are also conducted. Although much numerical and experimental data has been obtained, an analytical solution still has great merit in offering more insight into the shaft-tunnel problem. This paper tries to remedy that situation. Since the seismic responses of shaft-tunnel junctions depend strongly on the direction of the excitation, they are studied in two scenarios: the longitudinal-excitation scenario and the transverse-excitation scenario. The former scenario is addressed in this paper. Given that the responses of the tunnel are highly dependent on the shaft, the analytical solution is developed first for the vertical shaft; the seismic responses of the tunnel are then discussed. Since vertical shafts bear a resemblance to rigid caissons, the solution proposed in this paper is derived by introducing terms for shaft-tunnel and soil-tunnel interactions into equations originally developed for rigid caissons. The validity of the solution is examined against a validation model computed by the finite element method. The mutual influence between the shaft and the tunnel is introduced, and the soil-structure interactions are discussed parametrically based on the proposed equations. The shaft-tunnel relative displacement and the soil-tunnel relative stiffness are found to be the most important parameters affecting the magnitudes and distributions of the internal forces of the tunnel. A hinged joint at the shaft-tunnel junction could significantly reduce the degree of stress concentration compared with a rigid joint.

Keywords: analytical solution, longitudinal excitation, numerical validation, shaft-tunnel junction

Procedia PDF Downloads 126
1157 An Assessment of Finite Element Computations in the Structural Analysis of Diverse Coronary Stent Types: Identifying Prerequisites for Advancement

Authors: Amir Reza Heydari, Yaser Jenab

Abstract:

Coronary artery disease, a common cardiovascular disease, is attributed to the accumulation of cholesterol-based plaques in the coronary arteries, leading to atherosclerosis. The disease is associated with risk factors such as smoking, hypertension, diabetes, and elevated cholesterol levels, and contributes to severe clinical consequences, including acute coronary syndromes and myocardial infarction. Treatment approaches range from lifestyle interventions to surgical procedures such as percutaneous coronary intervention and coronary artery bypass surgery. These interventions often employ stents, including bare-metal stents (BMS), drug-eluting stents (DES), and bioresorbable vascular scaffolds (BVS), each with its advantages and limitations. Computational tools have emerged as critical in optimizing stent designs and assessing their performance. The aim of this study is to provide an overview of the computational methods used in finite element (FE) studies of coronary stenting and to discuss the potential for development and clinical application of stent devices. Additionally, the importance of assessing the ability of computational models to represent real-world phenomena is emphasized, supported by recent guidelines from the American Society of Mechanical Engineers (ASME). Proposed validation processes include comparing model performance with in vivo, ex vivo, or in vitro data, alongside uncertainty quantification and sensitivity analysis. These methods can enhance the credibility and reliability of in silico simulations, ultimately aiding in the assessment of coronary stent designs in various clinical contexts.

Keywords: atherosclerosis, materials, restenosis, review, validation

Procedia PDF Downloads 48
1156 Development of a Scale for Evaluating the Efficacy of Vacationing

Authors: Ju Yeon Lee, Seol Ah Oh, Hong il Kim, Hae Yong Do, Sung Won Choi

Abstract:

The purpose of this study was to develop a Well-being and Moments Scale (WAMS) for evaluating the efficacy of 'vacationing' as a form of mental health recuperation. 'Vacationing' is defined as going outside one's usual environment to seek refreshment and relief from one's daily life. To develop the WAMS, we followed recommended procedures for scale development, including reviewing related studies, conducting focus group interviews to elucidate the need for this assessment area, and modifying items based on expert opinion. Through this process, we developed the WAMS. The psychometric properties of the WAMS were then tested in two separate samples. Exploratory factor analysis (EFA) was conducted using 141 participants (mean age = 30.45 years; range: 20-50 years) to identify the underlying 3-factor structure of 'Positive Emotions', 'Life Satisfaction', and 'Self-Confidence'. The 26 items retained based on the EFA procedures showed excellent reliability (α = 0.93). Confirmatory factor analysis was then conducted using 200 different participants (mean age = 29.51 years; range: 20-50 years) and revealed good model fit for our hypothesized 3-factor model. Convergent validity tests also revealed correlations with other scales in the expected direction and range. Study limitations as well as the importance and utility of the WAMS are also discussed.
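The α = 0.93 reliability reported above is Cronbach's alpha, which can be computed directly from item-level scores. A sketch with toy data (the scores below are invented, not WAMS responses):

```python
# Cronbach's alpha: internal-consistency reliability from item-level scores.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # items: one inner list of respondent scores per scale item
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

items = [[3, 4, 5, 2], [2, 4, 5, 3], [3, 5, 4, 2]]    # 3 items, 4 respondents
print(round(cronbach_alpha(items), 3))
```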

Keywords: vacationing, positive affect, life satisfaction, self-confidence, WAMS

Procedia PDF Downloads 314
1155 Evaluating the Terrace Benefits of Erosion in a Terraced-Agricultural Watershed for Sustainable Soil and Water Conservation

Authors: Sitarrine Thongpussawal, Hui Shao, Clark Gantzer

Abstract:

Terracing is a conservation practice used widely throughout the world to reduce erosion for soil and water conservation, but it is relatively expensive. A modification of the Soil and Water Assessment Tool (called SWAT-Terrace or SWAT-T) explicitly aims to improve the simulation of the hydrological processes of erosion from terraces. SWAT-T simulates erosion from terraces by separating each terrace into three segments instead of evaluating the entire terrace. The objective of this work is to evaluate the terrace benefits on erosion in the Goodwater Creek Experimental Watershed (GCEW) at the watershed and Hydrologic Response Unit (HRU) scales using SWAT-T. The HRU is the smallest spatial unit of the model, which lumps all similar land uses, soils, and slopes within a sub-basin. The SWAT-T model was parameterized for slope length, steepness, and the empirical Universal Soil Loss Equation (USLE) support practice factor for the three terrace segments. Data measured at the watershed outlet from 1993-2010 were used for model calibration and validation. Results of SWAT-T calibration showed good performance between measured and simulated erosion at the monthly time step, but poor performance for SWAT-T validation. This is probably because of large storms in spring 2002 that prevented planting, causing poorly simulated scheduling of actual field operations. To estimate the terrace benefits on erosion, models with and without terraces were compared. Results showed that SWAT-T produced a significant ~3% reduction in erosion (P < 0.01) at the watershed scale and an ~12% reduction in erosion at the HRU scale. The SWAT-T model results indicated that terraces have advantages for reducing erosion from terraced agricultural watersheds. SWAT-T can be used in the evaluation of erosion to sustainably conserve soil and water.
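The terrace effect enters the model through the USLE support-practice factor P (alongside adjusted slope length): with terraces, P drops well below 1 and predicted soil loss scales down proportionally. A direct USLE sketch with invented factor values:

```python
# USLE soil-loss estimate A = R * K * LS * C * P; all values illustrative.
def usle(R, K, LS, C, P):
    return R * K * LS * C * P   # soil loss (e.g., t/ha/yr in metric usage)

no_terrace = usle(R=1200, K=0.032, LS=1.4, C=0.20, P=1.0)
terraced = usle(R=1200, K=0.032, LS=1.4, C=0.20, P=0.12)  # terraced P << 1
reduction = (no_terrace - terraced) / no_terrace * 100
print(round(reduction, 1))
```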

Keywords: erosion, modeling, terraces, SWAT

Procedia PDF Downloads 167
1154 Storage System Validation Study for Raw Cocoa Beans Using Minitab® 17 and R (R-3.3.1)

Authors: Anthony Oppong Kyekyeku, Sussana Antwi-Boasiako, Emmanuel De-Graft Johnson Owusu Ansah

Abstract:

In this observational study, the performance of a known conventional storage system was tested and evaluated for fitness for its intended purpose. The scope of the system extends to the storage of dry cocoa beans; its sensitivity, reproducibility, and uncertainties are not known in detail. This study discusses the system's performance in the context of existing literature on factors that influence the quality of cocoa beans during storage. Controlled conditions were defined precisely for the system to give a reliable baseline within specific established procedures. Minitab® 17 and R statistical software (R-3.3.1) were used for the statistical analyses. The approach to the storage system testing was to observe and compare, through laboratory test methods, the quality of the cocoa bean samples before and after storage. The samples were kept in Kilner jars, and the temperature of the storage environment was controlled and monitored over a period of 408 days. Standard test methods used in the international cocoa trade, such as the cut test analysis, moisture determination with an Aqua-Boy KAM III moisture meter, and bean count determination, were used for quality assessment. The data analysis treated the entire population as a sample in order to establish a reliable baseline for the data collected. The study found a statistically significant difference in mean values at a 95% confidence interval (CI) for the performance data analysed before and after storage for all variables observed. Correlation graphs showed a strong positive correlation for all variables investigated, with the exception of All Other Defects (AOD). The weak relationship between the before and after data for AOD had an explained variability of 51.8%, with the unexplained variability attributable to the uncontrolled condition of hidden infestation before storage. The study concluded that the storage system met a high-performance criterion.
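The "explained variability" quoted for AOD is the square of the before-after correlation coefficient (an r² of 0.518 corresponds to r ≈ 0.72). A minimal sketch of that calculation, using hypothetical defect counts rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical defect counts before and after storage
before = [4.0, 7.0, 5.0, 9.0, 6.0, 8.0]
after = [5.0, 8.0, 5.0, 10.0, 7.0, 8.0]
r = pearson_r(before, after)
print(f"r = {r:.3f}, explained variability = {100 * r * r:.1f}%")
```

The explained variability is simply 100·r², which is how a 51.8% figure maps back to the strength of the underlying correlation.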

Keywords: benchmarking performance data, cocoa beans, hidden infestation, storage system validation

Procedia PDF Downloads 142
1153 Applying Resilience Engineering to Improve Safety Management in a Construction Site: Design and Validation of a Questionnaire

Authors: M. C. Pardo-Ferreira, J. C. Rubio-Romero, M. Martínez-Rojas

Abstract:

Resilience Engineering is a new paradigm of safety management that proposes to change the way safety is managed: to focus on the things that go well instead of the things that go wrong. Many complex and high-risk sectors, such as air traffic control, health care, nuclear power plants, railways, and emergency services, have applied this new vision of safety and obtained very positive results. In the construction sector, safety management continues to be a problem, as indicated by the statistics of occupational injuries worldwide. It is therefore important to improve safety management in this sector, and for this reason it is proposed to apply Resilience Engineering to construction. The Construction Phase Health and Safety Plan emerges as a key element for the planning of safety management. One of the key tools of Resilience Engineering is the Resilience Assessment Grid, which allows measuring the four abilities essential for resilient performance (respond, monitor, learn, and anticipate). The purpose of this paper is to develop a questionnaire based on the Resilience Assessment Grid, specifically on the ability to learn, to assess whether a Construction Phase Health and Safety Plan helps companies on a construction site to implement this ability. The research process was divided into four stages: (i) initial design of a questionnaire, (ii) validation of the content of the questionnaire, (iii) redesign of the questionnaire, and (iv) application of the Delphi method. The resulting questionnaire could be used as a tool to help construction companies evolve from Safety-I to Safety-II. In this way, companies could begin to develop the ability to learn, which will serve as a basis for the development of the other abilities necessary for resilient performance. The next steps in this research are intended to develop further questions to evaluate the remaining abilities for resilient performance: responding, monitoring, and anticipating.

Keywords: resilience engineering, construction sector, resilience assessment grid, construction phase health and safety plan

Procedia PDF Downloads 110
1152 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and gradient descent with momentum and adaptive learning (GDM) algorithms were employed under two ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward backpropagation network. Results revealed that, in general, the GDM algorithm, with its adaptive learning capability, required a relatively shorter time in both the training and validation phases than the LM and Br algorithms, although learning was not always consummated; this held in all instances considered, including the prediction of extreme flow conditions for 1-day-ahead and 5-day-ahead forecasts. In specific statistical terms, average model performance measured by the coefficient of efficiency (CE) was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on these findings, the adoption of ANNs for real-time forecasting should employ training algorithms that avoid the computational overhead of LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure, quality of the forecast, and mitigation of network overfitting.
On the whole, it is recommended that evaluation should also consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
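The relative error statistics used to rank the algorithms follow directly from their definitions; a minimal sketch (the streamflow values are hypothetical, not the study's data):

```python
def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def mape(obs, sim):
    """Mean absolute percentage error (observed values must be non-zero)."""
    return 100.0 * sum(abs((o - s) / o) for o, s in zip(obs, sim)) / len(obs)

def msre(obs, sim):
    """Mean squared relative error."""
    return sum(((o - s) / o) ** 2 for o, s in zip(obs, sim)) / len(obs)

# hypothetical observed and forecast streamflow (m^3/s)
obs = [100.0, 120.0, 80.0, 150.0]
sim = [95.0, 125.0, 85.0, 140.0]
print(mae(obs, sim), mape(obs, sim), msre(obs, sim))
```

Because MAPE and MSRE normalise by the observed flow, they weight low-flow errors more heavily than MAE, which is why a ranking by these statistics can differ from one by CE.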

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 114
1151 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace sampling (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory in all release testing of active pharmaceutical ingredients (APIs). Typically, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation, and bad peak shape were observed for the acetic acid peaks, due to the reaction of acetic acid with the stationary phase of the column (cyanopropyl dimethyl polysiloxane) and the dissociation of acetic acid in water (when used as diluent) under the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues; by contrast, most published methods for acetic acid quantification by GC-HS use a derivatisation technique to protect the acetic acid. As per the compendia, a risk-based approach was selected to determine the degree and extent of the validation process needed to assure the fitness of the procedure, and the total error concept was therefore chosen to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using an Agilent DB-WAXetr column (internal diameter: 530 µm, film thickness: 2.0 µm, length: 30 m). Helium was used as the carrier gas at a constant flow of 6.0 mL/min in constant make-up mode.
The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits given in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.
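The acceptance logic of the accuracy limits can be sketched as a point-recovery check against the ±40% and ±30% limits; note this is only the simplest part of the total error approach (a full accuracy profile uses β-expectation tolerance intervals), and the spiked-level results below are hypothetical:

```python
def recovery_percent(measured, nominal):
    """Relative recovery (%) of a measured concentration against its nominal value."""
    return 100.0 * measured / nominal

def within_acceptance(measured, nominal, limit_pct):
    """True if the relative bias |recovery - 100| is inside the acceptance limit."""
    return abs(recovery_percent(measured, nominal) - 100.0) <= limit_pct

# hypothetical spiked-level results (ppm): (nominal, measured, acceptance limit %)
levels = [
    (50, 62, 40),     # quantitation limit level: +-40 %
    (200, 212, 30),   # higher levels: +-30 %
    (400, 371, 30),
]
for nominal, measured, limit in levels:
    ok = within_acceptance(measured, nominal, limit)
    print(f"{nominal} ppm: recovery {recovery_percent(measured, nominal):.1f} % "
          f"-> {'pass' if ok else 'fail'}")
```

The wider limit at the quantitation limit level reflects the larger uncertainty expected near the limit of quantitation.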

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 116
1150 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have spurred the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been shown to be an NP-hard problem, a complexity illustrated by Levinthal's paradox. An alternative is the prediction of intermediate structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANNs), and support vector machines (SVMs), among others, have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for this task. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. To achieve better results, prediction methods based on support vector machines have also been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method: the one proposed by Huang in 'Extracting Physicochemical Features to Predict Protein Secondary Structure' (2013).
The developed ANN method follows the same training and testing process that Huang used to validate his method, namely the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method.
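The Q3 measure cited above is per-residue accuracy over the three-state alphabet (H = helix, E = strand, C = coil). A minimal sketch with hypothetical prediction strings:

```python
def q3_accuracy(predicted, actual):
    """Q3: percentage of residues whose predicted state (H/E/C) matches the actual one."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)

# hypothetical three-state assignments for a 15-residue fragment
pred = "HHHHCCEEEECCHHH"
true = "HHHHCCEEECCCHHH"
print(f"Q3 = {q3_accuracy(pred, true):.1f}%")
```

Because Q3 is averaged over residues, a method can score well on the dominant coil class while misplacing helix/strand boundaries, which is why segment-based measures are often reported alongside it.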

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 581
1149 Performance Comparison and Visualization of COMSOL Multiphysics, Matlab, and Fortran for Predicting the Reservoir Pressure on Oil Production in a Multiple Leases Reservoir with Boundary Element Method

Authors: N. Alias, W. Z. W. Muhammad, M. N. M. Ibrahim, M. Mohamed, H. F. S. Saipol, U. N. Z. Ariffin, N. A. Zakaria, M. S. Z. Suardi

Abstract:

This paper presents a performance comparison of computational software for solving problems with the boundary element method (BEM). The BEM formulation is a numerical technique with high potential for solving advanced mathematical models that predict the production of oil wells in arbitrarily shaped, multiple-lease reservoirs. The limited validation data available for ensuring that a program meets the accuracy of the mathematical model is the research motivation of this paper. Based on this limitation, three steps are involved in validating the accuracy of the oil production simulation process. In the first step, the mathematical model, a partial differential equation (PDE) of Poisson (elliptic) type, is identified and the BEM discretization is performed. In the second step, the simulation of the 2D BEM discretization is implemented in COMSOL Multiphysics and the MATLAB programming language. In the last step, the numerical performance indicators for both implementations are analyzed against a validated Fortran implementation. The performance comparisons are investigated in terms of percentage error, comparison graphs, and 2D visualization of the pressure on oil production in a multiple-lease reservoir. According to the performance comparison, structured programming in Fortran is the software alternative for implementing an accurate numerical simulation of the BEM. In conclusion, for high-level numerical computation and numerical performance evaluation, Fortran is well suited for capturing the visualization of the production of oil wells in arbitrarily shaped reservoirs.

Keywords: performance comparison, 2D visualization, COMSOL Multiphysics, MATLAB, Fortran, modelling and simulation, boundary element method, reservoir pressure

Procedia PDF Downloads 462
1148 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decline of global fossil fuel deposits and the air pollution caused by these fuels have compounded the challenges the world faces in meeting its need for energy. Therefore, new forms of environmentally friendly and renewable fuels, such as biodiesel, are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods are reliable but demanding and costly, and they do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy was studied; the study was performed in an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. Quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module of the iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering, variance scaling, a spectrum square-root transform, and solvent subtraction. These pre-processing methods improved the performance indexes from 7.98 to 0.0096 (RMSEC), 11.2 to 3.41 (RMSECV), 6.32 to 2.72 (RMSEP), and 0.9416 to 0.9999 (cumulative R²). The R² values of 1 (training), 0.9918 (test), and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two were quite close at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and laboratory scale.
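Of the pre-processing steps named, mean centering and variance scaling (autoscaling) can be sketched directly; the absorbance matrix below is hypothetical, and the square-root and solvent-subtraction steps are instrument-specific and omitted:

```python
import math

def autoscale(spectra):
    """Mean-center each variable (wavenumber column) and scale it to unit variance."""
    n = len(spectra)            # number of spectra (rows)
    p = len(spectra[0])         # number of wavenumber variables (columns)
    means = [sum(row[j] for row in spectra) / n for j in range(p)]
    stds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in spectra) / (n - 1))
            for j in range(p)]
    return [[(row[j] - means[j]) / stds[j] for j in range(p)] for row in spectra]

# hypothetical absorbance values for 3 spectra at 2 wavenumbers
spectra = [[0.1, 0.5], [0.2, 0.7], [0.3, 0.6]]
scaled = autoscale(spectra)
print(scaled)
```

After autoscaling, every variable contributes on the same footing to the PLS decomposition, which is why it typically reduces calibration error when absorbance magnitudes differ widely across the spectrum.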

Keywords: biodiesel, calibration, chemometrics, methanolysis, multivariate analysis, transesterification, FTIR

Procedia PDF Downloads 124
1147 Social Networks in a Communication Strategy of a Large Company

Authors: Kherbache Mehdi

Abstract:

Within the framework of the validation of the Master in Business Administration (Marketing and Sales) at the INSIM international institute in management in Blida, we had the opportunity to do a professional internship at the Sonelgaz enterprise and to write a thesis. The thesis deals with the integration of social networking into the communication strategy of a company. The problem statement is: how can communicating through social networks be a solution for companies? The challenge addressed by this thesis was to suggest limits and recommendations to Sonelgaz concerning social networks. Together, social networks represent more than a billion people as a potential target for companies. Through research and a qualitative approach, we validated three hypotheses. The first hypothesis confirms that no company can ignore social networks in its communication strategy. The second demonstrates that it is necessary to prepare a strategy that integrates social networks into the communication plan of the company. The risk of this strategy is very limited: failure on social networks is not a serious setback for the enterprise, social networking is not expensive, and any resulting damage to the company's image is not important in the long term. However, the return on investment is difficult to evaluate. Finally, the last hypothesis shows that firms establish a new relationship between consumers and brands thanks to the proximity allowed by social networks. After validating the hypotheses, we suggested some recommendations to Sonelgaz regarding communication through social networks. Firstly, the company should use the interactivity of social networks in order to have fruitful exchanges with the community, and it should have a strategy for handling negative comments. The company should also deliver resources to the community through a community manager, in order to maintain a good relationship with the community.
Furthermore, we advised using social networks for business intelligence. Sonelgaz can offer creative and interactive content, for example through engaging applications on Facebook. Finally, we recommended that the company not be intrusive with its "fans" or "followers" and remain open to all platforms: Twitter, Facebook, and LinkedIn, for example.

Keywords: social network, buzz, communication, consumer, return on investment, internet users, web 2.0, Facebook, Twitter, interaction

Procedia PDF Downloads 380
1146 Estimating Water Balance at Beterou Watershed, Benin Using Soil and Water Assessment Tool (SWAT) Model

Authors: Ella Sèdé Maforikan

Abstract:

Sustainable water management requires quantitative information and knowledge of the spatiotemporal dynamics of the hydrological system within a basin, which can be obtained through research. Several studies have investigated both surface water and groundwater in the Beterou catchment; however, there are few published papers on the application of SWAT modeling there. The objective of this study was to evaluate the performance of SWAT in simulating the water balance within the watershed. The input data consist of a digital elevation model, land use maps, a soil map, climatic data, and discharge records. The model was calibrated and validated using the Sequential Uncertainty Fitting (SUFI-2) approach. Calibration covered 1989 to 2006, with a four-year warm-up period (1985-1988); validation covered 2007 to 2020. The goodness of fit of the model was assessed using five indices: Nash-Sutcliffe efficiency (NSE), the ratio of the root mean square error to the standard deviation of measured data (RSR), percent bias (PBIAS), the coefficient of determination (R²), and Kling-Gupta efficiency (KGE). Results showed that the SWAT model successfully simulated river flow in the Beterou catchment, with NSE = 0.79, R² = 0.80, and KGE = 0.83 for calibration, against NSE = 0.78, R² = 0.78, and KGE = 0.85 for validation, using site-based streamflow data. The relative error (PBIAS) ranged from -12.2% to 3.1%. The runoff curve number (CN2), moist bulk density (SOL_BD), baseflow alpha factor (ALPHA_BF), and available water capacity of the soil layer (SOL_AWC) were the most sensitive parameters. The study provides a basis for further research with uncertainty analysis, offers recommendations for model improvement, and supports an efficient means to improve rainfall and discharge measurement data.
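The efficiency indices used here have standard closed forms; a minimal sketch over hypothetical monthly flows (not the study's data):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def pbias(obs, sim):
    """Percent bias; positive values indicate model underestimation (SWAT convention)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation r, variability ratio alpha, bias ratio beta."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# hypothetical observed and simulated monthly discharge (m^3/s)
obs = [10.0, 12.0, 8.0, 15.0, 11.0]
sim = [9.5, 12.5, 8.5, 14.0, 11.5]
print(round(nse(obs, sim), 3), round(pbias(obs, sim), 1), round(kge(obs, sim), 3))
```

All three indices equal their ideal values (NSE = 1, PBIAS = 0, KGE = 1) for a perfect simulation, which makes them easy to sanity-check.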

Keywords: watershed, water balance, SWAT modeling, Beterou

Procedia PDF Downloads 28
1145 Effects of Changes in LULC on Hydrological Response in Upper Indus Basin

Authors: Ahmad Ammar, Umar Khan Khattak, Muhammad Majid

Abstract:

Empirically based lumped hydrologic models have an extensive track record of use in watershed management and flood-related studies. This study focuses on the impact of LULC change over a 10-year period on watershed discharge, using the lumped model HEC-HMS. The Indus-above-Tarbela region acts as a source of the main flood events in the middle and lower portions of the Indus because of the amount of rainfall and the topographic setting of the region, and the discharge pattern of the region is influenced by its LULC. In this study, Landsat TM images were used for the LULC analysis of the watershed, and TRMM satellite daily precipitation data were used as input rainfall. The input variables for model building in HEC-HMS were then calculated from the GIS data collected and pre-processed in HEC-GeoHMS. SCS-CN was used as the loss model, the SCS unit hydrograph method as the transform model, and Muskingum as the routing model. Discharge was simulated for the years 2000 and 2010: HEC-HMS was calibrated on 2000 and then validated on 2010. The performance of the model was assessed through the calibration and validation process, yielding R² = 0.92 in both. Relative bias was -9% for 2000 and -14% for 2010. The results show that over the 10 years, the impact of LULC change on discharge has been negligible in the study area overall. One reason is that the proportion of built-up area in the watershed, which is the main causative factor of change in discharge, is less than 1% of the total area. Locally, however, the impact of development was found to be significant in the built-up area of Mansehra city. An analysis of the Mansehra city sub-watershed, with an area of about 16 km² and more than 13% built-up area in 2010, showed that with a 40% increase in built-up area from 2000 to 2010, discharge values increased by about 33 percent, indicating the impact of LULC change on discharge.
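The SCS curve number (SCS-CN) method used in the model reduces to a closed-form rainfall-runoff relation, which makes the built-up-area effect easy to illustrate; the CN values below are illustrative, not calibrated values from the study:

```python
def scs_runoff(p_mm, cn):
    """Direct runoff depth (mm) from storm rainfall p_mm via the SCS curve number method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm), metric form
    ia = 0.2 * s               # initial abstraction (standard 20 % of S)
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed by the initial abstraction
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# a 50 mm storm over vegetated cover (CN ~ 70) versus built-up cover (CN ~ 85)
print(round(scs_runoff(50.0, 70), 2), round(scs_runoff(50.0, 85), 2))
```

Raising CN from 70 to 85 more than triples the runoff from the same storm, which is the mechanism behind the ~33% discharge increase seen in the urbanising Mansehra sub-watershed.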

Keywords: LULC change, HEC-HMS, Indus Above Tarbela, SCS-CN

Procedia PDF Downloads 476
1144 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game

Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin

Abstract:

Despite the frequently criticized disadvantages of traditional paper-and-pencil assessment, it remains the most frequently used method in our schools. Although such assessments provide an acceptable measurement, they are not capable of measuring all the aspects and the richness of learning and knowledge. Many assessments used in schools also decontextualize the assessment from the learning: they focus on a learner's standing on a particular topic but not on how student learning changes over time. For these reasons, many scholars advocate using simulations and games (S&G) as assessment tools with significant potential to overcome the problems of traditional methods. S&G can benefit from changes in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test students' learning at a particular time point. To investigate the potential of using educational games as assessment and teaching tools, this study presents the implementation and validation of an automated embedded assessment (AEA) that can constantly monitor student learning in the game and assess performance without interrupting learning. The experiment was conducted in an undergraduate-level engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this research study is to examine whether the proposed AEA method is valid for assessing student learning in a 3D educational game and to present the implementation steps. To address this question, the study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test whether the assessment measures the targeted latent constructs.
Finally, the scores of the assessment were compared with an external measure (a validated test of student learning in digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA < 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All of the observed variables loaded significantly on the latent factors in the latent factor model. In the second analysis, a multiple regression analysis was used to test whether the external measure significantly predicts students' performance in the game. The results of the regression indicated that the two predictors explained 36.3% of the variance (R² = .36, F(2,96) = 27.42, p < .000). Students' posttest scores significantly predicted game performance (β = .60, p < .000). These statistical results show that the AEA can distinctly measure three major components of the digital circuit design course. It is hoped that this study can help researchers understand how to design an AEA and showcase an implementation, providing an example methodology for validating this type of assessment.

Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design

Procedia PDF Downloads 397
1143 Prevalence of Cognitive Decline in Major Depressive Illness

Authors: U. B. Zubair, A. Kiyani

Abstract:

Introduction: Depressive illness predisposes individuals to many physical and mental health issues. Anxiety and substance use disorders have been studied widely as comorbidities, and biological symptoms are now considered part of the depressive spectrum. Cognitive abilities also decline or are affected, and need to be examined in detail in depressed patients. Objective: To determine the prevalence of cognitive decline among patients with major depressive illness and analyze the associated socio-demographic factors. Methods: 190 patients with major depressive illness were included in the study to determine the presence of cognitive decline. Depression was diagnosed by a consultant psychiatrist using the ICD-10 criteria for major depressive disorder. The British Columbia Cognitive Complaints Inventory (BC-CCI) was the psychometric tool used to determine cognitive decline. Socio-demographic profiles were recorded, and the relationship of various factors with cognitive decline was ascertained. Findings: 70% of the depressed patients included in this study showed some degree of cognitive decline, while 30% showed no evidence of cognitive decline when screened with the BC-CCI. Statistical testing revealed that female gender was the only socio-demographic parameter significantly linked with the presence of cognitive decline. Conclusion: A decline in cognitive abilities was found in a significant number of patients suffering from major depression in our sample. Screening for this parameter of mental function should be done in depression clinics to pick it up early.

Keywords: depression, cognitive decline, prevalence, socio-demographic factors

Procedia PDF Downloads 108
1142 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset

Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli

Abstract:

Sequence-to-sequence (seq2seq) models augmented with attention mechanisms play an increasingly important role in automated customer service. These models, which are able to recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are the neural attention weights that determine the focus of the model during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the domain of chatbots using the customer support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions (dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter) in neural generative seq2seq models. Utilizing the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores under both greedy and beam search strategies with a beam size of k = 3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search with k = 3). These results emphasize the crucial influence of the choice of attention-scoring function on the performance of seq2seq models for chatbots. In particular, the model that integrates tanh activation proves to be a promising approach to improving the quality of chatbots in the customer support context.
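The four scoring functions compared can be sketched for a single query/key pair; the exact placement of the tanh in the extended multiplicative score is an assumption here, since the study's parameterization is not given in the abstract:

```python
import math

def softmax(scores):
    """Turn raw attention scores into a weight distribution over the keys."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot_score(q, k):
    """Dot-product score: q . k"""
    return sum(qi * ki for qi, ki in zip(q, k))

def general_score(q, k, W):
    """Multiplicative/general score: q . (W k)"""
    wk = [sum(wij * kj for wij, kj in zip(row, k)) for row in W]
    return dot_score(q, wk)

def additive_score(q, k, Wq, Wk, v):
    """Additive (Bahdanau-style) score: v . tanh(Wq q + Wk k)"""
    h = [math.tanh(sum(a * qj for a, qj in zip(rq, q)) +
                   sum(b * kj for b, kj in zip(rk, k)))
         for rq, rk in zip(Wq, Wk)]
    return sum(vi * hi for vi, hi in zip(v, h))

def general_tanh_score(q, k, W):
    """Extended multiplicative score with tanh (assumed form): tanh(q . (W k))"""
    return math.tanh(general_score(q, k, W))

# a decoder query attending over three encoder keys
q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights = softmax([dot_score(q, k) for k in keys])
print(weights)  # attention distribution over the three keys
```

The tanh wrapper bounds the score to (-1, 1), which flattens the softmax and can stabilize training relative to unbounded multiplicative scores.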

Keywords: attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence

Procedia PDF Downloads 45
1141 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes, and an accurate estimation model contributes to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still studying it in order to enhance prediction accuracy. For these reasons, we investigated and proposed a model based on LASSO and Elastic Net regression. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed, with 80% of the dataset used for training and 20% for testing. Following the split, the two algorithms (Elastic Net and LASSO regression) are trained in two phases. In the first phase, both algorithms are trained with their default parameters and evaluated on the test data. In the second phase, the grid search technique (searching a parameter grid to select optimal hyperparameters) with 5-fold cross-validation is used to obtain the final trained model, which is then evaluated on the test set. The experimental work was applied to an agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the methods they were compared against. Of the two, LASSO regression achieved the better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, MMRE of 0.0491, MMER of 0.0551, MdMRE of 0.0593, MdMER of 0.063, and MSE of 0.0007. This implies that the LASSO-trained model is the most acceptable and achieves higher estimation performance than existing results in the literature.
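The feature-selection behaviour that distinguishes LASSO from plain least squares comes from its soft-thresholding update. A minimal coordinate-descent sketch on toy data (not the study's pipeline, which used library implementations with grid search and 5-fold cross-validation):

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator, the source of LASSO's exact zero coefficients."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO for the objective 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the residual that excludes feature j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# toy data: the target depends only on the first feature (y = 3 * x1)
X = [[1.0, 1.0], [2.0, -1.0], [3.0, 1.0], [4.0, -1.0]]
y = [3.0, 6.0, 9.0, 12.0]
print(lasso_cd(X, y, lam=0.1))
```

With a small penalty the fit recovers the true coefficient on the first feature and zeroes out the irrelevant one; with a very large penalty every coefficient collapses to zero, which is the shrinkage behaviour the grid search tunes.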

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 20
1140 Prevalence and the Results of the Czech Nationwide Survey and Personality Traits of Adolescence Playing Computer Games

Authors: Jaroslava Sucha, Martin Dolejs, Helena Pipova, Panajotis Cakirpaloglu

Abstract:

The paper introduces a research project focused on evaluating the level of pathological involvement in computer or video game playing (including any games played on a screen, such as on a mobile phone or tablet). The study involves a representative sample of Czech adolescents between the ages of 11 and 19. This poster presents the psychometric indicators of a new psychological assessment method (mean, standard deviation, reliability, validity), which will be able to detect an acceptable level of game playing and, at the same time, detect and describe levels of gaming that might be potentially risky. The prevalence of risky computer game playing among Czech adolescents aged 11 to 19 will be reported. The study also aims to describe the personality profile of problematic players with respect to digital games; the research area encompasses risky behaviour, aggression, self-esteem, impulsivity, anxiety, and depression. The contribution introduces a new test method for the assessment of pathological computer game playing and provides the first screening information on computer game playing by Czech adolescents aged 11 to 19. The results clarify the relationship between playing computer games and selected personality characteristics, describing the personality of gamers who fall into the category of 'pathological computer game playing'.

Keywords: adolescence, computer games, personality traits, risk behaviour

Procedia PDF Downloads 209