Search results for: analytical validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3655

3175 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

Dyslexia and Dysgraphia, learning disabilities that impair reading and writing abilities, are a significant concern for the educational system. They are particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653; the VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891; and the MLP model demonstrated a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
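
The abstract names Grid Search CV for tuning the MLP hyperparameters. As a minimal illustration of the idea only (not the authors' code: the parameter names and the scoring function below are hypothetical stand-ins for cross-validated accuracy), an exhaustive grid search reduces to scoring every combination and keeping the best:

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustively score every combination in param_grid and
    return the best-scoring parameter set."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical stand-in for cross-validated MLP accuracy.
def toy_score(params):
    return -abs(params["hidden_units"] - 64) * 0.001 - abs(params["lr"] - 0.01)

grid = {"hidden_units": [32, 64, 128], "lr": [0.1, 0.01, 0.001]}
best, _ = grid_search(grid, toy_score)
```

In practice a library implementation would replace `toy_score` with k-fold cross-validated model accuracy, but the search loop itself is exactly this Cartesian product over the grid.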

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 64
3174 Stability Analysis of Tumor-Immune Fractional Order Model

Authors: Sadia Arshad, Yifa Tang, Dumitru Baleanu

Abstract:

A fractional-order mathematical model is proposed that incorporates CD8+ cells, natural killer (NK) cells, cytokines, and tumor cells. Tumor growth in the absence of an immune response is modeled by the logistic law, the simplest form whose predictions agree with experimental data. NK cells are the first line of defense: they kill tumor cells directly through several mechanisms, including the release of cytoplasmic granules containing perforin and granzyme and the expression of tumor necrosis factor (TNF) family members. The effect of NK cells on the tumor cell population is expressed by a product term, while a rational form is used to describe the interaction between CD8+ cells and tumor cells. NK cells produce a number of cytokines, including TNF, IFN, and interleukin-10 (IL-10); the source term for cytokines is modeled in Michaelis-Menten form to capture the saturated effects of the immune response. Stability of the equilibrium points is discussed for biologically significant values of the bifurcation parameters. Treatment of the fractional-order system is studied by investigating analytical conditions for tumor eradication, and numerical simulations are presented to illustrate the analytical results.
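
The abstract describes the model's ingredients verbally but gives no equations. Purely as an illustrative sketch (the symbols and parameters are generic placeholders, not the paper's actual system), a fractional-order tumor equation combining logistic growth, a product-form NK kill term, a rational CD8+ interaction, and a Michaelis-Menten cytokine source might read:

```latex
% Illustrative only -- generic symbols, not the paper's actual system
D^{\alpha} T = r\,T\!\left(1-\frac{T}{K}\right) - c_{1}\,N\,T - \frac{d\,T\,L}{s + T},
\qquad
D^{\alpha} C = \frac{p\,N\,T}{g + T} - \mu\,C,
```

where $T$, $N$, $L$, $C$ denote tumor cells, NK cells, CD8+ cells, and cytokines, $D^{\alpha}$ is a Caputo fractional derivative of order $0<\alpha\le 1$, and the rational terms capture the saturation the abstract describes.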

Keywords: cancer model, fractional calculus, numerical simulations, stability analysis

Procedia PDF Downloads 315
3173 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, are a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which limits their coverage and makes them time-consuming; as a result, diagnoses may be delayed and opportunities for early intervention lost. The project developed a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed optimal values to be identified for the model. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data; the VGG16 model achieved 0.9991 and 0.9891, respectively. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 114
3172 The Prognostic Prediction Value of Positive Lymph Nodes Numbers for the Hypopharyngeal Squamous Cell Carcinoma

Authors: Wendu Pang, Yaxin Luo, Junhong Li, Yu Zhao, Danni Cheng, Yufang Rao, Minzi Mao, Ke Qiu, Yijun Dong, Fei Chen, Jun Liu, Jian Zou, Haiyang Wang, Wei Xu, Jianjun Ren

Abstract:

We aimed to compare the prognostic prediction value of the positive lymph node number (PLNN) with that of the American Joint Committee on Cancer (AJCC) tumor, lymph node, and metastasis (TNM) staging system for patients with hypopharyngeal squamous cell carcinoma (HPSCC). A total of 826 patients with HPSCC from the Surveillance, Epidemiology, and End Results database (2004–2015) were identified and split into two independent cohorts: training (n=461) and validation (n=365). Univariate and multivariate Cox regression analyses were used to evaluate the prognostic effects of PLNN in patients with HPSCC, and six Cox regression models were further applied to compare the survival predictive values of PLNN and the AJCC TNM staging system. PLNN, divided into three groups (PLNN 0, PLNN 1–5, and PLNN >5), showed a significant association with overall survival (OS) and cancer-specific survival (CSS) (P < 0.001) in both univariate and multivariable analyses. In the training cohort, multivariate analysis revealed that increased PLNN of HPSCC gave rise to significantly poorer OS and CSS after adjusting for age, sex, tumor size, and cancer stage; this trend was also verified in the validation cohort. Additionally, the survival model incorporating a composite of PLNN and TNM classification (C-index 0.705, 0.734) performed better than the PLNN and AJCC TNM models alone. PLNN can serve as a powerful survival predictor for patients with HPSCC and is a useful supplement to cancer staging systems.

Keywords: hypopharyngeal squamous cell carcinoma, positive lymph nodes number, prognosis, prediction models, survival predictive values

Procedia PDF Downloads 154
3171 Stress Analysis of Tubular Bonded Joints under Torsion and Hygrothermal Effects Using DQM

Authors: Mansour Mohieddin Ghomshei, Reza Shahi

Abstract:

Laminated composite tubes with adhesively bonded joints are widely used in the aerospace and automotive industries as well as in the oil and gas industries. In this research, adhesively bonded tubular single-lap joints subjected to torsional and hygrothermal loadings are studied using the differential quadrature method (DQM). The analysis is based on classical shell theory. First, an approximate closed-form solution is developed by omitting the lateral deflections in the connecting tubes; using this analytical model, the circumferential displacements in the tubes and the shear stresses in the interfacing adhesive layer are determined. A numerical formulation is then presented using DQM in which the lateral deflections are taken into account. With the DQM formulation, the circumferential and radial displacements in the tubes as well as the shear and peel stresses in the adhesive layer are calculated. Results obtained from the proposed DQM solution agree well with those of the approximate analytical model and with published references. Finally, using the DQM model, parametric studies are carried out to investigate the influence of various parameters such as adhesive layer thickness, torsional loading, overlap length, tube radii, relative humidity, and temperature.
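
The paper's DQM formulation is not reproduced in the abstract, but the core of any differential quadrature method is a weighting-coefficient matrix that turns a derivative at a grid point into a weighted sum of nodal function values. A minimal sketch of the standard polynomial-based first-derivative weights (the Quan-Chang/Shu formula), verified here on a polynomial for which the rule is exact:

```python
def dqm_first_derivative_weights(x):
    """Differential-quadrature weights a[i][j] such that
    f'(x_i) ~ sum_j a[i][j] * f(x_j), exact for polynomials
    of degree < len(x)."""
    n = len(x)
    # M[i] = prod_{k != i} (x_i - x_k)
    M = [1.0] * n
    for i in range(n):
        for k in range(n):
            if k != i:
                M[i] *= x[i] - x[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = M[i] / ((x[i] - x[j]) * M[j])
        # Diagonal entry makes each row annihilate constants.
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

# Verify on f(x) = x^2 (derivative 2x) over a 5-point grid.
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
A = dqm_first_derivative_weights(grid)
deriv = [sum(A[i][j] * grid[j] ** 2 for j in range(5)) for i in range(5)]
```

In a shell-theory application like the paper's, matrices of this kind replace the spatial derivatives in the governing equations, turning them into an algebraic system in the nodal displacements.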

Keywords: adhesively bonded joint, differential quadrature method (DQM), hygrothermal, laminated composite tube

Procedia PDF Downloads 302
3170 An Axiomatic Model for Development of the Allocated Architecture in Systems Engineering Process

Authors: Amir Sharahi, Reza Tehrani, Ali Mollajan

Abstract:

The final step of the “Analytical Systems Engineering Process” is the “Allocated Architecture”, in which all Functional Requirements (FRs) of an engineering system must be allocated to their corresponding Physical Components (PCs). At this step, any allocated-architecture design in which no clear pattern assigns each PC exclusive “responsibility” for fulfilling its allocated FR(s) is considered a poor design, as it makes it difficult to determine which specific PC(s) have failed to satisfy a given FR. The present study applies the principles of the Axiomatic Design method to address this problem mathematically and establishes an “Axiomatic Model” as a solution for reaching good alternatives for developing the allocated architecture. The study proposes a “Loss Function” as a quantitative criterion for comparing non-ideal designs monetarily and choosing the one that imposes relatively lower cost on the system’s stakeholders. As a case study, we use the existing design of the U.S. electricity marketing subsystem, based on data provided by the U.S. Energy Information Administration (EIA). The result for 2012 shows the symptoms of a poor design and ineffectiveness due to coupling among the FRs of this subsystem.
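
In Axiomatic Design, the "responsibility" pattern can be read off the FR-PC design matrix: a diagonal matrix is uncoupled, a triangular one is decoupled, anything else is coupled. A toy classifier illustrating that taxonomy (a sketch only; a full check would also attempt the row/column reordering needed to test triangularizability):

```python
def classify_design(dm):
    """Classify a square FR-by-PC design matrix per Axiomatic Design:
    'uncoupled' (diagonal), 'decoupled' (triangular as given),
    otherwise 'coupled'. No row/column reordering is attempted."""
    n = len(dm)
    nonzero = [[dm[i][j] != 0 for j in range(n)] for i in range(n)]
    # Uncoupled: every FR depends on exactly its own PC.
    if all(nonzero[i][j] == (i == j) for i in range(n) for j in range(n)):
        return "uncoupled"
    # Decoupled: strictly triangular dependence (either orientation).
    lower = all(not nonzero[i][j] for i in range(n) for j in range(n) if j > i)
    upper = all(not nonzero[i][j] for i in range(n) for j in range(n) if j < i)
    if lower or upper:
        return "decoupled"
    return "coupled"
```

The 2012 case study's symptom of coupling corresponds to the third outcome: off-diagonal dependencies on both sides of the diagonal, so no PC can be held exclusively responsible for an FR.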

Keywords: allocated architecture, analytical systems engineering process, functional requirements (FRs), physical components (PCs), responsibility of a physical component, system’s stakeholders

Procedia PDF Downloads 408
3169 Integrative Transcriptomic Profiling of NK Cells and Monocytes: Advancing Diagnostic and Therapeutic Strategies for COVID-19

Authors: Salma Loukman, Reda Benmrid, Najat Bouchmaa, Hicham Hboub, Rachid El Fatimy, Rachid Benhida

Abstract:

This study uses integrated transcriptomic datasets from the GEO repository to investigate immune dysregulation in COVID-19, focusing on gene expression in NK cells and CD14+ monocytes (datasets GSE165461 and GSE198256, respectively). Additional datasets covering PBMCs, lung, olfactory and sensory epithelium, and lymph were used to provide robust validation of the results. This approach gives an integrated view of the immune responses in COVID-19 and points to a set of potential biomarkers and therapeutic targets with special regard to standard physiological conditions. IFI27, MKI67, CENPF, MBP, HBA2, TMEM158, THBD, HBA1, LHFPL2, SLA, and AC104564.3 were identified as key genes involved in critical biological processes related to inflammation, immune regulation, oxidative stress, and metabolism. These processes are important for understanding the heterogeneous clinical manifestations of COVID-19, from acute illness to the long-term effects now known as 'long COVID'. Subsequent validation with additional datasets consolidated these genes as robust biomarkers with an important role in diagnosing COVID-19 and predicting its severity; moreover, their enrichment in key pathophysiological pathways presents them as potential targets for therapeutic intervention. The results provide insight into the molecular dynamics of COVID-19 in NK cells and monocytes, and the study thus constitutes a solid basis for targeted diagnostic and therapeutic development, making relevant contributions to ongoing research efforts toward better management and mitigation of the pandemic.

Keywords: SARS-CoV-2, RNA-seq, biomarkers, severity, long COVID-19, bioanalysis

Procedia PDF Downloads 12
3168 Identification of Switched Reluctance Motor Parameters Using Exponential Swept-Sine Signal

Authors: Abdelmalek Ouannou, Adil Brouri, Laila Kadi, Tarik

Abstract:

The switched reluctance motor (SRM) is of major interest in many domains, such as electric vehicle drives, because of its wide speed range, high performance, low cost, and robustness under degraded conditions. The purpose of this paper is to develop a new analytical approach to modeling SRM parameters, together with an identification scheme for obtaining them. Since the SRM exhibits highly nonlinear behavior, and is normally operated in the magnetically saturated mode to maximize energy transfer, modeling these devices is difficult and an accurate model is needed. It is shown that the SRM can be accurately described by a generalized polynomial Hammerstein model, i.e., the parallel connection of several Hammerstein models having polynomial nonlinearity. An analytical identification method is developed using a chirp excitation signal, and the parameters of the obtained model are then determined using finite element method analysis. Finally, to show the effectiveness of the proposed method, the true and estimated models are compared; the output responses are very close.
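
The title's exponential swept-sine (chirp) excitation has a standard closed form whose instantaneous frequency rises geometrically from f1 to f2. A minimal sketch of generating such a signal (the sampling rate and frequency span below are arbitrary illustrative choices, not the paper's test conditions):

```python
import math

def exponential_sweep(f1, f2, duration, sample_rate):
    """Exponential (log) swept-sine from f1 to f2 Hz over `duration` s.
    Instantaneous frequency: f(t) = f1 * (f2/f1)**(t/duration)."""
    L = duration / math.log(f2 / f1)  # sweep rate constant
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * f1 * L * (math.exp(t / (L * sample_rate)) - 1))
            for t in range(n)]

# Illustrative 20 Hz -> 2 kHz sweep, 1 s at 8 kHz sampling.
sweep = exponential_sweep(20.0, 2000.0, 1.0, 8000)
```

Differentiating the phase confirms the design: the instantaneous frequency equals f1 at t = 0 and f2 at t = duration, which is what makes this excitation convenient for separating the polynomial branches of a Hammerstein model.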

Keywords: switched reluctance motor, swept-sine signal, generalized Hammerstein model, nonlinear system

Procedia PDF Downloads 237
3167 Effect of an Interface Defect in a Patch/Layer Joint under Dynamic Time Harmonic Load

Authors: Elisaveta Kirilova, Wilfried Becker, Jordanka Ivanova, Tatyana Petrova

Abstract:

The study continues research on the hygrothermal piezoelectric response of a smart patch/layer joint with an undesirable interface defect (gap) under dynamic time-harmonic mechanical and electrical loads and environmental conditions. To find the axial displacements, shear stress, and interface debond length in closed analytical form for different positions of the interface gap, a 1D modified shear-lag analysis is used. The debond length is expressed as a function of many parameters (frequency, magnitude, electric displacement, moisture and temperature, joint geometry, position of the gap along the interface, etc.). A genetic algorithm (GA) is then implemented to find the gap position along the interface at which a vanishing or minimal debond length is ensured, i.e., the most harmless position for safe operation of the structure. The illustrative example clearly shows that analytical shear-lag solutions and the GA method can be combined successfully to give an effective prognosis of interface shear stress and interface delamination in a patch/layer structure under combined loading with existing defects. To show the effect of the interface gap position, all obtained results are given in figures and discussed.
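
As an illustration of the optimization step, here is a toy real-coded GA minimizing a one-dimensional objective; the quadratic below is a hypothetical stand-in for the debond-length function of gap position, whose analytical form the abstract does not give:

```python
import random

def genetic_minimize(fitness, lo, hi, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA on [lo, hi]: elitism, tournament
    selection, blend crossover, gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=fitness)              # keep best unchanged
        nxt = [elite]
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)   # tournament
            b = min(rng.sample(pop, 3), key=fitness)
            w = rng.random()
            child = w * a + (1 - w) * b                # blend crossover
            child += rng.gauss(0, 0.05 * (hi - lo))    # mutation
            nxt.append(min(max(child, lo), hi))        # clamp to bounds
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical debond-length surrogate with its minimum at x = 0.3.
best = genetic_minimize(lambda x: (x - 0.3) ** 2, 0.0, 1.0)
```

In the paper's setting the fitness would be the closed-form shear-lag debond length evaluated at a candidate gap position, with the GA searching along the normalized interface coordinate.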

Keywords: genetic algorithm, minimal delamination, optimal gap position, shear lag solution

Procedia PDF Downloads 301
3166 Modified Newton's Iterative Method for Solving System of Nonlinear Equations in Two Variables

Authors: Sara Mahesar, Saleem M. Chandio, Hira Soomro

Abstract:

A nonlinear system of equations in two variables is a system containing terms of degree two or higher, or transcendental functions. Mathematical models of numerous physical problems take the form of systems of nonlinear equations, and solving them is a central challenge in applied and pure mathematics. Numerical techniques are mainly used where analytical methods fail; since no general analytical technique exists for finding the exact roots of a system of nonlinear equations, approximate iterative solutions are sought. Various methods have been proposed to solve such systems with improved convergence rate and accuracy. In this paper, a new scheme is developed for solving systems of nonlinear equations in two variables. The proposed iterative scheme is a modified form of the conventional Newton's method (CN): CN has order of convergence two, whereas the devised technique has order three. Furthermore, a detailed error and convergence analysis of the proposed method is given. Additionally, various numerical test problems are compared with the results of the conventional Newton's method, confirming the theoretical consequences of the proposed method.
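
The abstract does not specify the third-order modification, so for reference here is the conventional Newton baseline (CN) for two equations, with the 2x2 Jacobian system solved by Cramer's rule; the example system is illustrative:

```python
def newton_2d(f, g, jac, x, y, tol=1e-12, max_iter=50):
    """Conventional Newton's method for f(x,y)=0, g(x,y)=0.
    jac returns the Jacobian [[a, b], [c, d]] as a flat tuple;
    quadratic convergence near a simple root."""
    for _ in range(max_iter):
        fx, gx = f(x, y), g(x, y)
        if abs(fx) < tol and abs(gx) < tol:
            break
        a, b, c, d = jac(x, y)
        det = a * d - b * c
        # Cramer's rule for the Newton step J * delta = F.
        x -= (fx * d - b * gx) / det
        y -= (a * gx - c * fx) / det
    return x, y

# Example system: x^2 + y^2 = 4, x*y = 1.
f = lambda x, y: x * x + y * y - 4.0
g = lambda x, y: x * y - 1.0
jac = lambda x, y: (2 * x, 2 * y, y, x)
root = newton_2d(f, g, jac, 2.0, 0.5)
```

A third-order variant of the kind the abstract describes typically adds a corrector evaluation of f, g per iteration; the skeleton above is the common starting point for such modifications.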

Keywords: conventional Newton’s method, modified Newton’s method, order of convergence, system of nonlinear equations

Procedia PDF Downloads 257
3165 Oxidative Stability of an Iranian Ghee (Butter Fat) Versus Soybean Oil During Storage at Different Temperatures

Authors: Kooshan Nayebzadeh, Maryam Enteshari

Abstract:

In this study, the oxidative stability of soybean oil at different storage temperatures (4 and 25 °C) over a 6-month shelf life was investigated by various analytical methods and by headspace liquid-phase microextraction (HS-LPME) coupled to gas chromatography-mass spectrometry (GC-MS). Oxidation changes were monitored through the acid value (AV), peroxide value (PV), p-anisidine value (p-AV), thiobarbituric acid value (TBA), fatty acid profile, iodine value (IV), and oxidative stability index (OSI). In addition, concentrations of hexanal and heptanal, secondary volatile oxidation compounds, were determined by the HS-LPME/GC-MS technique. The rate of oxidation in soybean oil stored at 25 °C was considerably higher. The AV, p-AV, and TBA increased gradually over the 6 months, while the amount of unsaturated fatty acids, IV, and OSI decreased. The concentrations of both hexanal and heptanal and the PV increased during the first months of storage; then, at the end of the third and fourth months, a sudden decrease was observed in all three simultaneously, after which they increased again until the end of the shelf life. Temperature and time were thus effective factors in the oxidative stability of soybean oil. Strong correlations were also found for soybean oil stored at 4 °C between AV and TBA (r²=0.96), PV and p-AV (r²=0.9), IV and TBA (-r²=0.9), and p-AV and TBA (r²=0.99).

Keywords: headspace-liquid phase microextraction, oxidation, shelf-life, soybean oil

Procedia PDF Downloads 398
3164 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix; it has been widely used to overcome dietary nutrient deficiencies or to increase the nutritional value of food. Fortified foods must meet the needs of the population, taking into account their habits and any risks these foods may pose. Wheat and its by-products, such as semolina, are strongly indicated as food vehicles since they are widely consumed and used in the production of other foods, and they have been strategically used to add nutrients such as fiber. Methods of analysis and quantification of these components are destructive and require lengthy sample preparation and analysis; the industry has therefore searched for faster and less invasive methods, such as near-infrared (NIR) spectroscopy. NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields large amounts of data, so it requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited to NIR, since it can handle many spectra at a time and can be used for unsupervised classification. As a data-reduction technique, PCA reduces the spectra to a smaller number of latent variables for further interpretation. LDA, on the other hand, is a supervised method that searches for the canonical variables (CVs) with the maximum separation among categories; in LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable NIR spectrometer for identification and classification of pure and fiber-fortified semolina samples. Fiber was added to semolina at two different concentrations, and after spectra acquisition the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods can identify and classify the different samples in a fast and non-destructive way.
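
As a toy illustration of the PCA step: real NIR spectra have hundreds of wavelength variables and PCA would be computed by SVD, but a two-feature version with an analytic 2x2 eigendecomposition already shows the idea of extracting the dominant latent direction (the data below are made up):

```python
import math

def pca_2d(points):
    """First principal component of 2-feature data via the analytic
    eigendecomposition of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]].
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector (sxy, lam - sxx), normalized.
    vx, vy = sxy, lam - sxx
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Points scattered along y = x: the first PC should be (1, 1)/sqrt(2).
pc = pca_2d([(0, 0), (1, 1), (2, 2), (3, 3)])
```

Projecting each sample onto the leading components produces the latent-variable scores on which the unsupervised (PCA) and supervised (LDA) classification described above operates.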

Keywords: Chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 212
3163 The Influence of Intellectual Capital Disclosures on Market Capitalization Growth

Authors: Nyoman Wijana, Chandra Arha

Abstract:

Disclosure of Intellectual Capital (IC) is the presentation of corporate information assets that are not recorded in the financial statements. Such disclosure is helpful because it provides information about a company's intangible assets, and in the new economic era a company's intangible assets determine its competitive advantage. This study examined the effect of IC disclosure on market capitalization growth over the ten years 2002-2011. One hundred of the companies with the largest market capitalization in 2011 were traced back over the preceding decade, using data from 2002, 2005, 2008, and 2011; the data were acquired by content analysis. The analytical method used is Ordinary Least Squares (OLS), and the analysis tool is EViews 7, whose Pooled Least Square parameter estimation is specifically designed for panel data. The test results showed that disclosure levels affected market capitalization growth inconsistently across the observation years. These results are expected to motivate public companies in Indonesia to make more voluntary IC disclosures and to encourage regulators to issue comprehensive regulations so that all categories of IC must be disclosed by companies.
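
The OLS estimation at the core of the study reduces, in the single-regressor case, to closed-form slope and intercept (the study's pooled panel estimation in EViews 7 generalizes this to several regressors and cross-sections; the data below are illustrative):

```python
def ols_simple(x, y):
    """Ordinary least squares for y = b0 + b1*x (closed form):
    b1 = cov(x, y) / var(x), b0 = mean(y) - b1 * mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

# An exact line y = 2 + 3x should be recovered exactly.
b0, b1 = ols_simple([0, 1, 2, 3], [2, 5, 8, 11])
```

In the panel setting, the same least-squares principle is applied after stacking the firm-year observations, which is what the pooled estimator referred to above does.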

Keywords: IC disclosures, market capitalization growth, analytical method, OLS

Procedia PDF Downloads 340
3162 Experimental Modal Analysis of Kursuncular Minaret

Authors: Yunus Dere

Abstract:

Minarets are tower-like structures from which the Muslim call to prayer is performed; they have a symbolic meaning and a sacred place among Muslims. Being tall and slender, they are prone to damage under earthquakes and strong winds. The Kursuncular stone minaret was built around thirty years ago in Konya, Turkey; its core and helical stairs are made of reinforced concrete. Its stone spire was damaged during a light earthquake and was later replaced with a light material covered with lead sheets. In this study, the natural frequencies and mode shapes of the Kursuncular minaret are obtained experimentally and analytically. First, an ambient vibration test was carried out using a data acquisition system with accelerometers at four locations along the height of the minaret, and the collected vibration data were evaluated by operational modal analysis techniques. For the analytical part of the study, the dimensions of the minaret were accurately measured and a detailed 3D solid finite element model was generated. The moduli of elasticity of the stone and concrete were approximated using the compressive strengths obtained by Windsor pin tests. Finite element modal analysis of the minaret was then carried out to obtain the modal parameters. The experimental and analytical results are compared and found to be in good agreement.

Keywords: experimental modal analysis, stone minaret, finite element modal analysis, minarets

Procedia PDF Downloads 327
3161 An Assessment of Finite Element Computations in the Structural Analysis of Diverse Coronary Stent Types: Identifying Prerequisites for Advancement

Authors: Amir Reza Heydari, Yaser Jenab

Abstract:

Coronary artery disease, a common cardiovascular disease, is attributed to the accumulation of cholesterol-based plaques in the coronary arteries, leading to atherosclerosis. The disease is associated with risk factors such as smoking, hypertension, diabetes, and elevated cholesterol levels, and contributes to severe clinical consequences, including acute coronary syndromes and myocardial infarction. Treatment approaches range from lifestyle interventions to surgical procedures such as percutaneous coronary intervention and coronary artery bypass surgery. These interventions often employ stents, including bare-metal stents (BMS), drug-eluting stents (DES), and bioresorbable vascular scaffolds (BVS), each with its advantages and limitations. Computational tools have emerged as critical in optimizing stent designs and assessing their performance. The aim of this study is to provide an overview of finite element (FE) based computational studies of coronary stenting and to discuss the potential for development and clinical application of stent devices. Additionally, the importance of assessing the ability of computational models to represent real-world phenomena is emphasized, supported by recent guidelines from the American Society of Mechanical Engineers (ASME). Proposed validation processes include comparing model performance with in vivo, ex vivo, or in vitro data, alongside uncertainty quantification and sensitivity analysis. These methods can enhance the credibility and reliability of in silico simulations, ultimately aiding the assessment of coronary stent designs in various clinical contexts.

Keywords: atherosclerosis, materials, restenosis, review, validation

Procedia PDF Downloads 91
3160 Characterization of the Physicochemical Properties of Raw and Calcined Kaolinitic Clays Using Analytical Techniques

Authors: Alireza Khaloo, Asghar Gholizadeh-Vayghan

Abstract:

The present work focuses on the characterization of the physicochemical properties of kaolinitic clays in both raw and calcined (i.e., dehydroxylated) states. The properties investigated included the dehydroxylation temperature, chemical composition and crystalline phases, band types, kaolinite content, vitreous phase, and reactive and unreactive silica and alumina. The thermogravimetric analysis, X-ray diffractometry and infrared spectroscopy results suggest that full dehydroxylation takes place at 639 °C, converting kaolinite to reactive metakaolinite (Si₂Al₂O₇). Application of higher temperatures up to 800 °C leads to complete decarbonation of the calcite phase, and the kaolinite converts to mullite at temperatures exceeding 957 °C. Calcination at 639 °C was found to cause a 50% increase in the vitreous content of kaolin. Statistically meaningful increases in the reactivity of silica, alumina, calcite and sodium carbonate in kaolin were detected as a result of such thermal treatment. Such increases were found to be 11%, 47%, 240% and 10%, respectively. The ferrite phase, however, showed a 36% decline in reactivity. The proposed approach can be used as an analytical method to determine the viability of the source of kaolinite and proper physical and chemical modifications needed to enhance its suitability for geopolymer production.
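
Kaolinite content is commonly estimated from the dehydroxylation mass loss via the stoichiometry Al₂Si₂O₅(OH)₄ → Al₂Si₂O₇ + 2 H₂O. A sketch of that standard TGA calculation (the 12% mass-loss figure below is hypothetical, and the calculation assumes the chosen temperature window excludes calcite decarbonation):

```python
# Stoichiometry of dehydroxylation: Al2Si2O5(OH)4 -> Al2Si2O7 + 2 H2O
M_KAOLINITE = 258.16        # g/mol, kaolinite
M_WATER_LOST = 2 * 18.015   # g/mol of structural water released

def kaolinite_content(mass_loss_pct):
    """Estimate kaolinite wt% from the TGA mass loss (%) measured over
    the dehydroxylation step, attributing all of it to kaolinite water."""
    return mass_loss_pct * M_KAOLINITE / M_WATER_LOST

content = kaolinite_content(12.0)  # hypothetical 12 % mass loss -> ~86 wt%
```

The ratio M_KAOLINITE / M_WATER_LOST ≈ 7.17 is why even a modest dehydroxylation loss corresponds to a large kaolinite fraction.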

Keywords: physicochemical properties, dehydroxylation, kaolinitic clays, kaolinite content, vitreous phase, reactivity

Procedia PDF Downloads 163
3159 Evaluating the Terrace Benefits of Erosion in a Terraced-Agricultural Watershed for Sustainable Soil and Water Conservation

Authors: Sitarrine Thongpussawal, Hui Shao, Clark Gantzer

Abstract:

Terracing is a conservation practice that reduces erosion; it is widely used for soil and water conservation throughout the world but is relatively expensive. A modification of the Soil and Water Assessment Tool (called SWAT-Terrace, or SWAT-T) explicitly aims to improve the simulation of the hydrological process of erosion from terraces. SWAT-T simulates terrace erosion by separating each terrace into three segments instead of evaluating the entire terrace. The objective of this work is to evaluate the terrace benefits on erosion in the Goodwater Creek Experimental Watershed (GCEW) at the watershed and Hydrologic Response Unit (HRU) scales using SWAT-T. The HRU is the smallest spatial unit of the model, lumping all similar land uses, soils, and slopes within a sub-basin. The SWAT-T model was parameterized for slope length, steepness, and the empirical Universal Soil Loss Equation support-practice factor for the three terrace segments. Data measured at the watershed outlet from 1993-2010 were used for calibration and validation. SWAT-T calibration showed good agreement between measured and simulated erosion at the monthly time step, but validation performance was poor, probably because large storms in spring 2002 prevented planting, causing poorly simulated scheduling of actual field operations. To estimate the terrace benefits on erosion, models with and without terraces were compared. SWAT-T showed a significant ~3% reduction in erosion (Pr < 0.01) at the watershed scale and a ~12% reduction at the HRU scale. These results indicate that terraces help reduce erosion from terraced agricultural watersheds, and SWAT-T can be used in erosion evaluations for sustainable soil and water conservation.

Keywords: erosion, modeling, terraces, SWAT

Procedia PDF Downloads 207
3158 Solid Waste Disposal Site Selection in Thiruvananthapuram Corporation Area by Data Analysis Using GIS and Remote Sensing Tools

Authors: C. Asha Poorna, P. G. Vinod, A. R. R. Menon

Abstract:

The currently increasing population and its activities, such as urbanization and industrialization, are generating the greatest environmental issue: waste. A major problem in waste management is the selection of an appropriate disposal site, which is subject to environmental, economic, and political constraints. This paper discusses the strategies to be followed in selecting a site for a decentralized solid waste disposal system, using a Geographic Information System (GIS), the Analytical Hierarchy Process (AHP), and remote sensing methods, for the Thiruvananthapuram Corporation area. The area is located on the west coast of India near the extreme south of the mainland, on the shores of the Killiyar and the Karamana River; being on a river basin, waste management must be regulated with respect to the water bodies. The criteria considered for waste disposal site selection are lithology, surface water, aquifer, groundwater, land use, contours, aspect, elevation, slope, distance to roads, and distance from settlements. Each criterion was identified and weighted by its AHP score and mapped using GIS techniques, and a suitability map was prepared by overlay analysis.
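
The AHP weighting step can be sketched as follows. The pairwise-comparison values and the three criteria in the example are hypothetical (the paper weights eleven criteria), the method shown is the common normalized-column approximation of the principal eigenvector, and a full AHP would also compute a consistency ratio:

```python
def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix using
    the normalized-column / row-average approximation."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    normalized = [[pairwise[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]

# Hypothetical 3-criterion comparison on the 1-9 scale
# (e.g., surface water vs. land use vs. slope).
matrix = [[1.0,     3.0,     5.0],
          [1 / 3.0, 1.0,     3.0],
          [1 / 5.0, 1 / 3.0, 1.0]]
w = ahp_weights(matrix)
```

The resulting weights sum to one and rank the criteria in the order implied by the comparisons; in a GIS workflow each criterion raster is then multiplied by its weight before overlay analysis.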

Keywords: waste disposal, solid waste management, Geographic Information System (GIS), Analytical Hierarchy Process (AHP)

Procedia PDF Downloads 397
3157 Storage System Validation Study for Raw Cocoa Beans Using Minitab® 17 and R (R-3.3.1)

Authors: Anthony Oppong Kyekyeku, Sussana Antwi-Boasiako, Emmanuel De-Graft Johnson Owusu Ansah

Abstract:

In this observational study, the performance of a known conventional storage system was tested and evaluated for fitness for its intended purpose. The system's scope was extended to the storage of dry cocoa beans; its sensitivity, reproducibility, and uncertainties are not known in detail. This study discusses the system's performance in the context of existing literature on factors that influence the quality of cocoa beans during storage. Controlled conditions were defined precisely for the system to give a reliable baseline within specific established procedures. Minitab® 17 and R statistical software (R-3.3.1) were used for the statistical analyses. The approach to the storage system testing was to observe and compare, through laboratory test methods, the quality of the cocoa bean samples before and after storage. The samples were kept in Kilner jars, and the temperature of the storage environment was controlled and monitored over a period of 408 days. Standard test methods used in the international cocoa trade, such as the cut test analysis, moisture determination with the Aqua boy KAM III model, and bean count determination, were used for quality assessment. The data analysis treated the entire population as a sample in order to establish a reliable baseline for the data collected. The study concluded statistically significant mean values at the 95% Confidence Interval (CI) for the performance data analysed before and after storage for all variables observed. Correlational graphs showed a strong positive correlation for all variables investigated, with the exception of All Other Defects (AOD). The weak relationship between the before and after data for AOD had an explained variability of 51.8%, with the unexplained variability attributable to the uncontrolled condition of hidden infestation before storage. The study concluded that the storage system met a high performance criterion.
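
The before/after correlation analysis described above can be sketched with a paired comparison; the bean-count figures below are illustrative stand-ins, not the study's data:

```python
import math

# Hypothetical before/after bean-count readings (beans per 100 g).
before = [92, 95, 90, 97, 93, 96, 91, 94]
after  = [93, 96, 90, 98, 95, 97, 92, 95]

# Pearson correlation between before and after readings.
n = len(before)
mb = sum(before) / n
ma = sum(after) / n
cov = sum((b - mb) * (a - ma) for b, a in zip(before, after))
r = cov / math.sqrt(sum((b - mb) ** 2 for b in before) *
                    sum((a - ma) ** 2 for a in after))

# Paired t statistic on the per-sample differences.
d = [a - b for b, a in zip(before, after)]
md = sum(d) / n
sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))
t = md / (sd / math.sqrt(n))
print(round(r, 3), round(t, 3))
```

A strong positive r mirrors the "strong positive correlation" reported for most variables, while r² (the explained variability) is the quantity reported as 51.8% for AOD.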

Keywords: benchmarking performance data, cocoa beans, hidden infestation, storage system validation

Procedia PDF Downloads 174
3156 Applying Resilience Engineering to Improve Safety Management on a Construction Site: Design and Validation of a Questionnaire

Authors: M. C. Pardo-Ferreira, J. C. Rubio-Romero, M. Martínez-Rojas

Abstract:

Resilience Engineering is a new paradigm of safety management that proposes to change the way safety is managed, focusing on the things that go well instead of the things that go wrong. Many complex and high-risk sectors, such as air traffic control, health care, nuclear power plants, railways, and emergency services, have applied this new vision of safety and have obtained very positive results. In the construction sector, safety management continues to be a problem, as indicated by statistics on occupational injuries worldwide, so it is important to improve safety management in this sector. For this reason, it is proposed to apply Resilience Engineering to construction. The Construction Phase Health and Safety Plan emerges as a key element for the planning of safety management. One of the key tools of Resilience Engineering is the Resilience Assessment Grid, which allows measuring the four essential abilities for resilient performance: respond, monitor, learn, and anticipate. The purpose of this paper is to develop a questionnaire based on the Resilience Assessment Grid, specifically on the ability to learn, to assess whether a Construction Phase Health and Safety Plan helps companies on a construction site to implement this ability. The research process was divided into four stages: (i) initial design of a questionnaire, (ii) validation of the content of the questionnaire, (iii) redesign of the questionnaire, and (iv) application of the Delphi method. The resulting questionnaire could be used as a tool to help construction companies evolve from Safety-I to Safety-II. In this way, companies could begin to develop the ability to learn, which will serve as a basis for developing the other abilities necessary for resilient performance. The next steps in this research are to develop further questions for evaluating the remaining abilities for resilient performance, namely responding, monitoring, and anticipating.

Keywords: resilience engineering, construction sector, resilience assessment grid, construction phase health and safety plan

Procedia PDF Downloads 137
3155 Use of the SWEAT Analysis Approach to Determine the Effectiveness of a School's Implementation of Its Curriculum

Authors: Prakash Singh

Abstract:

The focus of this study is the use of the SWEAT analysis approach to determine how effectively a school, as an organization, has implemented its curriculum. To gauge the views of the teaching staff, unstructured interviews were employed, asking the participants for their ideas and opinions on three identified aspects of the school: instructional materials, media, and technology; teachers' professional competencies; and the curriculum. The investigation was based on the five key components of the SWEAT model: strengths, weaknesses, expectations, abilities, and tensions. The findings of this exploratory study underscore the significance of the SWEAT achievement model as a tool for strategic analysis in any organization, and further affirm its usefulness for human resource development. Employees have expectations, but competency gaps in their professional abilities may hinder them from fulfilling the tasks in their job descriptions, and tensions in the working environment can contribute to their experiences of tobephobia (fear of failure). The SWEAT analysis approach detects such shortcomings in any organization and can therefore culminate in the development of programmes to address them. The strategic SWEAT analysis process can draw a clear distinction between success and failure, and between mediocrity and excellence, in organizations. However, more research is needed on the effectiveness of the SWEAT analysis approach as a strategic analytical tool.

Keywords: SWEAT analysis, strategic analysis, tobephobia, competency gaps

Procedia PDF Downloads 507
3154 Materials and Techniques of Anonymous Egyptian Polychrome Cartonnage Mummy Mask: A Multiple Analytical Study

Authors: Hanaa A. Al-Gaoudi, Hassan Ebeid

Abstract:

The research investigates the materials and processes used in the manufacture of an Egyptian polychrome cartonnage mummy mask, with the aim of dating the object and establishing trade patterns for materials that were used and available in ancient Egypt. This anonymous-source object was held in the basement storage of the Egyptian Museum in Cairo (EMC) and has never been on display; no information is available regarding its owner, provenance, date, or even the time of its acquisition by the museum. Moreover, the object is in very poor condition: almost two-thirds of the mask is bent, and it has never received any previous conservation treatment. The research utilized well-established multi-analytical methods to identify the considerable diversity of materials used in the manufacture of the object. These methods include a Computed Tomography scan (CT scan) to acquire detailed images of the internal physical structure and the condition of the bent layers; a Dino-Lite portable digital microscope, scanning electron microscopy with an energy-dispersive X-ray spectrometer (SEM-EDX), and the non-invasive imaging technique of multispectral imaging (MSI) to obtain information about the physical characteristics and condition of the painted layers and to examine the microstructure of the materials; a portable XRF spectrometer (PXRF) and X-ray powder diffraction (XRD) to identify mineral phases and the bulk elemental composition of the gilded layer, ground, and pigments; Fourier-transform infrared spectroscopy (FTIR) to identify organic compounds and their molecular characterization; and accelerator mass spectrometry (AMS 14C) to date the object. Preliminary results suggest that there are no human remains inside the object and that the textile support consists of linen fibres in a 1/1 tabby weave, in very poor condition. Several pigments have been identified, including Egyptian blue, magnetite, Egyptian green frit, hematite, calcite, and cinnabar; the gilded layers are pure gold, the binding medium in the pigments is gum arabic, and animal glue was used in the textile support layer.

Keywords: analytical methods, Egyptian museum, mummy mask, pigments, textile

Procedia PDF Downloads 126
3153 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under two ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward backpropagation network. Results revealed that the GDM algorithm, with its adaptive learning capability, generally required less time in both the training and validation phases than the LM and Br algorithms, although learning was not always consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, average model performance efficiencies measured by the coefficient of efficiency (CE) were Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed best overall. Based on these findings, the adoption of ANN for real-time forecasting should employ training algorithms without the computational overhead of LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; Br and other forms of gradient descent with momentum should be adopted instead, considering overall time expenditure, forecast quality, and mitigation of network overfitting. On the whole, it is recommended that evaluation also consider the implications of (i) data quality and quantity and (ii) transfer functions for overall network forecast performance.
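
The update rule at the core of the GDM algorithm compared above can be sketched on a one-dimensional quadratic stand-in for a network loss surface; the learning rate and momentum values are assumptions for illustration:

```python
def gdm_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One gradient-descent-with-momentum update: the velocity
    accumulates past gradients, smoothing the descent direction."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimise f(w) = (w - 3)^2 as a stand-in for a network loss surface;
# the gradient is f'(w) = 2(w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    grad = 2.0 * (w - 3.0)
    w, v = gdm_step(w, grad, v)
print(round(w, 3))
```

Unlike LM, this update needs only the first derivative (no Hessian), which is the computational-overhead argument the abstract makes for preferring GDM-style algorithms in real-time forecasting.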

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 152
3152 Investigation of the Litho-Structure of Ilesa Using High Resolution Aeromagnetic Data

Authors: Oladejo Olagoke Peter, Adagunodo T. A., Ogunkoya C. O.

Abstract:

The research investigated the arrangement of geological features beneath Ilesa using aeromagnetic data. The data were subjected to various filtering and processing techniques, namely the Total Horizontal Derivative (THD), depth continuation, and analytical signal amplitude, using Geosoft Oasis Montaj 6.4.2 software. The Reduced-to-the-Equator Total Magnetic Intensity (RTE-TMI) results reveal significant magnetic anomalies, with high magnitudes (55.1 to 155 nT) predominantly in the northwest half of the area. Intermediate magnetic susceptibility, between 6.0 and 55.1 nT, dominates the eastern part, separated by depressions and uplifts. The southern part of the area exhibits a magnetic field of low intensity, from -76.6 to 6.0 nT. The lineaments exhibit lengths ranging from 2.5 to 16.0 km. The rose diagram and the analytical signal amplitude indicate structural styles mainly of E-W and NE-SW orientation, particularly evident in the western, SW, and NE regions, with an amplitude of 0.0318 nT/m. The identified faults demonstrate NNW-SSE, NNE-SSW, and WNW-ESE orientations at depths ranging from 500 to 750 m. Given the contrasting magnetic susceptibilities, the structural styles and orientations of the lineaments, and the identified faults and their depths, these lithological features could serve as a valuable foundation for assessing ground motion, particularly in the presence of sufficient seismic energy.

Keywords: lineament, aeromagnetic, anomaly, fault, magnetic

Procedia PDF Downloads 76
3151 Universality as Opportunity Domain behind the Threats and Challenges of Natural Disasters

Authors: Kunto Wibowo Agung Prodjonoto

Abstract:

Occasionally, opportunities arise not from chances but from threats. This is often overlooked, because a threat is perceived only as something that endangers or harms, producing bad impacts that are part of the risk and its consequences; attention therefore tends to focus on those bad impacts. Risk, however, can instead be seen as something challenging, and a challenge can become an opportunity to overcome an obstacle. It is therefore no exaggeration to treat risk as a challenge that presents an opportunity, and this applies to the threat of natural disasters as well: the very phrase 'natural disasters' captures the 'threats' aspect while implying a chance of opportunity. This is quite logical, as a SWOT (strengths, weaknesses, opportunities, threats) analysis can evaluate the situation at hand by weighing the various factors involved in formulating strategies to deal with natural disaster situations. The analytical method created by Albert Humphrey is not a tool that provides solutions, but 'opportunities and threats' are indeed examined within it along a single axis, with opportunities on the positive side and threats on the negative side. Observed through this dynamic, the challenges and threats of disasters are relevant to moral opportunities, which by quality exhibit populist-universalism characteristics, universalism characteristics, and regional characteristics. Here, universality appears as an opportunity domain beneath the threats and challenges of natural disasters.

Keywords: universality, opportunities, threats, challenges of natural disasters

Procedia PDF Downloads 151
3150 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, spurred the development of computational methods for the task. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proved NP-hard, a complexity explained by the Levinthal paradox. An alternative is the prediction of intermediate structures, such as the secondary structure of the protein. Artificial intelligence methods, including Bayesian statistics, artificial neural networks (ANNs), and support vector machines (SVMs), have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for this task. Recently published methods using this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. To pursue better results, SVM-based prediction methods have also been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method: the one proposed by Huang in 'Extracting Physicochemical Features to Predict Protein Secondary Structure' (2013). The developed ANN method follows the same training and testing process that Huang used to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of the two methods.
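
The Q3 metric and the three-fold split used in the evaluation protocol above can be sketched as follows; the residue labels are toy data, not sequences from CB513:

```python
import random

def q3_accuracy(true_ss, pred_ss):
    """Fraction of residues whose 3-state label (H/E/C) is predicted
    correctly; this is the standard Q3 score."""
    assert len(true_ss) == len(pred_ss)
    hits = sum(t == p for t, p in zip(true_ss, pred_ss))
    return hits / len(true_ss)

def three_fold_indices(n, seed=0):
    """Shuffle n sample indices and split them into three folds for
    three-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    k = n // 3
    return [idx[:k], idx[k:2 * k], idx[2 * k:]]

# Toy per-residue secondary-structure states, for illustration only.
true_ss = "HHHEEECCCHHH"
pred_ss = "HHHEECCCCHHH"
print(round(q3_accuracy(true_ss, pred_ss), 3))
```

In the cross-validation loop, each fold in turn serves as the test set while the other two train the model, and the three Q3 scores are averaged.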

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 621
3149 Developing a Knowledge-Based Lean Six Sigma Model to Improve Healthcare Leadership Performance

Authors: Yousuf N. Al Khamisi, Eduardo M. Hernandez, Khurshid M. Khan

Abstract:

Purpose: This paper presents a Knowledge-Based (KB) model using Lean Six Sigma (L6σ) principles to enhance the performance of healthcare leadership. Design/methodology/approach: Using L6σ principles to enhance healthcare leaders' performance requires a pre-assessment of the healthcare organisation's capabilities. The model is developed using a rule-based KB system approach; the KB system embeds Gauging Absence of Pre-requisites (GAP) for benchmarking and the Analytical Hierarchy Process (AHP) for prioritization. A comprehensive literature review covers the main contents of the model, with typical outputs of the GAP analysis and AHP. Findings: The proposed KB system benchmarks the current position of healthcare leadership against an ideal benchmark (resulting from extensive evaluation, by the KB/GAP/AHP system, of international leadership concepts in healthcare environments). Research limitations/implications: Future work includes validating the implementation model in healthcare environments around the world. Originality/value: This paper presents a novel application of a hybrid KB system combining the GAP and AHP methodologies. It implements L6σ principles to enhance healthcare performance, and this approach assists healthcare leaders' decision making in reaching performance improvement against a best-practice benchmark.

Keywords: Lean Six Sigma (L6σ), Knowledge-Based System (KBS), healthcare leadership, Gauge Absence Prerequisites (GAP), Analytical Hierarchy Process (AHP)

Procedia PDF Downloads 166
3148 Performance Comparison and Visualization of COMSOL Multiphysics, Matlab, and Fortran for Predicting the Reservoir Pressure on Oil Production in a Multiple Leases Reservoir with Boundary Element Method

Authors: N. Alias, W. Z. W. Muhammad, M. N. M. Ibrahim, M. Mohamed, H. F. S. Saipol, U. N. Z. Ariffin, N. A. Zakaria, M. S. Z. Suardi

Abstract:

This paper presents a performance comparison of computational software for solving problems with the boundary element method (BEM). The BEM formulation is a numerical technique with high potential for solving the advanced mathematical models that predict the production of arbitrarily shaped oil wells in multiple-lease reservoirs. The limited data available for validating that a program meets the accuracy of the mathematical model is the research motivation of this paper. Accordingly, three steps are involved in validating the accuracy of the oil production simulation. First, the mathematical model, a partial differential equation (PDE) of Poisson-elliptic type, is identified and the BEM discretization performed. Second, the 2D BEM discretization is implemented in COMSOL Multiphysics and in the MATLAB programming language. Third, the numerical performance indicators of both implementations are analyzed against a validating Fortran implementation. The performance comparisons are investigated in terms of percentage error, comparison graphs, and 2D visualization of the pressure on oil production in the multiple-lease reservoir. According to the performance comparison, structured programming in Fortran is the preferred alternative for implementing an accurate numerical simulation of the BEM. In conclusion, the numerical computation and performance evaluation show that Fortran, as a high-level language for numerical work, is well suited to capturing the visualization of production from arbitrarily shaped oil wells.
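
The validation pattern described above, comparing a numerical solution of a Poisson-type PDE against a reference and reporting percentage error, can be sketched as follows; a 1-D finite-difference solver stands in for the 2-D BEM discretization, which is beyond a short example:

```python
def solve_poisson_1d(n):
    """Solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0 using central
    differences on n interior points (Thomas algorithm)."""
    h = 1.0 / (n + 1)
    a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n   # sub/diag/super
    d = [h * h] * n                               # right-hand side
    for i in range(1, n):                         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n                                 # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

n = 99
u = solve_poisson_1d(n)
mid_exact = 0.5 * 0.5 * (1 - 0.5)   # exact u(x) = x(1-x)/2 at x = 0.5
pct_err = abs(u[n // 2] - mid_exact) / mid_exact * 100
print(pct_err)
```

The percentage error against the analytical solution is the same indicator the paper uses to rank the COMSOL, MATLAB, and Fortran implementations.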

Keywords: performance comparison, 2D visualization, COMSOL Multiphysics, MATLAB, Fortran, modelling and simulation, boundary element method, reservoir pressure

Procedia PDF Downloads 491
3147 A Mathematical Analysis of Behavioural Epidemiology: Drug Users' Transmission Dynamics Based on the Education Level of the Susceptible Population

Authors: Firman Riyudha, Endrik Mifta Shaiful

Abstract:

The spread of drug use is a kind of behavioral epidemic that threatens every country in the world. The problem causes several crises simultaneously: financial or economic, social, health, and ultimately human. Most drug users are teenagers of school age. A new deterministic model is constructed to determine the dynamics of the spread of drug use, taking the education level of the susceptible population into account. From the analytical model, two equilibrium points were obtained: E₀ (zero-user) and E₁ (endemic equilibrium). The existence of the equilibria and their local stability depend on the Basic Reproduction Ratio (R₀), defined as the expected number of secondary cases produced by a primary case in a virgin population over the primary case's spreading period. The zero-user equilibrium is locally asymptotically stable if R₀ < 1, while if R₀ > 1 the endemic equilibrium is locally asymptotically stable. The results show that R₀ is proportional to the rate of interaction between each education-level group of the susceptible population and the user population. It is concluded that interactions need to be controlled so that the drug user population can be minimized. Numerical simulations are also provided to support the analytical results.
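
The threshold behaviour around R₀ can be illustrated with a minimal two-compartment sketch (susceptible S, users U); the transmission and exit rates are assumed values, and this toy model omits the education-level structure of the full study:

```python
# Minimal SIS-style sketch: susceptibles become users at rate beta*S*U,
# users quit and return to S at rate mu*U. Parameters are illustrative.
def simulate(beta, mu, s0=0.99, u0=0.01, dt=0.01, steps=50_000):
    """Forward-Euler integration; returns the final user fraction."""
    s, u = s0, u0
    for _ in range(steps):
        ds = -beta * s * u + mu * u
        du = beta * s * u - mu * u
        s, u = s + dt * ds, u + dt * du
    return u

def r0(beta, mu):
    """Basic reproduction ratio for this toy model."""
    return beta / mu

# R0 < 1: the user population dies out; R0 > 1: an endemic level persists.
print(round(simulate(beta=0.2, mu=0.4), 4))   # R0 = 0.5
print(round(simulate(beta=0.8, mu=0.4), 4))   # R0 = 2
```

The two runs show the stability switch stated in the abstract: below the threshold the zero-user equilibrium attracts the dynamics, above it the endemic equilibrium does.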

Keywords: drug users, education level, mathematical model, stability

Procedia PDF Downloads 475
3146 A Stochastic Analytic Hierarchy Process Based Weighting Model for Sustainability Measurement in an Organization

Authors: Faramarz Khosravi, Gokhan Izbirak

Abstract:

A weighted stochastic Analytical Hierarchy Process (AHP) model is proposed for modeling the potential barriers to and enablers of sustainability, and for measuring and assessing an organization's sustainability level. For context-dependent potential barriers and enablers, the proposed model builds on the properties of the variables describing the sustainability functions and was developed into a realistic analytical model of the sustainable behavior of an organization, thereby serving as a means of measuring its sustainability. The main focus of this paper is the application of the AHP tool within a statistically based model for measuring sustainability, yielding a robust weighted stochastic AHP-based procedure. A case study of a widely reported major Canadian electric utility demonstrates the applicability of the developed model, and its results are compared with those of an equal-weighted model. Fluctuations in the company's sustainability over time were identified: the sustainability index for successive years changed from 73.12%, 79.02%, 74.31%, 76.65%, 80.49%, 79.81%, and 79.83% to the more exact values 73.32%, 77.72%, 76.76%, 79.41%, 81.93%, 79.72%, and 80.45%, according to the factor priorities obtained from expert views. By obtaining the necessary informative measurement indicators, the model can practically and effectively evaluate the sustainability of any organization and determine its fluctuations over time.
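
The stochastic weighting idea can be sketched by perturbing AHP-derived weights and propagating the uncertainty into a sustainability index; the indicator scores, weights, and noise level below are illustrative assumptions, not the utility's data:

```python
import random

random.seed(42)
scores = [0.8, 0.6, 0.9, 0.7]   # indicator performance in [0, 1]
base_w = [0.4, 0.3, 0.2, 0.1]   # AHP-derived weights (sum to 1)

def sampled_index(scores, base_w, noise=0.05):
    """Draw one sustainability index with Gaussian-perturbed weights,
    renormalised so they stay a valid weight vector."""
    w = [max(1e-9, bw + random.gauss(0, noise)) for bw in base_w]
    total = sum(w)
    w = [x / total for x in w]
    return sum(s * x for s, x in zip(scores, w))

draws = [sampled_index(scores, base_w) for _ in range(10_000)]
mean = sum(draws) / len(draws)
print(round(mean, 3))
```

The spread of the draws around the mean is what distinguishes the stochastic model from the equal-weighted one: year-to-year index fluctuations can then be read against the sampling uncertainty rather than taken at face value.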

Keywords: AHP, sustainability fluctuation, environmental indicators, performance measurement

Procedia PDF Downloads 121