Search results for: ontology validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1502

212 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remotely from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and the modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that a number of impact forces are simultaneously exerted on all potential locations, but the magnitude of all forces except one is zero, implying that the impact occurs only at one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from an impact at each potential location. The problem can be categorized into under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations) cases. The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are applied independently to regularize the problem and to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of the signal window on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to more complicated profiles. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.
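
A minimal, self-contained sketch of the core numerical step is given below: Tikhonov-regularized deconvolution of a half-sine force from a noisy sensor signal, with the regularization parameter chosen by Generalized Cross Validation. It is a single-location illustration of the idea, not the authors' extended multi-location formulation; the impulse response, sampling step, noise level and regularization grid are illustrative assumptions.

```python
# Sketch: Tikhonov-regularized deconvolution with GCV-based parameter selection.
# The impulse response h, sampling step, noise level and lambda grid are
# illustrative assumptions, not values from the study.
import numpy as np
from scipy.linalg import toeplitz

n, dt = 200, 1e-4
t = np.arange(n) * dt

# Assumed impulse response of the sensor/panel path (decaying oscillation).
h = np.exp(-t / 2e-3) * np.sin(2 * np.pi * 800 * t) * dt
H = toeplitz(h, np.zeros(n))                      # discretized convolution matrix

f_true = np.where(t < 1e-3, 50 * np.sin(np.pi * t / 1e-3), 0.0)  # half-sine force
y = H @ f_true
y = y + 0.01 * np.abs(y).max() * np.random.default_rng(0).standard_normal(n)

def tikhonov(lam):
    """Solve min ||H f - y||^2 + lam^2 ||f||^2 for the force history f."""
    return np.linalg.solve(H.T @ H + lam**2 * np.eye(n), H.T @ y)

def gcv(lam):
    """Generalized Cross Validation score for a given regularization parameter."""
    influence = H @ np.linalg.solve(H.T @ H + lam**2 * np.eye(n), H.T)
    residual = y - influence @ y
    return n * (residual @ residual) / np.trace(np.eye(n) - influence) ** 2

lam_best = min(np.logspace(-7, -2, 25), key=gcv)
f_rec = tikhonov(lam_best)
print("correlation(reconstructed, actual) =", np.corrcoef(f_rec, f_true)[0, 1])
```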

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 515
211 Simulation Study on Polymer Flooding with Thermal Degradation in Elevated-Temperature Reservoirs

Authors: Lin Zhao, Hanqiao Jiang, Junjian Li

Abstract:

Polymers injected into elevated-temperature reservoirs inevitably suffer from thermal degradation, resulting in severe viscosity loss and poor flooding performance. However, for polymer flooding in such reservoirs, present simulators fail to provide accurate results because they lack a description of thermal degradation. In light of this, the objectives of this paper are to provide a simulation model for polymer flooding with thermal degradation and to study the effect of thermal degradation on polymer flooding in elevated-temperature reservoirs. Firstly, a thermal degradation experiment was conducted to obtain the degradation law of polymer concentration and viscosity; different types of polymers were degraded in a thermostatic tank at elevated temperatures. Afterward, based on the obtained law, a streamline-assisted model was proposed to simulate the degradation process under in-situ flow conditions. Model validation was performed with field data from a well group of an offshore oilfield. Finally, the effect of thermal degradation on polymer flooding was studied using the proposed model. Experimental results showed that the polymer concentration remained unchanged, while the viscosity decayed exponentially with time during degradation. The polymer viscosity was functionally dependent on the polymer degradation time (PDT), which represents the elapsed time since the injection of the polymer particle. Tracing the real flow path of each polymer particle was therefore required, so the presented simulation model was streamline-assisted. An equation relating PDT to the time of flight (TOF) along a streamline was built from the law of polymer particle transport. Based on the field polymer samples and dynamic data, the new model proved accurate. The study of the degradation effect on polymer flooding indicated that: (1) the viscosity loss increased exponentially with TOF in the main body of the polymer slug and remained constant in the slug front; (2) the response time of polymer flooding was delayed, but the effective time was prolonged; (3) the breakthrough of subsequent water was eased; (4) the capacity of the polymer to adjust the injection profile was diminished; (5) the incremental recovery was reduced significantly. In general, the effect of thermal degradation on polymer flooding performance was rather negative. This paper provides a more comprehensive insight into polymer thermal degradation, in both the physical process and field application. The proposed simulation model offers an effective means for simulating the polymer flooding process with thermal degradation. The negative effect of thermal degradation suggests that polymer thermal stability should be given full consideration when designing a polymer flooding project in elevated-temperature reservoirs.
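
As an illustration of the degradation law described above, the short sketch below evaluates an exponential viscosity-decay model along a streamline, with the polymer degradation time (PDT) approximated by the time of flight (TOF). The initial viscosity, residual viscosity and rate constant are assumed placeholder values, not the experimental results.

```python
# Sketch: exponential thermal-degradation law for polymer viscosity, evaluated
# along a streamline via time of flight (TOF). The viscosities and the rate
# constant below are illustrative assumptions, not the study's measured values.
import numpy as np

mu0, mu_inf, k = 35.0, 3.0, 0.05       # initial/residual viscosity (mPa*s), 1/day

def viscosity(pdt_days):
    """Degraded viscosity after a given polymer degradation time (PDT)."""
    return mu_inf + (mu0 - mu_inf) * np.exp(-k * pdt_days)

# Along a streamline, a particle's PDT is its elapsed time since injection,
# approximated here by the cell's time of flight from the injector.
tof = np.linspace(0, 200, 11)           # days of transport along the streamline
print(np.round(viscosity(tof), 2))      # viscosity loss grows with TOF
```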

Keywords: polymer flooding, elevated-temperature reservoir, thermal degradation, numerical simulation

Procedia PDF Downloads 113
210 Assessment of the Effect of Ethanolic Leaf Extract of Annona squamosa L. on DEN-Induced Hepatocellular Carcinoma in Experimental Animals

Authors: Vanitha Varadharaj, Vijalakshmi Krishnamurthy

Abstract:

Annona squamosa Linn., commonly known as sugar apple and belonging to the family Annonaceae, is reported to show varied medicinal effects, including insecticidal, antiovulatory and abortifacient activities. The alkaloids and flavonoids present in Annona squamosa leaves have been shown to have antioxidant activity. The present work was planned to investigate the effect of the ethanolic leaf extract of Annona squamosa on DEN-induced Wistar albino rats. The study analyzed biochemical parameters such as total proteins, bilirubin, enzymatic and non-enzymatic antioxidants, marker enzymes and tumor markers in serum, and histopathological studies of the liver were carried out in control and DEN-induced rats. Supplementation with ELAS (ethanolic leaf extract of Annona squamosa) reduced the liver weight and also reduced the tumour incidence. The chemoprevention group showed near-normal values of bilirubin when compared with the control rats. Total protein was decreased in the cancer-bearing group, and on treatment with the extract the protein levels were restored. In both the pre- and post-treatment groups, the activities of enzymatic antioxidants such as superoxide dismutase, catalase and glutathione peroxidase were increased, with pre-treatment being more effective than post-treatment. The non-enzymatic antioxidants such as vitamin C and vitamin E were brought back to normal levels significantly in post- and pre-treated animals. Activities of marker enzymes such as SGOT, SGPT, ALP and γ-GT were significantly elevated in the serum of cancer-bearing animals, and the values returned to normal after treatment with the extract, suggesting the hepatoprotective effect of the extract. Lipid peroxides were found to be elevated in the cancer-induced group; this condition was brought back to normal in the pre- and post-treated animals given ELAS. Histological examination also confirmed the anti-carcinogenic potential of ELAS. The cancer-induced group had a three-fold increase in AFP values when compared to the other groups. DEN treatment increased the level of AFP expression, while ELAS partially counteracted this effect. The scientific validation obtained from this study may pave the way for researchers to develop new drugs from Annona squamosa for various ailments.

Keywords: Annona squamosa, biochemical parameters, cancer, leaf extract

Procedia PDF Downloads 309
209 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face some barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M Sodium Edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analysis in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% Confidence Interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors were minimized in calculations when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
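
The sketch below illustrates, in spirit, the two automated calculation steps (EDTA standardization and the complexometric assay) plus the paired comparison of manual versus spreadsheet results. It assumes a generic 1:1 Zn:EDTA stoichiometry; the formulas, masses, volumes and result values are illustrative placeholders rather than the USP monograph procedure or the study's data.

```python
# Sketch of generic complexometric-assay calculations (1:1 Zn:EDTA assumed) and
# a paired t-test comparing manual vs. spreadsheet results. All numbers are
# made-up placeholders for illustration only.
from scipy import stats

ZN_ATOMIC_MASS = 65.38  # g/mol

def edta_molarity(zinc_std_mg, titre_ml):
    """Standardize EDTA against a zinc standard: mg/(g/mol) = mmol; mmol/mL = mol/L."""
    return (zinc_std_mg / ZN_ATOMIC_MASS) / titre_ml

def percent_label_claim(titre_ml, m_edta, avg_tablet_mg, sample_mg, label_zn_mg):
    """Zinc found per tablet as a percentage of the label claim (illustrative formula)."""
    zn_mg_in_sample = titre_ml * m_edta * ZN_ATOMIC_MASS   # mL * mol/L * g/mol = mg
    zn_mg_per_tablet = zn_mg_in_sample * avg_tablet_mg / sample_mg
    return 100 * zn_mg_per_tablet / label_zn_mg

m = edta_molarity(zinc_std_mg=200.0, titre_ml=30.5)          # roughly 0.1 M
print("EDTA molarity:", round(m, 4))
print("assay:", round(percent_label_claim(3.05, m, avg_tablet_mg=350.0,
                                          sample_mg=350.0, label_zn_mg=20.0), 1),
      "% of label claim")

# Paired comparison of manual vs. spreadsheet results (placeholder values).
manual = [99.2, 100.5, 98.7, 101.1, 99.8]
sheet = [99.3, 100.4, 98.8, 101.0, 99.9]
print(stats.ttest_rel(manual, sheet))
```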

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 151
208 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area and thus plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects in Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. In addition, robustness is also an important concern, as it is related to manufacturing costs as well as to performance after the ageing of components like shock absorbers. In this paper, an approach is proposed to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent a black-box system. Secondly, within several iterations, an optimum set is proposed and validated, which forms a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the combination of parameter tolerances with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example, applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove the good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
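
A minimal one-dimensional sketch of the surrogate-modelling idea is shown below: a Chebyshev expansion fitted to an "expensive" response at a small design of experiments, then evaluated over an uncertain-but-bounded parameter interval to bound the response. The toy response function, polynomial degree and tolerance interval are illustrative assumptions, not the Renault chassis model or the adaptive-sparse PCE implementation.

```python
# Sketch: Chebyshev surrogate of an expensive response, used to bound the
# response over an uncertain-but-bounded parameter interval. The response
# function, degree and tolerance are illustrative assumptions.
import numpy as np
from numpy.polynomial import chebyshev as C

def expensive_response(x):
    """Toy black-box response over a design parameter x in [-1, 1]."""
    return np.exp(-0.5 * x) * np.cos(3 * x)

# Design of experiments at Chebyshev-Gauss nodes, then a degree-8 expansion.
nodes = np.cos(np.pi * (np.arange(9) + 0.5) / 9)
coeffs = C.chebfit(nodes, expensive_response(nodes), 8)

# Robustness indicator: response interval over a bounded parameter tolerance.
tol = 0.3 + np.linspace(-0.2, 0.2, 201)          # nominal 0.3 with +/- 0.2 tolerance
surrogate = C.chebval(tol, coeffs)
print("predicted response interval:", surrogate.min(), surrogate.max())
```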

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 133
207 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional neural networks and recursive neural networks has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Nonlinear feature extraction by a neural network involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation to extract the frequency components has been optimized by a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The paper's findings show the ability to improve the classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization of the neural networks enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
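
The spectrogram pre-processing step can be illustrated as below: a windowed Fourier transform of a detector pulse train, flattened into a feature vector for a classifier. The synthetic pulse train, sampling rate, window choice and noise level are illustrative assumptions, not the Geant4/CdTe simulation settings.

```python
# Sketch: spectrogram pre-processing of a detector pulse train with a chosen
# windowing function. The signal, sampling rate and window are illustrative.
import numpy as np
from scipy.signal import spectrogram

fs = 1e6                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
rng = np.random.default_rng(1)

# Toy pulse train with exponential decays plus normally distributed readout noise.
signal = np.zeros_like(t)
for t0 in rng.uniform(0, 0.01, 20):
    signal += np.exp(-(t - t0).clip(0) / 5e-5) * (t >= t0)
signal += rng.normal(0.0, 0.05, t.size)

# The windowing function matters; a Hann window is used here as one option.
f, tt, Sxx = spectrogram(signal, fs=fs, window="hann", nperseg=256, noverlap=128)
features = np.log1p(Sxx).ravel()           # flattened spectrogram as one training sample
print(Sxx.shape, features.shape)
```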

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 123
206 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters of a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image size under testing (64, 96, 128, 180 and 224), which gave insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly related to the datasets. For the batch size evaluation, the results show that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 decreases the accuracy rate to 86.5% at the 11th epoch, and to 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset. From a practical point of view, a filter size of 20 produces an accuracy of 70.4286%. The final image size experiment shows that the accuracy improvement depends on image size; however, the performance gain was computationally expensive. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 147
205 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging

Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant nodules. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After a manual segmentation of the lesions, done by a radiologist, and the extraction of 150 radiomic features (30 features at each of 5 subsequent time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: A Naive Bayes algorithm working on 79 features selected by the TWIST system proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78% and a global accuracy of 87% (average values of the two training-testing procedures ab-ba). The results showed that, in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules that the expert radiologist could not classify. Conclusion: In this pilot study we identified a radiomic approach allowing ML systems to perform well in the diagnosis of a non-specific nodule at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
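
A minimal sketch of the training-testing crossover (ab-ba) with a Naive Bayes classifier on radiomic features is shown below. The data are synthetic placeholders, and the TWIST evolutionary feature selection is replaced by a generic univariate selector purely for illustration.

```python
# Sketch: ab-ba crossover with Gaussian Naive Bayes on radiomic features.
# Synthetic data; SelectKBest stands in for the study's TWIST feature selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(75, 150))            # 75 lesions x 150 radiomic features (synthetic)
y = rng.integers(0, 2, 75)                # 0 = benign, 1 = malignant (synthetic)

halves = np.array_split(rng.permutation(75), 2)
scores = []
for train_idx, test_idx in [(halves[0], halves[1]), (halves[1], halves[0])]:  # ab, ba
    selector = SelectKBest(f_classif, k=79).fit(X[train_idx], y[train_idx])
    model = GaussianNB().fit(selector.transform(X[train_idx]), y[train_idx])
    pred = model.predict(selector.transform(X[test_idx]))
    scores.append((recall_score(y[test_idx], pred),                  # sensitivity
                   recall_score(y[test_idx], pred, pos_label=0)))    # specificity
print("mean (sensitivity, specificity):", np.mean(scores, axis=0))
```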

Keywords: breast, machine learning, MRI, radiomics

Procedia PDF Downloads 252
204 Optimizing Oil Production through 30-Inch Pipeline in Abu-Attifel Field

Authors: Ahmed Belgasem, Walid Ben Hussin, Emad Krekshi, Jamal Hashad

Abstract:

Waxy crude oil, characterized by its high paraffin wax content, poses significant challenges in the oil & gas industry due to its increased viscosity and semi-solid state at reduced temperatures. The wax formation process, which includes precipitation, crystallization, and deposition, becomes problematic when crude oil temperatures fall below the wax appearance temperature (WAT) or cloud point. Addressing these issues, this paper introduces a technical solution designed to mitigate wax appearance and enhance the oil production process in the Abu-Attifel Field via a 30-inch crude oil pipeline. A comprehensive flow assurance study validates the feasibility and performance of this solution across various production rates, temperatures, and operational scenarios. The study's findings indicate that maintaining the crude oil's temperature above a minimum threshold of 63°C is achievable through the strategic placement of two heating stations along the pipeline route. This approach effectively prevents wax deposition, gelling, and subsequent mobility complications, thereby bolstering the overall efficiency, reliability, safety, and economic viability of the production process. Moreover, this solution significantly curtails the environmental repercussions traditionally associated with wax deposition, which can accumulate up to 7,500 kg. The research methodology involves a comprehensive flow assurance study to validate the feasibility and performance of the proposed solution. The study considers various production rates, temperatures, and operational scenarios. It includes crude oil analysis to determine the wax appearance temperature (WAT), as well as the evaluation and comparison of operating options for the heating stations. The study's findings indicate that the proposed solution effectively prevents wax deposition, gelling, and subsequent mobility complications. By maintaining the crude oil's temperature above the specified threshold, the solution improves the overall efficiency, reliability, safety, and economic viability of the oil production process. Additionally, the solution contributes to reducing the environmental repercussions associated with wax deposition. In conclusion, the research presents a technical solution that optimizes oil production in the Abu-Attifel Field by addressing wax formation problems through the strategic placement of two heating stations. The solution effectively prevents wax deposition, improves overall operational efficiency, and contributes to environmental sustainability. Further research is suggested for field data validation and cost-benefit analysis.
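
A much-simplified steady-state sketch of the underlying heat-loss calculation is given below: exponential cooling of the crude along the pipeline, used to estimate how far the oil can travel before falling to the 63 °C threshold and therefore where reheating is needed. It neglects wax deposition and the heat of crystallization, and the heat transfer coefficient, flow rate, specific heat, diameter, inlet and ambient temperatures are illustrative assumptions, not the flow assurance study's inputs.

```python
# Sketch: steady-state temperature decay along a buried/insulated pipeline,
# T(x) = T_amb + (T_in - T_amb) * exp(-U * pi * D * x / (m_dot * cp)).
# All parameter values below are illustrative assumptions.
import numpy as np

T_in, T_amb, T_min = 85.0, 30.0, 63.0      # inlet, ambient (assumed); 63 C threshold
U = 2.0                                    # overall heat transfer coefficient, W/m^2K
D = 0.762                                  # 30-inch pipe diameter, m
m_dot, cp = 300.0, 2100.0                  # mass flow (kg/s) and specific heat (J/kgK)

def temperature(x_m, T_start):
    """Crude temperature after travelling x_m metres from a station at T_start."""
    return T_amb + (T_start - T_amb) * np.exp(-U * np.pi * D * x_m / (m_dot * cp))

# Distance at which the crude cools to the minimum allowed temperature:
x_reheat = -(m_dot * cp) / (U * np.pi * D) * np.log((T_min - T_amb) / (T_in - T_amb))
print(f"reheat within ~{x_reheat / 1000:.0f} km of the previous heating station")
```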

Keywords: oil production, wax depositions, solar cells, heating stations

Procedia PDF Downloads 48
203 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale

Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize

Abstract:

Spectroscopy has been widely used in the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study consisted of 60 soil samples from a maize farm, divided into four different treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Device (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. Continuum removal was implemented in the R programming language to enhance the absorption features of soil moisture and to understand its spectral behavior at different water content levels precisely. Statistical analyses using partial least squares regression (PLSR) models were performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied to various fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by various factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems. The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R² and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
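
The study implemented continuum removal in R; the sketch below illustrates the same two steps in Python: a straight-line continuum removal over the 1900 nm absorption feature followed by a PLSR fit of soil moisture content. The spectra, moisture values and feature shoulders are synthetic placeholders, not the ASD field data.

```python
# Sketch: simplified (straight-line) continuum removal over one absorption
# feature, then PLSR of soil moisture. Synthetic spectra and moisture values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

wl = np.arange(350, 2501)                          # wavelengths, nm
rng = np.random.default_rng(0)
moisture = rng.uniform(5, 35, 60)                  # 60 samples, % (synthetic)

# Synthetic spectra: a water absorption at 1900 nm that deepens with moisture.
base = 0.4 + 0.0001 * (wl - 350)
spectra = base - np.outer(moisture / 100, np.exp(-((wl - 1900) / 60.0) ** 2))
spectra += rng.normal(0, 0.002, spectra.shape)

def continuum_removed(refl, lo=1800, hi=2000):
    """Divide reflectance by a straight-line continuum between two shoulders."""
    i, j = lo - 350, hi - 350
    cont = np.linspace(refl[:, i], refl[:, j], j - i + 1, axis=0).T
    return refl[:, i:j + 1] / cont

X = continuum_removed(spectra)
pls = PLSRegression(n_components=5).fit(X, moisture)
pred = pls.predict(X).ravel()
print("RMSE:", np.sqrt(np.mean((moisture - pred) ** 2)))
```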

Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy

Procedia PDF Downloads 66
202 Evaluation of the Anti-Ulcer Activity of Ethyl Acetate Fraction of Methanol Leaf Extract of Clerodendrum capitatum

Authors: M. N. Ofokansi, Onyemelukwe Chisom, Amauche Chukwuemeka, Ezema Onyinye

Abstract:

The leaves of Clerodendrum capitatum (Lamiaceae) are widely used in the treatment of gastric ulcer in Nigerian folk medicine. The aim of this study was to evaluate the antiulcer activity of its crude methanol leaf extract and its ethyl acetate fraction in white albino rats. The effect of the crude methanol leaf extract and its ethyl acetate fraction (250 mg/kg, 500 mg/kg) was evaluated using an absolute ethanol-induced ulcer model. The crude methanol leaf extract and the ethyl acetate fraction were treated with distilled water and 6% Tween 80, respectively. The crude methanol leaf extract was further investigated using a pylorus ligation-induced ulcer model. Omeprazole was used as the standard treatment. Four groups of five albino rats of either sex were used. Parameters such as mean ulcer index and percentage ulcer protection were assessed in the ethanol-induced ulcer model, while the gastric volume, pH, and total acidity were assessed in the pyloric ligation-induced ulcer model. The crude methanol leaf extract of Clerodendrum capitatum (500 mg/kg) showed a very highly significant reduction in mean ulcer index (p < 0.001) in the absolute ethanol-induced model. The ethyl acetate fraction of the crude methanol leaf extract (250 mg/kg, 500 mg/kg) showed a very highly significant dose-dependent reduction in mean ulcer indices (p < 0.001) in the same model. The mean ulcer indices (1.6 and 2.2) at ethyl acetate fraction doses of 250 mg/kg and 500 mg/kg corresponded to ulcer protection of 82.85% and 76.42%, respectively, when compared to the control group in the absolute ethanol-induced ulcer model. Animals treated with the crude methanol leaf extract of Clerodendrum capitatum (250 mg/kg, 500 mg/kg) showed a highly significant dose-dependent reduction in mean ulcer index (p < 0.01), with ulcer protection of 56.77% and 63.22%, respectively, in the pyloric ligation-induced ulcer model. Gastric parameters such as volume of gastric juice, pH, and total acidity showed no significant differences between the different doses of the crude methanol leaf extract and the control group. The phytochemical investigation showed that the crude methanol leaf extract possesses saponins and flavonoids, while its ethyl acetate fraction possesses only flavonoids. The results of the study indicate that the crude methanol leaf extract and its ethyl acetate fraction are effective and have gastroprotective and ulcer-healing capacity. The ethyl acetate fraction is more potent than the crude methanol leaf extract against ethanol-induced ulcers. This result provides scientific evidence validating the plant's folkloric use in the treatment of gastric ulcer.

Keywords: gastroprotective, herbal medicine, anti-ulcer, pharmacology

Procedia PDF Downloads 137
201 Body Fluids Identification by Raman Spectroscopy and Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry

Authors: Huixia Shi, Can Hu, Jun Zhu, Hongling Guo, Haiyan Li, Hongyan Du

Abstract:

The identification of human body fluids during forensic investigations is a critical step in determining key details and presenting strong evidence against a criminal in a case. With the popularity of DNA analysis and improved detection technology, several questions must be resolved: whether a suspect's DNA derived from saliva or semen, from menstrual or peripheral blood; how to confirm that a red substance or an aged trace found at the scene is blood; and how to determine who contributed to a mixed stain. In recent years, molecular approaches based on mRNA, miRNA, DNA methylation and microbial markers have been developed increasingly, but they have the disadvantages of being expensive, time-consuming, and destructive. Physicochemical methods such as scanning electron microscopy/energy spectroscopy and X-ray fluorescence are utilized frequently, but their results show only one or two characteristics of the body fluid itself, and they fail to work on unknown or mixed body fluid stains. This paper focuses on using two chemical methods, Raman spectroscopy and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, to discriminate peripheral blood, menstrual blood, semen, saliva, vaginal secretions, urine and sweat. Firstly, the non-destructive, confirmatory, convenient and fast Raman spectroscopy method, combined with the more accurate matrix-assisted laser desorption/ionization time-of-flight mass spectrometry method, can fully distinguish one body fluid from the others. Secondly, 11 spectral signatures and specific metabolic molecules were obtained from the analysis of 70 samples. Thirdly, the Raman results showed that peripheral and menstrual blood, and saliva and vaginal secretions, have highly similar spectroscopic features. Advanced statistical analysis of the multiple Raman spectra is required to classify one from another. On the other hand, it seems that lactic acid can differentiate peripheral from menstrual blood when detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, but it is not a specific metabolic molecule; more sensitive ones will be analyzed in a future study. These results demonstrate the great potential of the developed chemical methods for forensic applications, although more work is needed for method validation.

Keywords: body fluids, identification, Raman spectroscopy, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry

Procedia PDF Downloads 107
200 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of various soil testing methods such as the four-probe soil electrical resistivity method and cone penetration test (CPT) that can complement a newly developed novel rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive. Thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system to further develop this innovative non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). It was found from the previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added onto the computer vision scheme: the apparent electrical resistivity of soil (ρ) measured using a set of four probes arranged in Wenner’s array, the soil strength measured using a modified mini cone penetrometer, and w measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay,” and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementing methods to the computer vision system.
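
A minimal sketch of the computer-vision part of the scheme is given below: GLCM textural parameters extracted from a soil image and fed to a small neural-network classifier. The images are random placeholders, and the function names assume scikit-image 0.19 or later (graycomatrix/graycoprops; older releases spell them greycomatrix/greycoprops).

```python
# Sketch: GLCM textural parameters from soil images, classified by a small ANN.
# Images and labels are synthetic placeholders, not the SG soil photographs.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(img_8bit):
    """Contrast, homogeneity, energy and correlation from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(img_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
# Synthetic stand-ins: one class noisier, one class more coarsely quantized.
good_earth = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
soft_clay = [(rng.integers(0, 256, (64, 64)) // 4 * 4).astype(np.uint8) for _ in range(20)]

X = np.array([glcm_features(im) for im in good_earth + soft_clay])
y = np.array([0] * 20 + [1] * 20)                        # 0 = "Good Earth", 1 = "Soft Clay"
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```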

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 214
199 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluting components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of this method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all 3 QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast high-throughput sample analysis, with a lower analysis cost, for analyzing aminophylline in biological samples. In the proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
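
Two of the reported validation calculations, calibration linearity over 0.5-40.0 µg/mL and extraction recovery at the three QC levels, can be reproduced with a few lines, as sketched below. The peak-area ratios are made-up placeholders, not the study's chromatographic data.

```python
# Sketch: calibration linearity (least-squares fit, r^2) and extraction recovery.
# Peak-area ratios below are illustrative placeholders.
import numpy as np

conc = np.array([0.5, 1, 2, 5, 10, 20, 40])                     # calibrators, ug/mL
area = np.array([0.051, 0.10, 0.21, 0.49, 1.02, 1.98, 4.05])    # analyte/IS ratio

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"calibration: y = {slope:.4f}x + {intercept:.4f}, r^2 = {r2:.4f}")

# Extraction recovery: response of extracted QC samples vs. neat solutions.
extracted = np.array([0.46, 0.95, 3.78])                        # low, mid, high QC
neat = np.array([0.49, 1.02, 4.05])
print("recovery (%):", np.round(100 * extracted / neat, 1))
```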

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 197
198 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today, websites contain very interesting applications, but there are only a few methodologies to analyze user navigation through a website and to determine whether the website is put to correct use. Web logs are typically consulted only if some major attack or malfunction occurs, yet they contain a lot of interesting information about the users of the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy because of the size, distribution and importance of minor details in each log. Web logs contain very important data about the user and the site, which have not been put to good use. Retrieving interesting information from the logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site to build an effective and efficient site. The model we built is able to detect attacks or malfunctions of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the URL indices of each web session. Then DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering results for the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parameter values into subsequent runs. If a cluster is found to be too large, then micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Associative Rule Learning Module. If it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters; if the sequence is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
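
A minimal sketch of the clustering stage is shown below: web sessions encoded as bag-of-URL-index vectors (in the spirit of the Indexed URLs and Web Sessions files), clustered with DBSCAN and self-evaluated with the silhouette coefficient to pick a better parameter value. The sessions and URL indices are toy placeholders.

```python
# Sketch: DBSCAN clustering of web sessions with silhouette-based self-evaluation.
# Sessions are toy bag-of-URL-index vectors; -1 labels mark potential anomalies.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

n_urls = 50
sessions = [[0, 1, 2], [0, 1, 3], [0, 2, 3],              # browsing-like sessions
            [10, 11, 12, 13], [10, 12, 13], [11, 12, 13],
            [40, 41], [40, 42], [41, 42]]                  # e.g. admin/login pages

X = np.zeros((len(sessions), n_urls))
for row, s in enumerate(sessions):                         # bag of URL indices
    X[row, s] = 1

best_score, best_eps, best_labels = -1.0, None, None
for eps in (0.8, 1.0, 1.2, 1.5):                           # self-evaluation loop
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(X)
    if len(set(labels) - {-1}) > 1:
        score = silhouette_score(X, labels)
        if score > best_score:
            best_score, best_eps, best_labels = score, eps, labels
print("best eps:", best_eps, "labels:", best_labels)
```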

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 262
197 Quantifying the Aspect of ‘Imagining’ in the Map of Dialogical Inquiry

Authors: Chua Si Wen Alicia, Marcus Goh Tian Xi, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

In a world full of rapid changes, people often need a set of skills to help them navigate an ever-changing workscape. These skills, often known as “future-oriented skills,” include learning to learn, critical thinking, understanding multiple perspectives, and knowledge creation. Future-oriented skills are typically assumed to be domain-general, applicable to multiple domains, and can be cultivated through a learning approach called Dialogical Inquiry. Dialogical Inquiry is known for its benefits of making sense of multiple perspectives, encouraging critical thinking, and developing the learner’s capability to learn. However, it currently exists as a qualitative tool, which makes it hard to track and compare learning processes over time. With these concerns, the present research aimed to develop and validate a quantitative tool for the Map of Dialogical Inquiry, focusing on the Imagining aspect of learning. The Imagining aspect has four dimensions: 1) speculative/look for alternatives, 2) risk-taking/break rules, 3) create/design, and 4) vision/imagine. To do so, an exploratory literature review was conducted to better understand the dimensions of Imagining. This included deep-diving into the history of the creation of the Map of Dialogical Inquiry and a review of how “Imagining” has been conceptually defined in the fields of social psychology, education, and beyond. Then, we synthesised and validated scales. These scales measured the dimensions of Imagining and related concepts such as creativity, divergent thinking, regulatory focus, and instrumental risk. Thereafter, items were adapted from the aforementioned scales to form the preliminary version of the Imagining Scale. For scale validation, 250 participants were recruited. A Confirmatory Factor Analysis (CFA) sought to establish the dimensionality of the Imagining Scale with an iterative item-removal procedure. Reliability and validity of the scale’s dimensions were sought through measurements of Cronbach’s alpha, convergent validity, and discriminant validity. While the CFA could not validate the distinction between Imagining’s four dimensions, the scale established high reliability, with a Cronbach’s alpha of .96. In addition, the convergent validity of the Imagining Scale was established. A lack of strong discriminant validity may point to overlaps with other components of the Map of Dialogical Inquiry as a measure of learning. Thus, a holistic approach to forming the tool, encompassing all eight different components, may be preferable.
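
The reliability check reported above can be illustrated with a short Cronbach's alpha computation, as sketched below. The simulated response matrix (one latent factor, 12 items, 250 respondents) is a placeholder, not the validation sample.

```python
# Sketch: Cronbach's alpha for an item-response matrix (respondents x items).
# The simulated responses are placeholders for illustration only.
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(250, 1))                           # one underlying factor
responses = latent + rng.normal(scale=0.5, size=(250, 12))   # 12 hypothetical items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```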

Keywords: learning, education, imagining, pedagogy, dialogical teaching

Procedia PDF Downloads 68
196 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method is a promising approach to modeling microscopic behaviors of granular materials. The quality of the simulations however depends on the model parameters utilized. The present study focuses on calibration and validation of the discrete element parameters for Cuxhaven sand based on the experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters like rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus were investigated on the void ratio of the sample generated. During the shear stage, the effect of parameters like inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio are examined. The calibration of the parameters is carried out such that the simulations reproduce the macro mechanical characteristics like dilation angle, peak stress, and stiffness. The above-mentioned calibrated parameters are then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters are applied to forecast the micromechanical behavior including the evolution of contact force chains, buckling of columns of particles, observation of non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution, and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, which shows looser structures in the top and bottom layers. Buckling of columns is not observed due to the small rolling resistance coefficient adopted for simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus the micromechanical behaviors are well described using the calibrated and validated material parameters.

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 99
195 The Psychometric Properties of the Team Climate Inventory Scale: A Validation Study in Jordan’s Collectivist Society

Authors: Suhair Mereish

Abstract:

This research examines the climate for innovation in organisations with the aim of validating the psychometric properties of the Team Climate Inventory (TCI-14) for Jordan’s collectivist society. The innovativeness of teams may be improved or obstructed by the climate within the team. Further, personal factors are considered an important element that influences the climate for innovation. Accordingly, measuring the employees' personality traits using the Big Five Inventory (BFI-44) could provide insights that aid in understanding how to improve innovation. Thus, studying the climate for innovation and its associations with personality traits is valuable, considering the insights it could offer on employee performance, job satisfaction, and well-being. Essentially, the Team Climate Inventory instrument has never been tested in Jordan’s collectivist society. Accordingly, in order to address the existing gap in the literature as a whole and, more specifically, in Jordan, it is essential to investigate its factorial structure and reliability in this particular context. It is also important to explore whether the factorial structure of the Team Climate Inventory in Jordan’s collectivist society demonstrates a similar or different structure to what has been found in individualistic ones. Lastly, examining whether there are associations between the Team Climate Inventory and personality traits of Jordanian employees is pivotal. The quantitative study was carried out among Jordanian employees working in two of the top 20 companies in Jordan: a shipping and logistics company (N=473) and a telecommunications company (N=219). To generalise the findings, this was followed by collecting data from the general population of this country (N=399). The participants completed the Team Climate Inventory. Confirmatory factor analyses and reliability tests were conducted to confirm the factorial structure, validity, and reliability of the inventory. The findings showed that the four-factor structure of the Team Climate Inventory in Jordan is similar to that found in Western cultures. The four-factor structure has been confirmed with good fit indices and reliability values. Moreover, for climate for innovation, regression analysis identified agreeableness (positive) and neuroticism (negative) from the Big Five Inventory as significant predictors. This study will contribute to knowledge in several ways. First, by examining the reliability and factorial structure in a Jordanian collectivist context rather than a Western individualistic one. Second, by comparing the Team Climate Inventory structure in Jordan with findings for the Team Climate Inventory from Western individualistic societies. Third, by studying its relationships with personality traits in that country. Furthermore, findings from this study will assist practitioners in the field of organisational psychology and development to improve the climate for innovation for employees working in organisations in Jordan. It is also expected that the results of this research will provide recommendations to professionals in the business psychology sector regarding the characteristics of employees who hold positive and negative perceptions of the workplace climate.

Keywords: big five inventory, climate for innovation, collectivism, individualism, Jordan, team climate inventory

Procedia PDF Downloads 36
194 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets

Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar

Abstract:

Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem has prevented thorough understanding due to heterogeneous heat and mass transfer. On record, severe issues have surfaced, amounting to irreplaceable losses of human life, instruments and facilities, and the huge amounts of money invested every year. The coupled effect of an external heat source and an external heat sink is an aspect yet to be articulated in combustion. A better understanding of this coupled phenomenon will lead to higher safety standards, more efficient missions and reduced hazard risks, with better design, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets, which, together with poor combustion efficiency, drives research efforts to evolve superior rockets. This has real engineering, scientific and practical significance for systems and applications. One potential application is Solid Rocket Motors (S.R.M.). The study may help in: (i) understanding the effect on the efficiency of core engines due to the primary boosters when considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) giving an idea of how the preheating of a successive stage, due to the previous stage acting as a source, may affect the mission. The present work concerns the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source/sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results so far indicate that there exists a singularity in the coupled effect: the relative dominance of the external heat sink and heat source decides the resulting rocket performance in Solid Rocket Motors (S.R.M.).

Keywords: coupled effect, heat transfer, sink, solid rocket motors, source

Procedia PDF Downloads 199
193 Pre-Implementation of Total Body Irradiation Using Volumetric Modulated Arc Therapy: Full Body Anthropomorphic Phantom Development

Authors: Susana Gonçalves, Joana Lencart, Anabela Gregório Dias

Abstract:

Introduction: In combination with chemotherapy, Total Body Irradiation (TBI) is most often used as part of the conditioning regimen prior to allogeneic hematopoietic stem cell transplantation. Conventional TBI techniques have long application times and non-conformal beam application, with no ability to individually spare organs at risk. Our institution's intention is to start using Volumetric Modulated Arc Therapy (VMAT) techniques to increase the homogeneity of the delivered radiation. As a first approach, a dosimetric plan was performed on a computed tomography (CT) scan of a Rando Alderson anthropomorphic phantom (head and torso), using a set of six arcs distributed along the phantom. However, a full-body anthropomorphic phantom is essential to carry out technique validation and implementation. Our aim is to define the physical and chemical characteristics and the ideal manufacturing procedure for the upper and lower limbs of our anthropomorphic phantom, in order to later validate TBI using VMAT. Materials and Methods: To study the best fit between our phantom and the limbs, a CT scan of the Rando Alderson anthropomorphic phantom was acquired. The CT was performed on GE Healthcare equipment (model Optima CT580 W), with a slice thickness of 2.5 mm. This CT was also used to assess the electron density of soft tissue and bone through Hounsfield unit (HU) analysis. Results: The CT images were analyzed and measurements were made for the ideal upper and lower limbs. The upper limbs should be built with the following dimensions: 43 cm length and 7 cm diameter (next to the shoulder section). The lower limbs should be built with the following dimensions: 79 cm length and 16.5 cm diameter (next to the thigh section). As expected, soft tissue and bone have very different electron densities. This is important for choosing and analyzing different materials to better represent soft tissue and bone characteristics. The approximate HU values for soft tissue and bone should be 35 HU and 250 HU, respectively. Conclusion: At the moment, several compounds are being developed based on different types of resins and additives in order to be able to control and mimic the various constituent densities of the tissues. Concurrently, several manufacturing techniques are being explored to make it possible to produce the upper and lower limbs in a simple and inexpensive way, in order to finally carry out a systematic and appropriate study of total body irradiation. This preliminary study was a good starting point to demonstrate the feasibility of TBI with VMAT.

Keywords: TBI, VMAT, anthropomorphic phantom, tissue equivalent materials

Procedia PDF Downloads 58
192 Development of a Multi-Variate Model for Matching Plant Nitrogen Requirements with Supply for Reducing Losses in Dairy Systems

Authors: Iris Vogeler, Rogerio Cichota, Armin Werner

Abstract:

Dairy farms are under pressure to increase productivity while reducing environmental impacts. Effective fertiliser management practices are critical to achieve this. Determining optimum nitrogen (N) fertilisation rates which maximise pasture growth and minimise N losses is challenging due to variability in plant requirements and in the likely near-future supply of N by the soil. Remote sensing can be used to map the N nutrition status of plants and to rapidly assess the spatial variability within a field. An algorithm is, however, lacking which relates the N status of the plants to the expected yield response to additions of N. The aims of this simulation study were (i) to develop a multi-variate model for determining the N fertilisation rate for a target percentage of the maximum achievable yield based on the pasture N concentration, (ii) to use this algorithm for guiding fertilisation rates, and (iii) to evaluate the model regarding pasture yield and N losses, including N leaching, denitrification and volatilisation. A simulation study was carried out using the Agricultural Production Systems Simulator (APSIM). The simulations were done for an irrigated ryegrass pasture in the Canterbury region of New Zealand. A multi-variate model was developed and used to determine the monthly required N fertilisation rates based on pasture N content prior to fertilisation and targets of 50, 75, 90 and 100% of the potential monthly yield. These monthly optimised fertilisation rules were evaluated by running APSIM for a ten-year period to provide yield and N loss estimates from both non-urine and urine-affected areas. A comparison with typical fertilisation rates of 150 and 400 kg N/ha/year was also carried out. Assessment of pasture yield and leaching from fertiliser and urine patches indicated a large reduction in N losses when N fertilisation rates were controlled by the multi-variate model. However, the reduction in leaching losses was much smaller when taking into account the effects of urine patches. The proposed approach, based on biophysical modelling to develop a multi-variate model for determining optimum N fertilisation rates dependent on pasture N content, is very promising. Further analysis under different environmental conditions, and validation, are required before the approach can be used to help adjust fertiliser management practices to temporal and spatial N demand based on the nitrogen status of the pasture.
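To make the decision rule concrete, the following is a minimal Python sketch of the kind of response-surface lookup the abstract describes: given the pasture N concentration and the target fraction of potential monthly yield, interpolate a fertiliser rate. The grid values and behaviour are illustrative assumptions only, not the APSIM-derived surface from the study.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative grid (NOT the study's APSIM-derived surface): monthly N rate (kg N/ha)
# as a function of pasture N concentration (%) and target fraction of potential yield.
pasture_n = np.array([2.0, 3.0, 4.0, 5.0])        # % N in herbage before fertilisation
target_frac = np.array([0.50, 0.75, 0.90, 1.00])  # fraction of potential monthly yield
rate_grid = np.array([
    [20, 45, 70, 95],   # low pasture N -> larger rates needed
    [15, 35, 55, 80],
    [10, 25, 40, 60],
    [ 5, 15, 25, 40],   # high pasture N -> little extra N required
], dtype=float)

surface = RegularGridInterpolator((pasture_n, target_frac), rate_grid)

def recommended_rate(n_conc, target):
    """Interpolate a monthly fertiliser rate from the response surface."""
    return float(surface([[n_conc, target]])[0])

print(recommended_rate(3.5, 0.90))  # ~47.5 kg N/ha on this illustrative surface
```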

Keywords: APSIM modelling, optimum N fertilization rate, pasture N content, ryegrass pasture, three-dimensional surface response function

Procedia PDF Downloads 109
191 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application

Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal

Abstract:

This study proposed four jet fuel surrogates (S1, S2, S3, and S4) with a careful selection of seven large hydrocarbon fuel components, ranging from C₉ to C₁₆, of higher molecular weight and higher boiling point, approximating the molecular size distribution of actual jet fuel. The surrogates were composed of seven components: n-propyl cyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄) and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was first developed using a decoupling methodology, combining a C₄–C₁₆ skeletal mechanism for the oxidation of the heavy hydrocarbons with a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism was compressed into 128 species and 355 reactions and can therefore be used in computational fluid dynamics (CFD) simulations. Extensive validation was performed for the individual components, including ignition delay times, species concentration profiles and laminar flame speeds, based on various fundamental experiments under wide operating conditions. For the blended mixtures, S1 was extensively validated against experimental data from shock tubes, rapid compression machines, jet-stirred reactors, counterflow flames, and premixed laminar flames over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates S2, S3 and S4 were validated against shock-tube ignition delay times only, to capture the ignition characteristics of the target fuels S-8 & GTL, IPK and RP-3, respectively. Based on the newly proposed HyChem model, another four surrogates with similar components and compositions were developed, and parallel validation data were used as for the previously developed surrogates, but at high-temperature conditions only. After testing the predictive performance of the mechanisms for the surrogates developed by the decoupling methodology, the results were compared with those of the surrogates developed with the HyChem model. All four proposed surrogates showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model has great potential for developing oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.
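Validation of this kind of surrogate mechanism against shock-tube data typically rests on computed ignition delay times. The sketch below, assuming the Cantera library (version 2.5 or later), shows the standard constant-volume reactor calculation with the ignition delay taken as the time of peak dT/dt; the mechanism file name "jet_surrogate.yaml" and the surrogate blend passed to the equivalence-ratio setting are placeholders, not the 128-species mechanism or exact compositions developed in the study.

```python
import numpy as np
import cantera as ct

def ignition_delay(mech_file, fuel, T0, p0, phi):
    """Constant-volume ignition delay, taken as the time of maximum dT/dt."""
    gas = ct.Solution(mech_file)
    gas.TP = T0, p0
    gas.set_equivalence_ratio(phi, fuel, "O2:0.21, N2:0.79")
    reactor = ct.IdealGasReactor(gas)
    sim = ct.ReactorNet([reactor])

    times, temps = [], []
    t = 0.0
    while t < 0.05:                      # 50 ms window is ample at shock-tube conditions
        t = sim.step()
        times.append(t)
        temps.append(reactor.T)

    dTdt = np.gradient(np.array(temps), np.array(times))
    return times[int(np.argmax(dTdt))]

# "jet_surrogate.yaml" and the species split below are placeholders for illustration only.
tau = ignition_delay("jet_surrogate.yaml",
                     fuel="NC12H26:0.3, NC16H34:0.2, IC16H34:0.2, C9H12:0.3",
                     T0=1100.0, p0=20 * ct.one_atm, phi=1.0)
print(f"ignition delay ~ {tau * 1e6:.0f} microseconds")
```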

Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism

Procedia PDF Downloads 109
190 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and to support flood modelling and prediction. One study showed that, even with lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. The structure of a rain gauge network depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November-February) for the period 1975-2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used and their performances compared. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that a network of 64 rain gauges gave the minimum estimated variance, so 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study. In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; in the second case, the possibility of relocating all 84 existing stations to new locations was explored to determine the optimal positions. In both cases, the relocation of stations reduced the estimated variance, confirming that location plays an important role in determining the optimal network.
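The optimization loop itself can be sketched briefly. The Python code below is an illustration rather than the study's implementation: it runs a simulated-annealing search that keeps 64 of 84 candidate gauges, and for compactness it uses a simple spatial-spread proxy as the objective, whereas the study minimises the kriging estimation variance derived from the fitted exponential semivariogram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate gauge coordinates (km); the study used 84 existing stations.
candidates = rng.uniform(0, 100, size=(84, 2))

def objective(selected):
    """Proxy criterion: negative mean nearest-neighbour distance of the kept gauges.
    The study minimises the kriging (estimation) variance instead; this stand-in
    simply rewards spatially well-spread networks."""
    pts = candidates[selected]
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return -d.min(axis=1).mean()

def simulated_annealing(n_keep=64, n_iter=5000, t0=1.0, cooling=0.999):
    selected = rng.choice(len(candidates), size=n_keep, replace=False)
    best, best_val = selected.copy(), objective(selected)
    current_val, temp = best_val, t0
    for _ in range(n_iter):
        trial = selected.copy()
        out = rng.integers(n_keep)                     # swap one kept gauge...
        pool = np.setdiff1d(np.arange(len(candidates)), trial)
        trial[out] = rng.choice(pool)                  # ...for one currently excluded
        val = objective(trial)
        # Metropolis acceptance: always accept improvements, sometimes accept worse moves.
        if val < current_val or rng.random() < np.exp((current_val - val) / temp):
            selected, current_val = trial, val
            if val < best_val:
                best, best_val = trial.copy(), val
        temp *= cooling
    return best, best_val

network, score = simulated_annealing()
print(len(network), score)
```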

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 274
189 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study

Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni

Abstract:

Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach’s α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement in scores on individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05). Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication for Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting towards the activity and participation level, and serve as an outcome measure of everyday communication. Future studies will focus on the measurement of sensitivity to change in PWA with severe non-fluent aphasia.
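For readers less familiar with the reliability figures quoted (e.g., Cronbach’s α = .95), the short Python sketch below shows how internal consistency is computed from a participants-by-items score matrix. The data generated here are random placeholders, with 18 items scored 0–3 assumed for illustration; they are not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (participants x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Placeholder data: 54 participants x 18 items scored 0-3 (assumed item range),
# generated from a common "ability" factor so the items correlate.
rng = np.random.default_rng(1)
ability = rng.normal(size=(54, 1))
demo = np.clip(np.round(1.5 + ability + rng.normal(scale=0.6, size=(54, 18))), 0, 3)

print(round(cronbach_alpha(demo), 2))
```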

Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation

Procedia PDF Downloads 106
188 Dynamic Building Simulation Based Study to Understand Thermal Behavior of High-Rise Structural Timber Buildings

Authors: Timothy O. Adekunle, Sigridur Bjarnadottir

Abstract:

Several studies have investigated the thermal behavior of buildings, but few have focused on high-rise buildings, and of those, only a few have considered high-rise structural sustainable buildings. As a result, this study investigates the thermal behavior of a high-rise structural timber building. The study aims to understand the thermal environment of a high-rise structural timber block of apartments located in East London, UK, by comparing the indoor environmental conditions at different floors (ground and upper floors) of the building. The environmental variables (temperature and relative humidity) were measured at 15-minute intervals for a few weeks in the summer of 2012 to generate data for calibration and validation of the simulated results. The study employed dynamic thermal building simulation using DesignBuilder with the EnergyPlus engine, supplemented with environmental monitoring, as the main techniques for data collection and analysis. The weather file (Test Reference Years, TRYs) for the 2000s from the weather generator produced by the Prometheus Group was used for the simulation, since the study focuses on the thermal behavior of high-rise structural timber buildings in typical rather than extreme summertime conditions. The simulated results (May-September of the 2000s) are the focus of discussion, but they are also briefly compared with the environmental monitoring results. The simulated results followed a similar trend to the findings obtained from the short period of environmental monitoring at the building. Lower temperatures are often predicted at the ground floor (at least 1.1°C lower) than at the upper floors. The simulation also predicts higher temperatures in southeast-facing spaces (at least 0.5°C higher) than in spaces with other orientations across the floors considered. There is, however, a noticeable difference in the thermal environment of spaces when the environmental monitoring results are compared with the simulated results: the field survey recorded higher temperatures in the living areas (at least 1.0°C higher), whereas the simulation predicts higher temperatures in the bedrooms (at least 0.9°C higher) than in the living areas. In addition, the simulated results show that spaces on the lower floors of high-rise structural timber buildings are predicted to provide a more comfortable thermal environment than spaces on the upper floors in summer, although this may not hold in wintertime due to the upward movement of warm air to spaces on the upper floors.
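Calibration of the simulation against the monitored summer data is usually reported with simple statistical indices. The Python sketch below is an illustration with placeholder series rather than the study's measurements; it computes two metrics commonly used for such measured-versus-simulated comparisons, the normalised mean bias error and the coefficient of variation of the RMSE.

```python
import numpy as np

def mbe_percent(measured, simulated):
    """Normalised mean bias error (%) between measured and simulated series."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return 100.0 * (simulated - measured).sum() / measured.sum()

def cv_rmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE (%), a standard calibration metric."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(np.mean((simulated - measured) ** 2))
    return 100.0 * rmse / measured.mean()

# Placeholder series standing in for monitored vs simulated hourly indoor temperatures
# over one summer week (values in deg C, not from the study).
rng = np.random.default_rng(2)
measured = 24.0 + 3.0 * np.sin(np.linspace(0, 14 * np.pi, 168))
simulated = measured + rng.normal(0.4, 0.6, size=measured.size)

print(f"MBE      = {mbe_percent(measured, simulated):+.1f} %")
print(f"CV(RMSE) = {cv_rmse_percent(measured, simulated):.1f} %")
```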

Keywords: building simulation, high-rise, structural timber buildings, sustainable, temperatures, thermal behavior

Procedia PDF Downloads 156
187 Instruction Program for Human Factors in Maintenance, Addressed to the People Working in Colombian Air Force Aeronautical Maintenance Area to Strengthen Operational Safety

Authors: Rafael Andres Rincon Barrera

Abstract:

Safety in global aviation plays a preponderant role in organizations that seek to avoid accidents in order to preserve their most precious assets (the people and the machines). Human factors-based programs have proven effective in managing human-generated risks. The importance of training on human factors in maintenance has not gone unnoticed by the Colombian Air Force (COLAF). This research, which has a mixed quantitative, qualitative and descriptive approach, addresses the COLAF's lack of a structured instruction program in human factors in aeronautical maintenance, which would serve as a tool to improve operational safety in its military air units. The research shows the trends and evolution of human factors programs in aeronautical maintenance through the analysis of a data matrix of 33 sources taken from different databases concerning the incorporation of such programs in the aeronautical industry over the last 20 years, as well as the improvements in operational safety reported after their implementation. Likewise, it compiles the normative guides in force from world aeronautical authorities for training in these programs, establishing a matrix of methodologies that may be applicable to developing a training program in human factors in maintenance. Subsequently, it describes the design, validation, and development of a human factors knowledge measurement instrument for maintenance at the COLAF that includes topics on human factors (HF), the Safety Management System (SMS), and the COLAF's aeronautical maintenance regulations. With the information obtained, a statistical analysis was performed showing the knowledge aspects to be strengthened among the staff in preparing the instruction program. Data triangulation based on the applicable methods and the weakest aspects found among maintenance personnel, cross-referenced through color coding, indicated the contents of a training program for human factors in aeronautical maintenance, adjusted to the competencies expected to be developed in the staff within a curricular format established by the COLAF. Among the most important findings are the following: the different authors dealing with human factors in maintenance agree that there is no standard model for its instruction and implementation, and that it must be adapted to the needs of the organization; safety culture increased in the companies that incorporated human factors in maintenance programs; the data obtained with the knowledge measurement instrument show a MEDIUM-LOW level of knowledge, with a score of 61.79%; and, finally, there is an opportunity to improve operational safety in the COLAF through the implementation of the human factors in maintenance training program for the technicians working in this area.

Keywords: Colombian air force, human factors, safety culture, safety management system, triangulation

Procedia PDF Downloads 111
186 A Research Study of the Inclusiveness of VR Headsets for Higher Education

Authors: Fredrick Forster, Gareth Ward, Matthew Tubby, Pamela Lithgow, Anne Nortcliffe

Abstract:

This paper presents the results of a research study in which random adult participants accessed one of four different commercially available Virtual Reality (VR) Head Mounted Displays (HMDs) and completed a post-user-experience reflection questionnaire. The research sought to understand how inclusive commercially available VR HMDs are and to identify any associated barriers that could impact the widespread adoption of the devices, specifically in Higher Education (HE). In the UK, education providers are legally required under the Equality Act 2010 to ensure that all education facilities are inclusive and that reasonable adjustments can be applied appropriately. The research specifically aimed to identify the considerations that academics and learning technologists need to make when adopting commercial VR HMDs in HE classrooms, namely cybersickness, user comfort, Interpupillary Distance, inclusiveness, and user perceptions of VR. The research approach was designed to build upon previously published research on user reflections on presence, usability, and overall HMD comfort, using quantitative and qualitative research methods by way of a questionnaire. The quantitative data included the recording of physical characteristics such as the distance between the eye pupils, known as the Interpupillary Distance (IPD). VR HMDs require each user’s IPD measurement to focus the HMD’s virtual camera output at the right position in front of the user’s eyes. In addition, the questionnaire captured users’ qualitative reflections and evaluations of the broader accessibility characteristics of the VR HMDs. The initial research activity was accomplished by enabling a random sample of visitors, staff, and students at Canterbury Christ Church University, Kent, to use a VR HMD for a set period of time and asking them to complete the post-user-experience questionnaire. The study identified that there is little correlation between users who experience cybersickness and those who experience car sickness. Also, users with a smaller-than-average IPD (typically associated with females) were able to use the VR HMDs successfully; however, users with a larger-than-average IPD reported an impeded experience. This indicates reduced inclusiveness of the tested VR HMDs for users with a higher-than-average IPD, which is typically associated with males of certain ethnicities. As action research in education, these initial findings will be used to refine the research method and conduct further investigations, with the aim of verifying and validating the accessibility of current commercial VR HMDs. The conference presentation will report on the results of the initial study and subsequent follow-up studies with a larger variety of adult volunteers.

Keywords: virtual reality, education technology, inclusive technology, higher education

Procedia PDF Downloads 41
185 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis

Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman

Abstract:

Introduction: In the realm of public health, the threat posed by Monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the observed outbreak in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of Monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of Monkeypox, with a particular focus on a variable population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using real-time data from the Nigerian 2022 recorded cases for Epi weeks 1 – 52. Transitioning from qualitative to quantitative, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), Average Cost-Effectiveness Ratio (ACER), and Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions’ economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating Monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation. The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts in combating infectious diseases.
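To illustrate the cost-effectiveness measures named above, the short Python sketch below computes ACER (cost per infection averted) and ICER (incremental cost per additional infection averted between successively more effective strategies) for three hypothetical interventions; all costs and infections-averted figures are placeholders, not results from the study.

```python
# Placeholder figures for three hypothetical single-intervention strategies
# (cost in $, infections averted relative to no control) -- not study results.
strategies = {
    "public awareness": {"cost": 120_000, "averted": 1_500},
    "case isolation":   {"cost": 260_000, "averted": 2_600},
    "vaccination":      {"cost": 540_000, "averted": 3_100},
}

def acer(s):
    """Average cost-effectiveness ratio: total cost per infection averted."""
    return s["cost"] / s["averted"]

for name, s in strategies.items():
    print(f"ACER({name}) = {acer(s):.1f} $ per infection averted")

# ICER: rank by effectiveness, compare each strategy with the next-less-effective one.
ranked = sorted(strategies.items(), key=lambda kv: kv[1]["averted"])
for (name_lo, lo), (name_hi, hi) in zip(ranked, ranked[1:]):
    icer = (hi["cost"] - lo["cost"]) / (hi["averted"] - lo["averted"])
    print(f"ICER({name_hi} vs {name_lo}) = {icer:.1f} $ per extra infection averted")
```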

Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness

Procedia PDF Downloads 45
184 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields across many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complexity of the biophysical processes but imposes strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, Principal Components Regression and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the prediction capacity. The results show that, among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side benefits: the mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine the two types of approaches.
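On the data-driven side, the evaluation protocol described (5-fold cross-validation with MAEP) can be sketched as follows in Python with scikit-learn. The climate features and yields below are synthetic placeholders, not the USDA county records used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Synthetic stand-in for the 720 county-level records (climate features -> corn yield).
rng = np.random.default_rng(3)
X = rng.normal(size=(720, 6))   # e.g. seasonal temperature/rainfall summaries
yield_t_ha = 9.0 + X @ np.array([0.8, -0.5, 0.3, 0.2, -0.1, 0.4]) + rng.normal(0, 0.6, 720)

def maep_percent(model, X, y, n_splits=5):
    """Mean absolute error of prediction, expressed as % of the mean observed yield."""
    errors = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model.fit(X[train], y[train])
        errors.append(np.abs(model.predict(X[test]) - y[test]))
    return 100.0 * np.concatenate(errors).mean() / y.mean()

rf = RandomForestRegressor(n_estimators=300, random_state=0)
print(f"Random Forest MAEP ~ {maep_percent(rf, X, yield_t_ha):.2f} %")
```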

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 209
183 Construction and Validation of Allied Bank-Teller Aptitude Test

Authors: Muhammad Kashif Fida

Abstract:

In a bank, the teller's (cash officer's) job is highly important and critical: it requires brisk, courteous customer service on the one hand and handling cash with integrity on the other. It is always challenging for recruiters to hire competent and trustworthy tellers. To the author's knowledge, no comprehensive test is available in Pakistan to assist with such recruitment, so there is a dire need for a psychometric battery to support the selection of potential candidates for the teller position. The aim of the present study was therefore to construct the ABL-Teller Aptitude Test (ABL-TApT). Three major phases were designed following the American Psychological Association's guidelines. The first phase was qualitative: test indicators were explored by content analysis of (a) teller job descriptions (n=3), (b) interviews with senior tellers (n=6), and (c) interviews with HR personnel (n=4). This content analysis yielded three broad constructs: (i) personality, (ii) integrity/honesty, and (iii) professional work aptitude. The identified indicators were operationalized, and statements (k=170) were generated from the verbatim material. These were then forwarded to five experts for review of content validity, who finalized 156 items. In the second phase, the ABL-TApT (k=156) was administered to 323 participants through a computer application. The overall reliability of the test shows a significant alpha coefficient (α=.81), and the subscales also have significant alpha coefficients. Confirmatory Factor Analysis (CFA), performed to estimate construct validity, confirmed four main factors comprising eight personality traits (confidence, organization, compliance, goal orientation, persistence, forecasting, patience, caution), one integrity/honesty factor, four professional work aptitude factors (basic numerical ability and perceptual accuracy of letters, numbers and signatures), and two customer service factors (customer service and emotional maturity). Values of GFI, AGFI, NNFI, CFI, RFI and RMSEA are within the recommended ranges, indicating good model fit. In the third phase, concurrent validity evidence was pursued. The personality and integrity parts of the scale correlate significantly with the 'conscientiousness' factor of the NEO-PI-R, reflecting strong concurrent validity; customer service and emotional maturity correlate significantly with the Bar-On EQ-i, providing further evidence of strong concurrent validity. It is concluded that the ABL-TApT is a significantly reliable and valid battery of tests that will assist in the objective recruitment of tellers and help recruiters find more suitable human resources.

Keywords: concurrent validity, construct validity, content validity, reliability, teller aptitude test, objective recruitment

Procedia PDF Downloads 206