Search results for: parallelism error


222 Polypropylene Matrix Enriched With Silver Nanoparticles From Banana Peel Extract For Antimicrobial Control Of E. coli and S. epidermidis To Maintain Fresh Food

Authors: Michail Milas, Aikaterini Dafni Tegiou, Nickolas Rigopoulos, Eustathios Giaouris, Zaharias Ioannou

Abstract:

Nanotechnology, a relatively new scientific field, addresses the manipulation of nanoscale materials and devices, which are governed by unique properties, and is applied in a wide range of industries, including food packaging. The incorporation of nanoparticles into polymer matrices used for food packaging is a highly researched field today. One such combination is silver nanoparticles with polypropylene (PP). In the present study, the silver nanoparticles were synthesized by a natural method using a ripe banana peel extract (BPE). This method is superior to others as it stands out for its environmental friendliness, high efficiency, and low cost. In particular, a 1.75 mM AgNO₃ solution was used, with a BPE concentration of 1.7% v/v, an incubation period of 48 hours at 70°C, and a pH of 4.3; after its preparation, the polypropylene films were soaked in it. For the PP films, random PP spheres were melted at 170-190°C into molds of 0.8 cm diameter. This polymer was chosen as it is suitable for plastic parts and reusable plastic containers of various types that are intended to come into contact with food without compromising its quality and safety. The antimicrobial test against Escherichia coli DFSNB1 and Staphylococcus epidermidis DFSNB4 was performed on the films. The films with silver nanoparticles showed at least a 100-fold reduction for both strains compared to those without silver nanoparticles. The limit of detection, taken as the lower limit of the vertical error bars in the presence of nanoparticles, is 3.11. The main reasons that led to the adsorption of nanoparticles are the porous nature of polypropylene and the adsorption capacity of nanoparticles on the surface of the films due to hydrophobic-hydrophilic forces. The most significant parameters that contributed to the results of the experiment include the ripening stage of the banana during the preparation of the plant extract, the temperature and residence time of the nanoparticle solution in the oven, the residence time of the polypropylene films in the nanoparticle solution, the number of nanoparticles inoculated on the films and, finally, the time the films stayed in the refrigerator so that they could dry and be ready for antimicrobial testing.

Keywords: antimicrobial control, banana peel extract, E. coli, natural synthesis, microbe, plant extract, polypropylene films, S. epidermidis, silver nanoparticles, random PP

Procedia PDF Downloads 176
221 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
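
As a rough illustration of the pipeline described above (dimensionality reduction followed by a regression function), the sketch below trains a random forest on PCA-compressed synthetic features standing in for the Newman-model-generated database. scikit-learn is assumed, and all names and values are hypothetical, not the study's actual data.

```python
# A minimal sketch: PCA feature extraction + random forest regression,
# with synthetic data standing in for the electrochemical-model database.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))  # non-destructive aging indicators per simulated cell
y = 1000 + 50 * X[:, :5].sum(axis=1) + rng.normal(scale=10, size=2000)  # remaining life (cycles)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=10).fit(X_train)          # extract pivotal components
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(pca.transform(X_train), y_train)

err = mean_absolute_percentage_error(y_test, model.predict(pca.transform(X_test)))
print(f"relative error on held-out cells: {err:.1%}")
```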

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 69
220 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty

Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton

Abstract:

Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length, with leg length restoration being the most frequently described. We describe a "quartered head technique" (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA in order to ensure reconstruction of leg length, offset, and stem version, such that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out, leaving 124 hips in the final analysis. Power analysis indicated a minimum of 37 patients was required. All operations were performed using an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double taper CPT® stems (Zimmer, Swindon, UK). Both cemented and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with the contralateral native hip, and the difference was then calculated. For the determination of leg length discrepancy (LLD), we used the method described by Williamson & Reckling, which has been shown to be reproducible with a measurement error of ±1 mm. As a reference, the inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used. A discrepancy of less than 6 mm LLD was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers. Results: The mean post-operative difference in leg length from the contralateral leg was +3.58 mm. 84% of patients (104/124) had an LLD within ±6 mm of the contralateral limb. The mean post-operative difference in offset from the contralateral leg was +3.88 mm (range -15 to +9 mm, median 3 mm). 90% of patients (112/124) were within ±6 mm offset of the contralateral limb. No statistical difference was noted between observer measurements. Conclusion: The QHT provides a simple, inexpensive, yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or other described techniques may enable surgeons to reduce even further the discrepancies between the pre-operative state and the post-operative outcome.

Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique

Procedia PDF Downloads 81
219 The Use of Corpora in Improving Modal Verb Treatment in English as a Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprise the textbook corpus. All the data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions, to examine whether the textbooks reflect the authentic use of English. The second part analyzed the learner corpus in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs, to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, and could, while can, will, should, and could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on different meanings show that will, would, and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. Besides, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve EFL textbook presentation of modal verbs, so that textbooks provide not only authentic language used in natural discourse but also appropriate design tailored to the needs of target learners.
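
A rough sketch of the distributional step is shown below: counting the nine modal verbs in a plain-text sample and normalizing per million words. This is an illustrative stand-in, not the CQPweb or WordSmith Tools workflow used in the study.

```python
# A minimal sketch (assumption: plain-text corpus input; frequencies
# reported per million words, as is common in corpus studies).
from collections import Counter
import re

MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

def modal_distribution(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    return {m: counts[m] * 1_000_000 / max(len(tokens), 1) for m in MODALS}

sample = "You can do it. She said she would come, and we must be ready."
print(modal_distribution(sample))
```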

Keywords: English as a Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 142
218 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions, and gross domestic product (GDP) for Iran, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All variables in this study, except the CO2 emissions, show significant effects on GDP in the country over the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on GDP, while the consumption of electricity and coal have adverse impacts on GDP in the long term. In the short run, electricity use enhances GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations can differ by the source of energy in the case of Iran over the period 1980-2010. However, there is no significant relationship between the CO2 emissions and GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
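
A minimal sketch of the econometric sequence (ADF test, Johansen cointegration, VECM) using statsmodels is shown below, with random walks standing in for the actual Iranian annual series:

```python
# A minimal sketch (assumptions: statsmodels available; synthetic random
# walks stand in for the 1980-2010 annual GDP, CO2 and energy series).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(31, 3)).cumsum(axis=0),
                  columns=["gdp", "co2", "energy"])

# Step 1: ADF unit-root test on each series
for col in df:
    pvalue = adfuller(df[col])[1]
    print(f"{col}: ADF p-value = {pvalue:.3f}")

# Step 2: Johansen cointegration test (trace statistics)
print("trace statistics:", coint_johansen(df, det_order=0, k_ar_diff=1).lr1)

# Step 3: VECM gives short-run dynamics and the long-run relation
vecm_res = VECM(df, k_ar_diff=1, coint_rank=1).fit()
print(vecm_res.alpha)  # adjustment (short-run) coefficients
print(vecm_res.beta)   # cointegration (long-run) vector
```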

Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis

Procedia PDF Downloads 592
217 Landsat Data from Pre-Crop Season to Estimate the Area to Be Planted with Summer Crops

Authors: Valdir Moura, Raniele dos Anjos de Souza, Fernando Gomes de Souza, Jose Vagner da Silva, Jerry Adriani Johann

Abstract:

The estimate of the area of land to be planted with annual crops and its stratification by municipality are important variables in crop forecasting. Nowadays in Brazil, this information is obtained by the Brazilian Institute of Geography and Statistics (IBGE) and published in the report Assessment of the Agricultural Production. Due to the high cloud cover in the main crop growing season (October to March), it is difficult to acquire good orbital images. Thus, one alternative is to work with remote sensing data from dates before the crop growing season. This work presents the use of multitemporal Landsat data gathered in July and September (before the summer growing season) in order to estimate the area of land to be planted with summer crops in an area of São Paulo State, Brazil. Geographic Information Systems (GIS) and digital image processing techniques were applied for the treatment of the available data. Supervised and unsupervised classifications were used for data in digital number and reflectance formats and for the multitemporal Normalized Difference Vegetation Index (NDVI) images. The objective was to discriminate the tracts with a higher probability of being planted with summer crops. Classification accuracies were evaluated using a sampling system developed specifically for this study region. The estimated areas were corrected using the error matrix derived from these evaluations. The classification techniques presented an excellent level of accuracy according to the kappa index. The proportion of crops stratified by municipality was derived from field work during the crop growing season. These proportion coefficients were applied to the area of land to be planted with summer crops (derived from Landsat data). Thus, it was possible to derive the area of each summer crop by municipality. The discrepancies between official statistics and our results were attributed to the sampling and stratification procedures. Nevertheless, this methodology can be improved in order to provide good crop area estimates using remote sensing data, despite the cloud cover during the growing season.
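
For the NDVI step, a minimal sketch of the index computation is shown below, assuming red and near-infrared reflectance arrays already loaded (e.g., from Landsat bands via a raster library); the sample values are hypothetical.

```python
# A minimal sketch: NDVI = (NIR - Red) / (NIR + Red) on reflectance arrays.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index with a zero-division guard."""
    red = red.astype("float64")
    nir = nir.astype("float64")
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

red = np.array([[0.10, 0.12], [0.30, 0.28]])
nir = np.array([[0.45, 0.50], [0.32, 0.30]])
print(ndvi(red, nir))  # higher values indicate denser green vegetation
```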

Keywords: area intended for summer culture, estimated area planted, agriculture, Landsat, planting schedule

Procedia PDF Downloads 150
216 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time, and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data-generating process changes over time. Hence, in this paper, we give a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
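
One way to picture the tradeoff is a stylized window-selection rule that minimizes an estimated mean squared error, where variance shrinks with more data while a drift-induced bias grows with it. The sketch below is an illustration under these toy assumptions, not the paper's formal theory.

```python
# A minimal sketch (assumptions: a mean estimate from a slowly drifting
# process; window length n chosen to minimize MSE = variance + bias²).
import numpy as np

T = 2000

def estimated_mse(n: int, sigma2: float = 1.0, slope: float = 1.0 / T) -> float:
    variance = sigma2 / n        # shrinks as more data points are used
    bias = slope * n / 2         # grows as old, unrepresentative data enters
    return variance + bias ** 2

windows = np.arange(10, 1000)
best_n = windows[np.argmin([estimated_mse(n) for n in windows])]
print(f"optimal window length: {best_n} observations")
```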

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 148
215 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield, and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured over time at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it took the sample to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water. Error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange, and watermelon. Thermal conductivity values of 0.593, 0.598, and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷, and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange, and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, as it measured the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between literature standards and the averages estimated with the developed instrument for each sample investigated.
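
A minimal sketch of the line-heat-source evaluation follows: under the standard theory, the temperature rise grows linearly with ln(t), and conductivity follows as k = q / (4π × slope), with q the power per unit length. Synthetic readings stand in for the probe data.

```python
# A minimal sketch (assumption: idealized line-heat-source behavior;
# the synthetic temperatures are generated, not measured).
import numpy as np

q = 16.33                       # power input per unit length, W/m (from the paper)
t = np.arange(4, 244, 4)        # sampling times, s (4 s interval for 240 s)

k_true = 0.60                   # stand-in conductivity for the synthetic data
slope_true = q / (4 * np.pi * k_true)
temps = 25.0 + slope_true * np.log(t)   # idealized probe temperature rise

slope, _ = np.polyfit(np.log(t), temps, 1)   # slope of rise vs. ln(t)
k = q / (4 * np.pi * slope)
print(f"thermal conductivity: {k:.3f} W/(m·°C)")
```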

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 357
214 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical settings to obtain detailed anatomical and functional information. However, MRI scans can be expensive, time-consuming, and often require the use of anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity. Minimizing the duration and frequency of anesthesia is, therefore, crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of MRI of the abdomen of mice sub-sampled at levels below those defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse acquired experimentally in a 4.7 Tesla MRI scanner for small animals using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment techniques: RMSE (Root Mean Square Error), PSNR (Peak Signal-to-Noise Ratio), and SSIM (Structural Similarity Index Measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction of up to 70% in image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
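
The three quality metrics can be sketched as follows, assuming scikit-image is available and using random arrays as stand-ins for the reference and reconstructed images:

```python
# A minimal sketch: RMSE, PSNR and SSIM between a reference image and a
# (here, artificially perturbed) reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                 # stand-in for the fully sampled MRI
reconstructed = reference + rng.normal(scale=0.02, size=reference.shape)

rmse = np.sqrt(np.mean((reference - reconstructed) ** 2))
psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
ssim = structural_similarity(reference, reconstructed, data_range=1.0)
print(f"RMSE={rmse:.4f}, PSNR={psnr:.2f} dB, SSIM={ssim:.4f}")
```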

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 73
213 AI Predictive Modeling of Excited State Dynamics in OPV Materials

Authors: Pranav Gunhal, Krish Jhurani

Abstract:

This study tackles the significant computational challenge of predicting excited state dynamics in organic photovoltaic (OPV) materials—a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from TDDFT-calculated values. The model also correctly identified key electronic transitions contributing to the excited state dynamics in 92% of the test cases, signifying a substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. The implementation of this AI model is estimated to reduce the computational cost of predicting excited state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions.
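
A minimal sketch of a transformer-based lifetime regressor in PyTorch is shown below; the tokenization scheme (e.g., SMILES-style tokens), the dimensions, and the mean-pooling head are hypothetical choices, not the architecture reported in the study.

```python
# A minimal sketch: a transformer encoder regressing an excited-state
# lifetime (ps) from tokenized molecular structures. All sizes are toy values.
import torch
import torch.nn as nn

class LifetimeRegressor(nn.Module):
    def __init__(self, vocab_size=64, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)   # predicted lifetime (ps)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.head(h.mean(dim=1)).squeeze(-1)  # pool over the sequence

model = LifetimeRegressor()
tokens = torch.randint(0, 64, (8, 40))   # a batch of 8 token sequences
print(model(tokens).shape)               # torch.Size([8])
```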

Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling

Procedia PDF Downloads 117
212 Stature and Gender Estimation Using Foot Measurements in South Indian Population

Authors: Jagadish Rao Padubidri, Mehak Bhandary, Sowmya J. Rao

Abstract:

Introduction: The significance of the human foot and its measurements in identifying an individual has been demonstrated many times by different studies in different geographical areas, and its association with the stature and gender of the individual has been justified by much research. In our study, we used different foot measurements, including the length, width, malleol height, and navicular height, to establish their association with stature and gender and to determine their accuracy. The purpose of this study is to show the relation of foot measurements to stature and gender, and to derive multiple and logistic regression equations for stature and gender estimation in the South Indian population. Materials and Methods: The subjects for this study were 200 South Indian students, of which 100 were females and 100 were males, aged between 18 and 24 years. The data for the present study included the stature, foot length, foot breadth, foot malleol height, and foot navicular height of both the right and left foot. Descriptive statistics, t-tests, and Pearson correlation coefficients were derived between stature, gender, and foot measurements. The stature was estimated from right and left foot measurements for both the male and female South Indian population using multiple regression analysis, with logistic regression analysis used for gender estimation. Results: The mean values of stature and of the right and left foot measurements were higher in the male population than in the female population. LFL (left foot length) is greater than RFL (right foot length) in the male group, but in the female group the lengths of both feet are almost equal [RFL = 226.6, LFL = 227.1]. There is not much difference in the means of RFW (right foot width) and LFW (left foot width) in either gender. Significant differences were seen in the mean values of the malleol and navicular heights of the right and left feet in the male gender. No such difference was seen in female subjects. Conclusions: The study successfully demonstrated the correlation of foot length with stature in all three study groups for both the right and left foot. Next in importance are foot width and malleol height in estimating stature among the male and female groups. The navicular height of both the right and left foot showed a poor relationship with stature in both the male and female groups. Multiple regression equations for both right and left foot measurements to estimate stature were derived, with standard errors ranging from 11-12 cm in males and 10-11 cm in females. The SEE was 5.8 when the male and female groups were pooled together. The logistic regression model derived to determine gender showed 85% accuracy and 92.5% accuracy using right and left foot measurements, respectively. We believe that stature and gender can be estimated from foot measurements in the South Indian population.
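
The two regression steps can be sketched with scikit-learn as follows, using synthetic measurements in place of the study data; the coefficients below are illustrative, not the published equations.

```python
# A minimal sketch: multiple linear regression for stature and logistic
# regression for gender, on synthetic foot measurements.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 200
foot_length = rng.normal(240, 15, n)     # mm
foot_width = rng.normal(95, 8, n)        # mm
X = np.column_stack([foot_length, foot_width])

stature = 60 + 0.4 * foot_length + rng.normal(0, 10, n)        # cm, toy relation
sex = (foot_length + rng.normal(0, 10, n) > 240).astype(int)   # toy labels

stature_model = LinearRegression().fit(X, stature)
sex_model = LogisticRegression().fit(X, sex)
print("stature coefficients:", stature_model.coef_)
print("gender classification accuracy:", sex_model.score(X, sex))
```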

Keywords: foot length, gender, stature, South Indian

Procedia PDF Downloads 335
211 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method to find the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process to navigate the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, where meticulous error identification, correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model. Originally starting at an accuracy of 0.32, the model underwent substantial refinement, ultimately achieving an accuracy of 0.88. This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
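
A minimal sketch of the refinement-then-fit idea follows, with a simple 3-sigma outlier filter on a synthetic doped-LLZO table; the column names and the model choice are hypothetical, not the study's actual pipeline.

```python
# A minimal sketch: outlier removal on a synthetic conductivity dataset,
# then a cross-validated regression fit.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dopant_radius": rng.normal(0.9, 0.1, 300),     # hypothetical features
    "dopant_fraction": rng.uniform(0, 0.5, 300),
})
df["log_conductivity"] = -4 + 2 * df["dopant_fraction"] + rng.normal(0, 0.1, 300)
df.loc[::50, "log_conductivity"] += 5               # inject erroneous entries

# Refinement: drop rows more than 3 standard deviations from the mean
z = (df["log_conductivity"] - df["log_conductivity"].mean()) / df["log_conductivity"].std()
clean = df[np.abs(z) < 3]

model = GradientBoostingRegressor(random_state=0)
score = cross_val_score(model, clean[["dopant_radius", "dopant_fraction"]],
                        clean["log_conductivity"], cv=5, scoring="r2").mean()
print(f"cross-validated R² after cleaning: {score:.2f}")
```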

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 61
210 In-Flight Radiometric Performance Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight characterization. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2ⁿ − 1 (n is the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr⁻¹·m⁻²·µm⁻¹ and −3.5 W·sr⁻¹·m⁻²·µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹·m⁻²·µm⁻¹ and the high point is 30.5 W·sr⁻¹·m⁻²·µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr⁻¹·m⁻²·µm⁻¹; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹·m⁻²·µm⁻¹. Moreover, in order to validate the results, a comparison of the measured radiance with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, is performed. It is noted that the relative error for the calibration is within 6.6%.
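
The calibration fit and the derived dynamic-range quantities can be sketched as follows, with stand-in DN/radiance pairs and an assumed 12-bit quantization depth (both hypothetical, not the paper's measurements):

```python
# A minimal sketch: least-squares fit of L = G*DN + B and the derived
# dynamic range and linearity, on made-up target data.
import numpy as np

dn = np.array([520, 1480, 2610])    # stand-in digital numbers (3 gray targets)
L = np.array([0.8, 8.7, 18.2])      # stand-in simulated radiances, W·sr⁻¹·m⁻²·µm⁻¹

G, B = np.polyfit(dn, L, 1)         # in-flight calibration coefficients
n_bits = 12                         # assumed quantization depth
L_high = G * (2**n_bits - 1) + B    # high point of the dynamic range
L_low = B                           # low point of the dynamic range
linearity = np.corrcoef(dn, L)[0, 1]

print(f"G={G:.4f}, B={B:.2f}, range=[{L_low:.2f}, {L_high:.2f}], "
      f"linearity={linearity:.4f}")
```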

Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity

Procedia PDF Downloads 270
209 Efficacy of Opicapone and Levodopa with Different Levodopa Daily Doses in Parkinson’s Disease Patients with Early Motor Fluctuations: Findings from the Korean ADOPTION Study

Authors: Jee-Young Lee, Joaquim J. Ferreira, Hyeo-il Ma, José-Francisco Rocha, Beomseok Jeon

Abstract:

The effective management of wearing-off is a key driver of medication changes for patients with Parkinson's disease (PD) treated with levodopa (L-DOPA). While L-DOPA is well tolerated and efficacious, its clinical utility over time is often limited by the development of complications such as dyskinesia. Still, a common first-line option is to adjust the daily L-DOPA dose, followed by adjunctive therapies, usually accounted for via the L-DOPA equivalent daily dose (LEDD). The LEDD conversion formulae are a tool used to compare the equivalence of anti-PD medications. The aim of this work is to compare the effects of opicapone (OPC) 50 mg, a catechol-O-methyltransferase (COMT) inhibitor, and an additional 100 mg dose of L-DOPA in reducing off time in PD patients with early motor fluctuations receiving different daily L-DOPA doses. OPC was previously found to be well tolerated and efficacious in the advanced PD population. This work utilized patients' home diary data from a 4-week Phase 2 pharmacokinetics clinical study. The Korean ADOPTION study randomized (1:1) patients with PD and early motor fluctuations treated with up to 600 mg of L-DOPA given 3-4 times daily. The main endpoint was the change from baseline in off time in the subgroup of patients receiving 300-400 mg/day L-DOPA at baseline plus OPC 50 mg and in the subgroup receiving >300 mg/day L-DOPA at baseline plus an additional dose of L-DOPA 100 mg. Of the 86 patients included in this subgroup analysis, 39 received OPC 50 mg and 47 received L-DOPA 100 mg. At baseline, both the L-DOPA total daily dose and the LEDD were lower in the L-DOPA 300-400 mg/day plus OPC 50 mg group than in the L-DOPA >300 mg/day plus L-DOPA 100 mg group. However, at Week 4, the LEDD was similar between the two groups. The mean (±standard error) reduction in off time was approximately three-fold greater for the OPC 50 mg group than for the L-DOPA 100 mg group, being -63.0 (14.6) minutes for patients treated with L-DOPA 300-400 mg/day plus OPC 50 mg, and -22.1 (9.3) minutes for those receiving L-DOPA >300 mg/day plus L-DOPA 100 mg. In conclusion, despite similar LEDD, OPC demonstrated a significantly greater reduction in off time when compared to an additional 100 mg L-DOPA dose. The effect of OPC appears to be LEDD-independent, suggesting that caution should be exercised when employing LEDD to guide treatment decisions, as it does not take into account the timing of each dose, the onset and duration of therapeutic effect, or individual responsiveness. Additionally, OPC could be used to keep the L-DOPA dose as low as possible for as long as possible, to avoid the development of motor complications, which are a significant source of disability.

Keywords: opicapone, levodopa, pharmacokinetics, off-time

Procedia PDF Downloads 62
208 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans

Authors: O. Ekrami, P. Claes, S. Van Dongen

Abstract:

Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or 'genetic quality', based on the assumption that perfect symmetry is ideally the expected outcome for a bilateral organism. Further studies have also investigated the possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, and the structure of interest is usually represented by a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we used high-resolution 3D scans of human faces and developed an algorithm to measure and localize FA, taking a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of our mapping step. The protocol's accuracy in measuring and localizing FA is assessed using simulated faces with known amounts of asymmetry added to them. The validation results show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With such an algorithm, the additional captured information on asymmetry can be used to improve studies of FA as an indicator of fitness or attractiveness. This algorithm can be of particular benefit in studies with a high number of subjects due to its automated and time-efficient nature. Additionally, taking a spatially dense approach provides us with information about the locality of FA, which is impossible to obtain using conventional methods. It also enables us to analyze the asymmetry of morphological structures in a multivariate manner; this can be achieved by using methods such as Principal Component Analysis (PCA) or Factor Analysis, which can be a step towards understanding the underlying processes of asymmetry. This method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate, and multivariate framework.
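
A minimal sketch of the per-vertex asymmetry scoring is shown below, assuming a face given as an (N, 3) vertex array with known left-right vertex pairing; a full FA analysis would additionally separate directional asymmetry, which is omitted here.

```python
# A minimal sketch: reflect across the midsagittal (x = 0) plane, relabel
# paired vertices, and measure the per-vertex displacement.
import numpy as np

def asymmetry_scores(vertices: np.ndarray, pairs: np.ndarray) -> np.ndarray:
    mirrored = vertices.copy()
    mirrored[:, 0] *= -1            # reflect x-coordinates
    mirrored = mirrored[pairs]      # swap left/right paired vertices
    return np.linalg.norm(vertices - mirrored, axis=1)

verts = np.array([[-1.0, 0.0, 0.1], [1.05, 0.0, 0.1], [0.0, 1.0, 0.2]])
pairs = np.array([1, 0, 2])         # vertex 0 pairs with 1; vertex 2 is midline
print(asymmetry_scores(verts, pairs))   # nonzero values localize the asymmetry
```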

Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing

Procedia PDF Downloads 140
207 Development of Three-Dimensional Groundwater Model for Al-Corridor Well Field, Amman–Zarqa Basin

Authors: Moayyad Shawaqfah, Ibtehal Alqdah, Amjad Adaileh

Abstract:

The Corridor area (400 km²) lies 60 km to the north-east of Amman, between 285-305 E longitude and 165-185 N latitude (according to the Palestine Grid). It has been subjected to exploitation of groundwater from eleven new wells since 1999, with a total discharge of 11 MCM in addition to the previous discharge rate from the well field of 14.7 MCM. Consequently, the aquifer balance has been disturbed and a major decline in water level has occurred. Therefore, suitable groundwater resources management is required to overcome the problems of over-pumping and its effect on groundwater quality. The three-dimensional groundwater flow model Processing Modflow for Windows Pro (PMWIN PRO, 2003) has been used in order to calculate the groundwater budget and aquifer characteristics, and to predict the aquifer response under different stresses for the next 20 years (to 2035). The model was calibrated for steady-state conditions by trial-and-error calibration, performed by matching observed and calculated initial heads for the year 2001. Drawdown data for the period 2001-2010 were used to calibrate the transient model by matching calculated with observed values; after that, the transient model was validated using the drawdown data for the period 2011-2014. The hydraulic conductivities of the Basalt-A7/B2 aquifer system range between 1.0 and 8.0 m/day. Low conductivity values were found in the north-west and south-western parts of the study area, a high conductivity value was found at the north-western corner of the study area, and the average storage coefficient is about 0.025. The water balance for the Basalt and B2/A7 formations at steady-state conditions closed with a discrepancy of 0.003%. The major inflows come from Jebal Al Arab through the basalt and the limestone (B2/A7) aquifer, about 12.28 MCM/a, and from excess rainfall, about 0.68 MCM/a. The major outflows from the Basalt-B2/A7 aquifer system are toward the Azraq basin, about 5.03 MCM/a, and leakage to the A1/6 aquitard, about 7.89 MCM/a. Four scenarios were performed to predict the aquifer system's response under different conditions. Scenario no. 2, reducing the abstraction rates by 50% of the current withdrawal rate (from 25.08 MCM/a to 12.54 MCM/a), was found to be the best one. Under this scenario, the maximum drawdowns decreased to about 7.67 and 8.38 m in the years 2025 and 2035, respectively.
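
The cell-by-cell head computation underlying a MODFLOW-type model can be pictured with a toy steady-state finite-difference solver; the grid, the boundary heads, and the homogeneity assumption below are illustrative only, not the calibrated model.

```python
# A minimal sketch: Jacobi-style relaxation of the Laplace equation for
# hydraulic head on a small grid with fixed-head boundaries.
import numpy as np

h = np.zeros((50, 50))
h[:, 0] = 78.0                      # fixed head, western boundary (m)
h[:, -1] = 13.0                     # fixed head, eastern boundary (m)
h[0, :] = np.linspace(78, 13, 50)   # linearly varying north/south boundaries
h[-1, :] = np.linspace(78, 13, 50)

for _ in range(5000):               # iterate until interior heads settle
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                            + h[1:-1, :-2] + h[1:-1, 2:])

print(f"interior head range: {h[1:-1, 1:-1].min():.1f}-{h[1:-1, 1:-1].max():.1f} m")
```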

Keywords: Amman/Zarqa Basin, Jordan, groundwater management, groundwater modeling, modflow

Procedia PDF Downloads 216
206 Utilizing Topic Modelling for Assessing mHealth Apps' Risks to Users' Health before and during the COVID-19 Pandemic

Authors: Pedro Augusto Da Silva E Souza Miranda, Niloofar Jalali, Shweta Mistry

Abstract:

BACKGROUND: Software developers utilize automated solutions to scrape users' reviews and extract meaningful knowledge to identify problems (e.g., bugs, compatibility issues) and possible enhancements (e.g., users' requests) to their solutions. However, most of these solutions do not consider the health risk aspects to users. Recent works have shed light on the importance of including health risk considerations in the development cycle of mHealth apps to prevent harm to their users. PROBLEM: The COVID-19 pandemic in Canada (and the world) is currently forcing physical distancing upon the general population. This new lifestyle has made the usage of mHealth applications more essential than ever, with a projected market forecast of 332 billion dollars by 2025. However, this surge in mHealth usage comes with possible risks to users' health due to mHealth app problems (e.g., a wrong insulin dosage indication due to a UI error). OBJECTIVE: This work aims to raise awareness amongst mHealth developers of the importance of considering risks to users' health within their development lifecycle. Moreover, this work also aims to help mHealth developers with a proof-of-concept (POC) solution to understand, process, and identify possible health risks to users of mHealth apps based on users' reviews. METHODS: We conducted a mixed-method study design. We developed a crawler to mine the negative reviews posted by Google Play store users for two samples of mHealth apps (My Fitness, Medisafe). For each mHealth app, we performed the following steps: the reviews were divided into two groups, before COVID-19 (reviews submitted before 15 Feb 2019) and during COVID-19 (reviews submitted from 16 Feb 2019 till Dec 2020); for each period, the Latent Dirichlet Allocation (LDA) topic model was used to identify the different clusters of reviews based on similar review topics; the topics before and during COVID-19 were then compared, and significant differences in the frequency and severity of similar topics were identified. RESULTS: We successfully scraped, filtered, processed, and identified health-related topics using both qualitative and quantitative approaches. The results demonstrated the similarity between topics before and during COVID-19.
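
A minimal sketch of the LDA step with scikit-learn follows, using a few made-up negative reviews in place of the scraped Google Play data:

```python
# A minimal sketch: LDA topic clustering over a bag-of-words matrix.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "app crashed and lost my medication schedule",
    "wrong dosage reminder nearly caused a problem",
    "sync fails after the latest update",
    "reminders stopped working after update",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]   # top-weighted words
    print(f"topic {k}: {', '.join(top)}")
```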

Keywords: natural language processing (NLP), topic modeling, mHealth, COVID-19, software engineering, telemedicine, health risks

Procedia PDF Downloads 130
205 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two-Dimensional Enclosure

Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik

Abstract:

The Lattice Boltzmann Method has proven advantageous in simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper includes the study of three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400, and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Benard convection for two test cases, horizontal and vertical temperature differences, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10³ ≤ Ra ≤ 10⁶), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two different test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model, to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with the existing literature. The simulated results demonstrate excellent agreement with the existing benchmark solutions within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems.
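
A minimal sketch of the SRT/BGK update (streaming, moments, collision) on a D2Q9 lattice with periodic boundaries is shown below; it illustrates the scheme only and is not the cavity or convection solver used in the paper.

```python
# A minimal sketch: one streaming-collision loop of the D2Q9 BGK scheme
# on a small periodic grid starting from a fluid at rest.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.6                                       # single relaxation time

nx, ny = 32, 32
f = np.ones((9, nx, ny)) * w[:, None, None]     # equilibrium at rest (rho = 1)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for _ in range(100):
    # streaming: shift each population along its lattice velocity (periodic)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward local equilibrium
    f += (equilibrium(rho, ux, uy) - f) / tau

print(f"mass conserved: {rho.sum():.2f} (expected {nx * ny})")
```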

Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT

Procedia PDF Downloads 128
204 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion has many practical applications, especially in the case of turbulent flow. The phenomenon is more pronounced when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage at the Thermoelectric Power-plant Oradea (still closed today). Basic Methods: Solving the theoretical turbulent thermal pollution problem is particularly difficult. By using semi-empirical theories, or by simplifying the assumptions made, based on experimental measurements, the elaboration of a mathematical model for further numerical simulations can be assured. The three zones of flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined with correction factors, based on experimental measurements. Major Findings/Results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation frequency vary proportionally with the distance to the wall. For the calculation of the average temperature, a solution similar to that for the velocity is used, by analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) having 4 different diameters, between 200-500 mm, as at the Thermoelectric Power-plant Oradea. Conclusions: A superposition of the molecular and turbulent viscosities was made, followed by the addition of the molecular and turbulent transfer coefficients, as necessary to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared with flow without a temperature gradient. The obtained results are within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation is obtained between the Stanton number and the Prandtl number for a specific flow (with associated Reynolds number).
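
As an example of a semi-empirical Stanton-Prandtl correlation of the kind discussed, the sketch below evaluates the classical Colburn analogy St = (Cf/2)·Pr^(-2/3) with a Blasius friction factor; this is a textbook correlation for turbulent pipe flow, not the correction-factor model developed in the paper.

```python
# A minimal sketch: Colburn analogy with the Blasius skin-friction law.
def stanton(re: float, pr: float) -> float:
    cf = 0.079 * re ** -0.25            # Blasius skin-friction coefficient
    return (cf / 2.0) * pr ** (-2.0 / 3.0)

for re in (1e4, 1e5, 1e6):
    print(f"Re={re:.0e}: St={stanton(re, pr=7.0):.5f}")   # Pr ≈ 7 for water
```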

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 164
203 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale

Authors: Marco Sewald

Abstract:

Comparing the present to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. While it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in a first step by re-using current surveys conducted by different researchers within the population of U.S. citizens residing abroad during the last years, surveys questioning the personal situation in the context of tax, compliance, citizenship, and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error: the degree to which observable variables describe the latent variables. Moreover, SEM seems very appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered a high correlation, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and gain the desired results.
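
A minimal sketch of an SEM with one latent variable follows, using the semopy package (one Python SEM implementation) and synthetic survey columns; all variable names are hypothetical, not the actual survey items.

```python
# A minimal sketch: measurement model (outer) + structural model (inner)
# estimated with semopy on generated data.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
n = 500
burden = rng.normal(size=n)                 # latent "compliance burden"
data = pd.DataFrame({
    "tax_q1": burden + rng.normal(scale=0.5, size=n),   # observed indicators
    "tax_q2": burden + rng.normal(scale=0.5, size=n),
    "tax_q3": burden + rng.normal(scale=0.5, size=n),
    "renounce_intent": 0.7 * burden + rng.normal(scale=0.5, size=n),
})

desc = """
# measurement model (outer): latent variable and its indicators
ComplianceBurden =~ tax_q1 + tax_q2 + tax_q3
# structural model (inner): latent variable predicts the outcome
renounce_intent ~ ComplianceBurden
"""
model = Model(desc)
model.fit(data)
print(model.inspect())   # loadings, path coefficients, estimates
```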

Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating

Procedia PDF Downloads 221
202 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data is critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights: 40 m, 80 m, and 120 m above ground. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition using a UAS at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding coordinates of the ground control panels. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro UAS. With calibration for the same UAS, the accuracies were significantly improved, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK UAS at 40 m height. The accuracy of elevation determination decreased as the flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data for the 40 m dataset when substantial surveyed ground control panels are used. The Phantom 4 RTK UAS provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation in precision agriculture applications.

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 164
201 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for some defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem; this process is called optimization. Generally, there is a function called the "objective function" that is to be maximized or minimized by choosing input parameters, called "degrees of freedom", within an allowed domain called the "search space", and computing the values of the objective function for these inputs. The problem becomes more complex when a design has more than one objective. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used: a curve plotting the two objective functions for the best cases, from which the designer must choose a point. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolutionary principles. Technically, EAs belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic character. The optimization is initialized by picking random solutions from the search space, and the solutions then progress towards the optimal point through operators such as selection, combination, crossover, and/or mutation. These operators are applied to the old solutions ("parents") so that new sets of design variables ("children") appear. The process is repeated until the optimal solution to the problem is reached, as sketched below. Reliable and robust computational fluid dynamics (CFD) solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automobiles. The coupling of CFD and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
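A minimal, self-contained sketch of the loop described above (random initialization, mutation, Pareto-based selection) follows. The two toy objective functions stand in for expensive CFD evaluations; population size, mutation scale, and the objectives themselves are assumptions for illustration only.

```python
# Minimal multi-objective EA sketch: random initialisation, mutation,
# and non-dominated (Pareto) selection. A CFD solver would replace the
# toy objectives f1, f2 in a real aerodynamic shape optimization.
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    # toy stand-ins for, e.g., weight (minimise) and inverse strength (minimise)
    return np.array([np.sum(x ** 2), np.sum((x - 2.0) ** 2)])

def dominates(a, b):
    # a Pareto-dominates b: no worse in all objectives, better in at least one
    return np.all(a <= b) and np.any(a < b)

pop = rng.uniform(-5, 5, size=(40, 3))            # 40 parents, 3 design variables
for generation in range(100):
    children = pop + rng.normal(0, 0.3, pop.shape)  # mutation operator
    combined = np.vstack([pop, children])
    scores = np.array([objectives(x) for x in combined])
    # survivors: the non-dominated front, topped up with random individuals
    front = [i for i, s in enumerate(scores)
             if not any(dominates(scores[j], s)
                        for j in range(len(scores)) if j != i)]
    keep = front[:40]
    while len(keep) < 40:
        keep.append(int(rng.integers(len(combined))))
    pop = combined[np.array(keep)]

print("Pareto front size after 100 generations:", len(front))
```

In practice the selection step would also preserve diversity along the front (as NSGA-II does with crowding distance); this sketch keeps only plain dominance for clarity.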

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 255
200 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field

Authors: Buruk Kitachew Wossenyeleh

Abstract:

Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are treated as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment was determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This recharge was used as the top boundary condition for the groundwater model of the study area. In the groundwater modeling with Processing MODFLOW, constant-head boundary conditions were used at the northern and southern boundaries of the study area, and head-dependent flow boundary conditions at the eastern and western boundaries. The model was calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed a root mean squared error of 1.89 m and a Nash-Sutcliffe efficiency (NSE) of 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as a land-use type. The calibrated model was run for a climate change scenario and a well operation scenario. Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% in 2100 relative to current conditions for the high and low climate change scenarios, respectively. The groundwater head varies from 13 m to 82 m for the high climate change scenario and from 13 m to 76 m for the low scenario. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m; if a shutdown of the pumps is assumed, the head varies from 13 m to 79 m. It is concluded that the groundwater model was set up satisfactorily, with some limitations, and that the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
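For readers unfamiliar with the model components named above, the following sketch shows how a single-layer, steady-state MODFLOW model with recharge on top and constant-head boundaries could be assembled with the flopy package. Grid dimensions, conductivity, recharge rate, and starting heads are placeholders, not the calibrated values of this study, and the classic MODFLOW-2005 executable is assumed to be available.

```python
# Minimal steady-state MODFLOW sketch with flopy: one layer, recharge on
# top, constant heads on the north and south boundaries. All parameter
# values are illustrative placeholders.
import numpy as np
import flopy

nrow, ncol = 50, 50
mf = flopy.modflow.Modflow("ter_kamerenbos", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=nrow, ncol=ncol,
                               delr=200.0, delc=200.0, top=80.0, botm=0.0)

ibound = np.ones((1, nrow, ncol), dtype=int)
ibound[0, 0, :] = -1           # constant-head boundary (north)
ibound[0, -1, :] = -1          # constant-head boundary (south)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=40.0)

lpf = flopy.modflow.ModflowLpf(mf, hk=10.0)       # hydraulic conductivity (m/d)
rch = flopy.modflow.ModflowRch(mf, rech=0.0003)   # recharge from a WetSpass-style
                                                  # water balance (m/d)
pcg = flopy.modflow.ModflowPcg(mf)                # solver
oc = flopy.modflow.ModflowOc(mf)                  # output control

mf.write_input()
success, _ = mf.run_model(silent=True)
print("model converged:", success)
```

The head-dependent east and west boundaries of the actual study would be added with a general-head boundary package; they are omitted here to keep the sketch short.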

Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation

Procedia PDF Downloads 152
199 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy

Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren

Abstract:

Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding the effects of ethanol treatment on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method are essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and the lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
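A minimal sketch of the two modelling tasks described above follows: SVM regression for gel strength and a PCA-plus-KNN classifier for ethanol treatment level, here built with scikit-learn. The spectra and labels are random placeholders, and the variable selection step (UVE/CARS/GA) is omitted for brevity.

```python
# Sketch of the NIR modelling pipeline: SVM regression of gel strength
# and PCA+KNN classification of treatment level. Data are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(120, 700)             # 120 NIR spectra, 700 wavelengths
y_strength = np.random.rand(120)         # measured gel strength (reference values)
y_level = np.random.randint(0, 4, 120)   # ethanol treatment level (4 classes)

# regression: predict gel strength from spectra
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
reg.fit(X, y_strength)

# classification: assign treatment level from spectra
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y_level)

print(reg.predict(X[:3]), clf.predict(X[:3]))
```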

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment

Procedia PDF Downloads 36
198 What Are the Problems in the Case of Analysis of Selenium by Inductively Coupled Plasma Mass Spectrometry in Food and Food Raw Materials?

Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Dávid Andrási

Abstract:

For the analysis of elements in different food, feed, and food raw material samples, a flame atomic absorption spectrometer (FAAS), a graphite furnace atomic absorption spectrometer (GF-AAS), an inductively coupled plasma optical emission spectrometer (ICP-OES), or an inductively coupled plasma mass spectrometer (ICP-MS) is generally applied. All of these analytical instruments are subject to different physical and chemical interfering effects when analysing food and food raw material samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays, it is very important to analyse increasingly small concentrations of elements, and of the above instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations. The applied ICP-MS instrument also has Collision Cell Technology (CCT). Using the CCT mode, certain elements have detection limits better by one to three orders of magnitude compared to a normal ICP-MS analytical method. The CCT mode improves detection limits mainly for the analysis of selenium (and also arsenic, germanium, vanadium, and chromium). To elaborate an analytical method for selenium with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: (1) isobaric elemental, (2) isobaric molecular, and (3) physical interferences. When analysing food and food raw material samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness, and moreover different carbon contents, across food, feed, and food raw material samples. In our research work, the effects of different water-soluble compounds and of various carbon contents (as sample matrix) on changes in the intensity of selenium were examined. Thus we could finally find "opportunities" to decrease the error of selenium analysis. To analyse selenium in food, feed, and food raw material samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying the collision cell technique (CCT). The extent of the interfering effect of the carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of Se, which can be corrected using an internal standard (arsenic or tellurium), as sketched below.
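The internal-standard correction mentioned above can be illustrated with a few lines of arithmetic: the analyte signal is rescaled by the recovery of a spiked internal standard that suffers the same matrix suppression. This is a generic sketch of the standard practice, with illustrative numbers; the actual correction factors of the study are not reproduced here.

```python
# Sketch of internal-standard correction for matrix effects in ICP-MS:
# the Se signal is rescaled by the recovery of a spiked internal
# standard (e.g., Te). All intensities are illustrative values.
measured_se = 1520.0     # Se intensity (counts/s) in the carbon-rich sample
is_expected = 10000.0    # internal-standard intensity without matrix effects
is_measured = 8700.0     # internal-standard intensity actually observed

recovery = is_measured / is_expected       # matrix-induced signal suppression
corrected_se = measured_se / recovery      # Se intensity after correction
print(f"recovery = {recovery:.1%}, corrected Se signal = {corrected_se:.0f} cps")
```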

Keywords: selenium, ICP-MS, food, food raw material

Procedia PDF Downloads 508
197 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELT) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and on the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NN) for wavefront sensing, especially convolutional NN, which are well known for being non-linear and image-friendly problem solvers. Aims: In this paper, we study the prospect of using an NN to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will also serve as the wavefront sensor. In this work, we study a point source, i.e., the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, i.e., below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results motivate further study of higher aberrations and noise.
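A possible setup for the regression task described above is sketched below in PyTorch: a stock VGG backbone adapted to take a single-channel PSF image and output one phasing coefficient vector. The number of segments, coefficients per segment, and image size are assumptions for illustration, not the paper's actual configuration.

```python
# Sketch of PSF-based segment phasing with a VGG-style network: the
# focal-plane image is regressed onto per-segment piston/tip/tilt
# coefficients. Sizes and output dimension are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16

n_segments, coeffs_per_segment = 7, 3
net = vgg16(weights=None)
net.features[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)   # grayscale PSF input
net.classifier[-1] = nn.Linear(4096, n_segments * coeffs_per_segment)

psf = torch.randn(8, 1, 224, 224)    # batch of simulated focal-plane PSFs
target = torch.zeros(8, n_segments * coeffs_per_segment)  # known injected phasing
pred = net(psf)                      # predicted phasing coefficients
loss = nn.functional.mse_loss(pred, target)
loss.backward()                      # one training step of the regression
```

Training would iterate this step over simulated PSFs with known injected aberrations; at inference time a single forward pass yields the coefficient estimate, consistent with the millisecond-scale timing reported above.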

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 104
196 Self-Esteem on University Students by Gender and Branch of Study

Authors: Antonio Casero Martínez, María de Lluch Rayo Llinas

Abstract:

This work is part of an investigation into the relationship between romantic love and self-esteem in college students, performed by the students of the course "Methods and Techniques of Social Research" of the Master's in Gender at the University of the Balearic Islands during 2014-2015. In particular, we investigated the relationships that may exist between self-esteem, gender, and field of study. Gender differences in self-esteem are well documented, as is the relationship between gender and branch of study, observed annually in the distribution of university enrolment. Therefore, in this part of the study, we focused on the differences in self-esteem between the sexes across the various branches of study. The study sample consists of 726 individuals (304 men and 422 women) from the 30 undergraduate degrees that the University of the Balearic Islands offered on its campus in the 2014-2015 academic year. The average age was 21.9 years for men and 21.7 years for women. The sampling procedure was random sampling stratified by degree with simple allocation, giving a sampling error of 3.6% for the whole sample at a confidence level of 95% under the most unfavorable assumption (p = q). The Spanish translation of the Rosenberg Self-Esteem Scale (RSE) by Atienza, Moreno, and Balaguer was applied. The psychometric properties of this translation reach a test-retest reliability of 0.80 and an internal consistency between 0.76 and 0.87; in this study, we obtained an internal consistency of 0.82 (see the sketch below). The results confirm the expected gender differences in self-esteem, although not in all branches of study. Mean levels of self-esteem in women are lower in all branches of study, reaching statistical significance in Science, Social Sciences and Law, and Engineering and Architecture. However, when the variability of self-esteem by branch of study is analysed within each gender, the results show independence in the case of men, whereas in the case of women we find statistically significant differences, arising from the lower self-esteem of Arts and Humanities students versus Social and Legal Sciences students. These findings confirm the results of numerous investigations in which female self-esteem levels always appear below male levels, suggesting that perhaps we should consider the two populations separately rather than continually emphasizing the difference. The branch of study, for its part, did not appear as an explanatory factor of relevance beyond the largest absolute gender difference being detected in the technical branch, one in which women are historically a minority; ergo, it is not disciplinary or academic characteristics that explain the differences, but the differentiated social context that occurs within it.
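The internal-consistency figure reported above (0.82) is a Cronbach's alpha. For reference, a minimal sketch of that computation follows; the response matrix is a random placeholder (real RSE data, with correlated items, would be needed to reproduce a value near 0.82).

```python
# Sketch of Cronbach's alpha for the 10-item RSE: rows are respondents,
# columns are items. The data here are random placeholders, so the
# resulting alpha will be near zero rather than the study's 0.82.
import numpy as np

responses = np.random.randint(1, 5, size=(726, 10))   # placeholder Likert data
k = responses.shape[1]
item_var = responses.var(axis=0, ddof=1).sum()        # sum of item variances
total_var = responses.sum(axis=1).var(ddof=1)         # variance of scale totals
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```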

Keywords: study branch, gender, self-esteem, applied psychology

Procedia PDF Downloads 465
195 Exploring Language Attrition Through Processing: The Case of Mising Language in Assam

Authors: Chumki Payun, Bidisha Som

Abstract:

The Mising language, spoken by the Mising community in Assam, belongs to the Tibeto-Burman family of languages. It is one of the smaller languages of the region and faces endangerment due to the dominance of larger languages such as Assamese. The language is spoken in close in-group settings and is gradually losing ground to the dominant languages, partly due to an education setup in which schools use only the dominant languages. While there are a number of factors behind the contemporary status of the language, and those can be studied using sociolinguistic tools, the current work aims to contribute to the understanding of language attrition through language processing, in order to establish whether the effect of second-language dominance goes beyond mere usage patterns and has an impact on cognitive strategies. When bilingualism spreads widely in a society and results in a language shift, speakers often perform better in their second language (L2) than in their first language (L1) across a variety of task settings, in both comprehension and production. This phenomenon was investigated in Mising-Assamese bilinguals using a picture naming task in two districts of Assam, Jorhat and Tinsukia, where the relative dominance of the L2 differs slightly. This explorative study aimed to investigate whether the L2 dominance is visible in performance and whether the pattern differs between the two places, thus pointing to the degree of language loss in each case. The findings would have implications for native language education, as education in one's mother tongue can help reverse the effects of language attrition, helping to preserve the traditional knowledge system. The hypothesis was that, due to the dominance of the L2, subjects' performance in the task would be better in Assamese than in Mising. The experiment: Mising-Assamese bilingual participants (age range 21-31; N = 20 from each district) had to perform a picture naming task in which they were shown pictures of familiar objects and asked to name them in four scenarios: (a) only in Mising; (b) only in Assamese; (c) a cued mixed block, in which an auditory cue determines the language in which to name the object; and (d) a non-cued mixed block, in which participants are given no language cue but are instructed to name the pictures in whichever language they feel most comfortable. The experiment was designed and executed using E-Prime 3.0; responses were collected with a Chronos response box and recorded with an audio recorder. Preliminary analysis reveals dominance of the L2 over the L1. The paper will present a comparison of response latencies, an error analysis, and the switch cost in L1 and L2 (sketched below), and will explain these from the perspective of language attrition.
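To illustrate how a switch cost could be computed from the cued mixed block, the sketch below compares mean naming latencies on switch versus repeat trials per language, using pandas. The file name and column names (language, block, rt_ms) are hypothetical placeholders, not the study's actual data format.

```python
# Sketch of a switch-cost analysis for the cued mixed block: mean naming
# latency on switch vs. repeat trials, separately per language.
import pandas as pd

df = pd.read_csv("naming_task.csv")   # hypothetical file: one row per trial
# a trial is a "switch" when its language differs from the previous trial's
# (the first trial also counts as a switch here)
df["switch"] = df["language"].ne(df["language"].shift())

summary = (df[df["block"] == "cued_mix"]
           .groupby(["language", "switch"])["rt_ms"]
           .agg(["mean", "count"]))
switch_cost = (summary.xs(True, level="switch")["mean"]
               - summary.xs(False, level="switch")["mean"])
print(summary, "\nswitch cost (ms) per language:\n", switch_cost)
```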

Keywords: bilingualism, language attrition, language processing, Mising language

Procedia PDF Downloads 22
194 Effect of Downstream Pressure in Tuning the Flow Control Orifices of Pressure Fed Reaction Control System Thrusters

Authors: Prakash M.N, Mahesh G, Muhammed Rafi K.M, Shiju P. Nair

Abstract:

Introduction: In launch vehicle missions, reaction control thrusters are used for three-axis stabilization of the vehicle during coasting phases. A pressure-fed propulsion system is used to operate these thrusters because of its lower complexity. In liquid stages, these thrusters are designed to draw propellant from the same tank used for the main propulsion system, so flow control orifices are used in the feed lines to regulate the propellant flow rates. These orifices are calibrated separately as per the flow rate requirement of each thruster for the nominal operating conditions. In some missions, it was observed that the thrusters operated at a higher thrust than nominal. This point was addressed through a series of cold-flow and hot tests carried out on the ground, and this paper elaborates the details. Discussion: In order to find the exact reason for this phenomenon, two flight-configuration thrusters were identified and hot-tested on the ground with calibrated orifices and feed lines. During these tests, the chamber pressure, which is directly proportional to the thrust, was measured. In both cases, chamber pressures higher than nominal by 0.32 bar to 0.7 bar were recorded. The increase in chamber pressure is due to an increase in the oxidizer flow rate of both thrusters. Upon further investigation, it was observed that the calibration of the feed line is done with ambient pressure downstream, whereas in actual flight conditions the orifices operate with 10 to 11 bar pressure downstream. Due to this higher downstream pressure, the flow through the orifices increases, and thereby the thrusters operate at higher chamber pressure values. Conclusion: As part of further investigatory tests, two fresh thrusters were realized. Orifice tuning of these thrusters was carried out in three different ways: in the first trial, the orifice tuning was done by simulating 1 bar pressure downstream; the second trial was done with the injector assembled downstream; and in the third trial, a downstream pressure equal to the flight injection pressure was simulated. Using these calibrated orifices, hot tests were carried out under simulated vacuum conditions. Chamber pressure and flow rate values matched the predictions exactly for the second and third trials, but for the first trial, the chamber pressure values obtained in the hot test exceeded the prediction. This clearly shows that the flow was detached in the first trial and attached in the second and third trials. Hence, the error in tuning the flow control orifices is pinpointed as the reason for the higher chamber pressure observed in flight.
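The mechanism above can be illustrated with the incompressible orifice-flow relation, mdot = Cd · A · sqrt(2 · rho · dP). With ambient pressure downstream the flow detaches (cavitates) and the effective discharge coefficient is lower; with flight-level back pressure the flow attaches and the effective Cd, hence the flow rate at a given pressure drop, is higher. The sketch below uses assumed values throughout (diameter, density, and both Cd figures), not the actual thruster data.

```python
# Sketch of the orifice-flow relation behind the observed behaviour:
# mdot = Cd * A * sqrt(2 * rho * dP). Same pressure drop, two flow
# regimes. All numbers are illustrative assumptions.
import math

d = 0.8e-3                  # orifice diameter, m (assumed)
rho = 1440.0                # oxidizer density, kg/m^3 (assumed)
A = math.pi * (d / 2) ** 2  # orifice area, m^2

def mdot(cd, dp_bar):
    return cd * A * math.sqrt(2.0 * rho * dp_bar * 1e5)  # bar -> Pa

# attached (back-pressured) flow passes more propellant than the
# detached flow seen during ambient-downstream calibration
print("detached, Cd ~ 0.61:", mdot(0.61, 7.0), "kg/s")
print("attached, Cd ~ 0.80:", mdot(0.80, 7.0), "kg/s")
```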

Keywords: reaction control thruster, propellant, orifice, chamber pressure

Procedia PDF Downloads 201
193 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how to achieve knowledge understanding and prediction (situation awareness, SA) for multiple agents conducting a searching activity, using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor the agents' knowledge about their environment, and cases were recorded for network training using the expectation-maximisation or gradient descent algorithm. The trained network is then used for decision making and environmental situation prediction. Forest-fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focused on two problems: (i) an effective path planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path planning approach, inspired by animal foraging behaviour and using the Lévy distribution augmented with Bayesian reasoning, is fully described in this paper (a sketch of the step generation follows below). Results prove that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous-waypoints assessment, which improves on the ordinary Lévy flight by saving the agents' resources and mission time through avoiding redundant search. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results prove their effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results suggest that even a small amount of agent mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper proves the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work focuses on simplifying the architecture.
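The Lévy-flight step generation referred to above can be sketched with Mantegna's algorithm, which is a standard way to draw heavy-tailed step lengths. The stability index beta and the trajectory length are assumptions for illustration, not the paper's tuned values.

```python
# Sketch of Lévy-flight step generation (Mantegna's algorithm) for a
# searching agent: many short moves with occasional long relocations.
import math
import numpy as np

rng = np.random.default_rng(1)

def levy_steps(n, beta=1.5):
    # Mantegna's method: step = u / |v|^(1/beta), u ~ N(0, sigma_u), v ~ N(0, 1)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.normal(0, sigma_u, n)
    v = rng.normal(0, 1, n)
    return u / np.abs(v) ** (1 / beta)

# 2-D UAV trajectory: heavy-tailed step lengths in random directions
steps = levy_steps(500)
angles = rng.uniform(0, 2 * math.pi, 500)
path = np.cumsum(np.c_[steps * np.cos(angles), steps * np.sin(angles)], axis=0)
print(path[:3])
```

The heavy tail of the step distribution is what distinguishes this strategy from fixed-pattern sweeps: most moves stay local, but rare long jumps relocate the agent to unexplored regions, improving energy and time utilisation.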

Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence

Procedia PDF Downloads 144