Search results for: factor models
9622 Trend and Distribution of Heavy Metals in Soil and Sediment: North of Thailand Region
Authors: Chatkaew Tansakul, Saovajit Nanruksa, Surasak Chonchirdsin
Abstract:
Heavy metals in the environment can originate from both natural weathering processes and human activity, and may present significant risks to human health and the wider environment. Several heavy metals, i.e., Arsenic (As) and Manganese (Mn), are found at relatively high concentrations in the northern part of Thailand, presumably derived from natural parent rocks and materials. However, the literature is too scarce to identify the accurate root cause and best available explanation. This study therefore aims to gather heavy metal data in 5 provinces of the North of Thailand where PTT Exploration and Production (PTTEP) public company limited has operated for more than 20 years. About a thousand heavy metal analyses are collected and interpreted in terms of the Enrichment Factor (EF). The trend and distribution of heavy metals in soil and sediment are analyzed together with the geochemistry of the regional soil and rock. In addition, the relationship between land use and heavy metal distribution is investigated. As a first conclusion, the heavy metal concentrations of As and Mn in the studied areas are 7.0 and 588.6 ppm, respectively, which are comparable to those in regional parent materials (1–12 and 850–1,000 ppm for As and Mn, respectively). Moreover, there is no significant escalation of heavy metals in these studied areas over the two decades. Keywords: contaminated soil, enrichment factor, heavy metals, parent materials in North of Thailand
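The Enrichment Factor used in the abstract has a standard definition: the metal-to-reference-element ratio in the sample divided by the same ratio in the background (crustal or parent) material. A minimal sketch — the reference element (Fe) and all numeric values below are illustrative assumptions, not the study's data:

```python
def enrichment_factor(c_metal, c_ref, bg_metal, bg_ref):
    """EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background."""
    return (c_metal / c_ref) / (bg_metal / bg_ref)

# Hypothetical sample: As = 7.0 ppm against a background of 6.5 ppm,
# normalised by Fe (4.1% in sample vs 4.0% in background) -- illustrative only.
ef_as = enrichment_factor(7.0, 4.1, 6.5, 4.0)
print(round(ef_as, 2))  # an EF close to 1 suggests a natural (crustal) origin
```

An EF well above 1 would instead point to anthropogenic enrichment relative to the parent material.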
Procedia PDF Downloads 156
9621 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets; the best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin. Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision
Procedia PDF Downloads 126
9620 The Parental Involvement as Predictor of Happiness in School-Aged Children
Authors: Giedre Sirvinskiene, Kastytis Smigelskas
Abstract:
Quality of family relations is an important factor in child development; however, the role of joint family activities in adolescent happiness still needs investigation. The aim of this study is to analyze associations between the happiness of school-aged children and parental involvement. The analysis uses Lithuanian data from the cross-sectional Health Behaviour in School-Aged Children (HBSC) study. The sample comprised 5730 children aged 11–15 years. Results: The odds of happiness were 2.38 times higher if children were living together with their mother (95% CI: 1.81–3.13) and 1.81 times higher with their father (95% CI: 1.53–2.15). However, the likelihood of happiness was 7.21 times lower if the adolescent had difficulties talking with the mother (95% CI: 5.42–9.61) and 6.40 times lower with the father (95% CI: 4.80–8.56). Joint daily adolescent-parent activities also predicted the odds of happiness: joint TV watching by 5.96 times (95% CI: 4.21–8.43), having meals together by 7.02 times (95% CI: 4.77–10.32), going for a walk together by 4.30 times (95% CI: 2.96–6.26), visiting places by 6.85 times (95% CI: 4.74–9.90), visiting friends and relatives by 7.13 times (95% CI: 4.87–10.43), sporting by 2.76 times (95% CI: 1.83–4.18), and discussing various things by 7.35 times (95% CI: 5.50–9.82). Conclusions: Joint parent-adolescent activities and communication are related to greater adolescent happiness. Though adolescence is a period when relationships with peers gain importance, communication and joint activities with parents remain a significant factor in adolescent happiness. Keywords: adolescent, family, happiness, school-age
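Odds ratios of the kind reported above come from logistic regression, but the basic quantity can be illustrated from a 2x2 contingency table with a Wald confidence interval. A minimal sketch with invented counts (not the HBSC data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed & outcome, b = exposed & no outcome,
    c = unexposed & outcome, d = unexposed & no outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 400 happy / 200 unhappy among children having daily
# meals with parents vs 150 happy / 300 unhappy among those who do not.
or_, lo, hi = odds_ratio_ci(400, 200, 150, 300)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI that excludes 1 (as in the abstract's estimates) indicates a statistically significant association.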
Procedia PDF Downloads 252
9619 Association of the Time in Targeted Blood Glucose Range of 3.9–10 Mmol/L with the Mortality of Critically Ill Patients with or without Diabetes
Authors: Guo Yu, Haoming Ma, Peiru Zhou
Abstract:
BACKGROUND: In addition to hyperglycemia, hypoglycemia, and glycemic variability, a decrease in the time in the targeted blood glucose range (TIR) may be associated with an increased risk of death for critically ill patients. However, the relationship between the TIR and mortality may be influenced by the presence of diabetes and by glycemic variability. METHODS: A total of 998 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. The TIR is defined as the percentage of time spent in the target blood glucose range of 3.9–10.0 mmol/L within 24 hours. The relationship between TIR and in-hospital death in diabetic and non-diabetic patients was analyzed, as was the effect of glycemic variability. RESULTS: The binary logistic regression model showed a significant association between the TIR as a continuous variable and the in-hospital death of severely ill non-diabetic patients (OR=0.991, P=0.015). As a classification variable, TIR≥70% was significantly associated with in-hospital death (OR=0.581, P=0.003); specifically, TIR≥70% was a protective factor against the in-hospital death of severely ill non-diabetic patients. The TIR of severely ill diabetic patients was not significantly associated with in-hospital death; however, glycemic variability was significantly and independently associated with it (OR=1.042, P=0.027). Binary logistic regression analysis of composite indices showed that for non-diabetic patients, the C3 index (low TIR & high CV) was a risk factor for increased mortality (OR=1.642, P<0.001). In addition, for diabetic patients, the C3 index was an independent risk factor for death (OR=1.994, P=0.008), and the C4 index (low TIR & low CV) was independently associated with increased survival.
CONCLUSIONS: The TIR of non-diabetic patients during ICU hospitalization was associated with in-hospital death even after adjusting for disease severity and glycemic variability. There was no significant association between the TIR and mortality of diabetic patients. However, for both diabetic and non-diabetic critically ill patients, the combined effect of high TIR and low CV was significantly associated with ICU mortality. Diabetic patients seem to have higher blood glucose fluctuations and can tolerate a wider TIR range. Both diabetic and non-diabetic critically ill patients should maintain blood glucose levels within the target range to reduce mortality. Keywords: severe disease, diabetes, blood glucose control, time in targeted blood glucose range, glycemic variability, mortality
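The TIR metric described in METHODS can be computed directly from a series of glucose readings. A sketch assuming evenly spaced readings over the 24-hour window (all values invented):

```python
def time_in_range(glucose_mmol, low=3.9, high=10.0):
    """Percentage of glucose readings inside the target band -- a proxy for
    TIR when readings are evenly spaced over the 24-hour window."""
    in_range = sum(1 for g in glucose_mmol if low <= g <= high)
    return 100.0 * in_range / len(glucose_mmol)

# Illustrative 24-hour profile sampled every 3 hours (made-up values).
readings = [5.2, 6.8, 11.4, 9.9, 4.1, 3.5, 7.7, 8.3]
print(time_in_range(readings))  # 6 of 8 readings in band -> 75.0
```

With irregular sampling, the durations between readings would need to be used as weights instead of counting readings equally.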
Procedia PDF Downloads 222
9618 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines
Authors: Xiaogang Li, Jieqiong Miao
Abstract:
To address the low prediction accuracy of the grey forecasting model, an improved grey prediction model is put forward. Firstly, a trigonometric function transform is applied to the original data sequence to improve its smoothness; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine, giving the grey support vector machine model (SGM-SVM). Before the model is established, the data are preprocessed with trigonometric functions and the accumulated generating operation to enhance their smoothness and weaken their randomness; a support vector machine (SVM) prediction model is then built on the preprocessed data, with its parameters selected by a genetic algorithm to obtain the global optimum. Finally, the forecast data are recovered through the "regressive generation" operation. To show that the SGM-SVM model is superior to other models, battery life data from CALCE were selected. The presented model is used to predict battery life, and the predicted results are compared with those of the grey model and of support vector machines. For a more intuitive comparison of the three models, their root mean square errors are also reported. The results show that the grey support vector machine (SGM-SVM) gives the best life predictions, with a root mean square error of only 3.18%. Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error
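The accumulated generating operation (AGO) and its inverse ("regressive generation") used in the preprocessing above are simple running sums and first differences. A sketch of just that step — the trigonometric transform and the GA-tuned SVM are omitted, and the series is invented:

```python
import itertools

def ago(seq):
    """First-order accumulated generating operation (1-AGO): running sums.
    The accumulated series is smoother and less random than the raw one."""
    return list(itertools.accumulate(seq))

def iago(acc):
    """Inverse AGO ('regressive generation'): first differences restore the data."""
    return [acc[0]] + [acc[i] - acc[i - 1] for i in range(1, len(acc))]

raw = [100, 98, 97, 95, 92]       # made-up capacity-like series (%)
smoothed = ago(raw)
assert iago(smoothed) == raw      # the transform is exactly invertible
print(smoothed)
```

In the SGM-SVM scheme, the forecast is produced on the accumulated scale and then mapped back to the original scale with the inverse operation.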
Procedia PDF Downloads 46
9617 A Study on the New Weapon Requirements Analytics Using Simulations and Big Data
Authors: Won Il Jung, Gene Lee, Luis Rabelo
Abstract:
Since many weapon systems are becoming more complex and diverse, various problems arise in terms of acquisition cost, time, and performance limitations. Although real-world experiments yield high-fidelity results, they are costly, dangerous, and time-consuming as a way to obtain the Required Operational Characteristics (ROC) for a new weapon acquisition. Moreover, most research to date has relied on a large number of assumptions, so a bias is present in the experimental results. A new methodology is therefore proposed to solve these problems without such a variety of assumptions. The ROC of the new weapon system is developed through this methodology, which analyzes big data generated by simulating various scenarios based on virtual and constructive models involving six Degrees of Freedom (6DoF). The new methodology enables us to identify unbiased ROC for new weapons by reducing assumptions, and provides support for acquiring optimal weapon systems. Keywords: big data, required operational characteristics (ROC), virtual and constructive models, weapon acquisition
Procedia PDF Downloads 289
9616 The Impact of Social Support on Anxiety and Depression under the Context of COVID-19 Pandemic: A Scoping Review and Meta-Analysis
Authors: Meng Wu, Atif Rahman, Eng Gee Lim, Jeong Jin Yu, Rong Yan
Abstract:
Context: The COVID-19 pandemic has had a profound impact on mental health, with increased rates of anxiety and depression observed. Social support, a critical factor in mental well-being, has also undergone significant changes during the pandemic. This study aims to explore the relationship between social support, anxiety, and depression during COVID-19, taking into account various demographic and contextual factors. Research Aim: The main objective of this study is to conduct a comprehensive systematic review and meta-analysis to examine the impact of social support on anxiety and depression during the COVID-19 pandemic. The study aims to determine the consistency of these relationships across different age groups, occupations, regions, and research paradigms. Methodology: A scoping review and meta-analytic approach were employed in this study. A search was conducted across six databases from 2020 to 2022 to identify relevant studies. The selected studies were then subjected to random effects models, with pooled correlations (r and ρ) estimated. Homogeneity was assessed using Q and I² tests. Subgroup analyses were conducted to explore variations across different demographic and contextual factors. Findings: The meta-analysis of both cross-sectional and longitudinal studies revealed significant correlations between social support, anxiety, and depression during COVID-19. The pooled correlations (ρ) indicated a negative relationship between social support and anxiety (ρ = -0.30, 95% CI = [-0.333, -0.255]) as well as depression (ρ = -0.27, 95% CI = [-0.370, -0.281]). However, further investigation is required to validate these results across different age groups, occupations, and regions. Theoretical Importance: This study emphasizes the multifaceted role of social support in mental health during the COVID-19 pandemic. It highlights the need to reevaluate and expand our understanding of social support's impact on anxiety and depression. 
The findings contribute to the existing literature by shedding light on the associations and complexities involved in these relationships. Data Collection and Analysis Procedures: The data collection involved an extensive search across six databases to identify relevant studies. The selected studies were then subjected to rigorous analysis using random effects models and subgroup analyses. Pooled correlations were estimated, and homogeneity was assessed using Q and I² tests. Question Addressed: This study aimed to address the question of the impact of social support on anxiety and depression during the COVID-19 pandemic. It sought to determine the consistency of these relationships across different demographic and contextual factors. Conclusion: The findings of this study highlight the significant association between social support, anxiety, and depression during the COVID-19 pandemic. However, further research is needed to validate these findings across different age groups, occupations, and regions. The study emphasizes the need for a comprehensive understanding of social support's multifaceted role in mental health and the importance of considering various contextual and demographic factors in future investigations. Keywords: social support, anxiety, depression, COVID-19, meta-analysis
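Pooling correlations across studies, as done in this meta-analysis, is commonly carried out on Fisher's z scale with inverse-variance weights. A simplified fixed-effect sketch — a true random-effects model would add a between-study variance term (tau²) to each weight, and all study values below are invented, not those of the review:

```python
import math

def pool_correlations(rs, ns):
    """Inverse-variance pooled correlation via Fisher's z transform.
    Fixed-effect simplification: weight = n - 3, since var(z) = 1/(n - 3)."""
    zs = [math.atanh(r) for r in rs]          # Fisher z of each study's r
    ws = [n - 3 for n in ns]                  # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)                   # back-transform to r scale

# Hypothetical study-level correlations between social support and anxiety:
rs = [-0.25, -0.35, -0.30]
ns = [200, 150, 400]
print(round(pool_correlations(rs, ns), 3))
```

Heterogeneity statistics (Q, I²) would then be computed from the weighted squared deviations of each study's z from the pooled z.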
Procedia PDF Downloads 62
9615 Estimation of Implicit Colebrook White Equation by Preferable Explicit Approximations in the Practical Turbulent Pipe Flow
Authors: Itissam Abuiziah
Abstract:
In several hydraulic systems, it is necessary to calculate the head losses, which depend on the flow resistance friction factor in the Darcy equation. Computing this friction factor is based on the implicit Colebrook-White equation, which is considered the standard for friction calculation but carries a high computational cost; therefore, several explicit approximation methods are used to solve the implicit equation and overcome this issue. The relative error is then used to determine the most accurate of the approximation methods considered. Steel, cast iron, and polyethylene pipe materials were investigated, with practical diameters ranging from 0.1 m to 2.5 m and velocities between 0.6 m/s and 3 m/s. In short, the results show that a method suitable for some cases may not be accurate for others. For example, for steel pipe materials, the Zigrang and Sylvester method proved the most precise at low velocities of 0.6 m/s to 1.3 m/s, while the Haaland method showed a lower relative error as velocity gradually increased. Accordingly, the simulation results of this study might be employed by hydraulic engineers, so they can decide which method is most applicable to their practical pipe system. Keywords: Colebrook–White, explicit equation, friction factor, hydraulic resistance, implicit equation, Reynolds numbers
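The implicit Colebrook-White equation and one popular explicit approximation (Haaland's) can be compared directly. A sketch with an assumed roughness and Reynolds number rather than the paper's full test matrix:

```python
import math

def colebrook(re, rel_rough, tol=1e-12):
    """Solve the implicit Colebrook-White equation
    1/sqrt(f) = -2 log10( eps/(3.7 D) + 2.51 / (Re sqrt(f)) )
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 7.0  # reasonable initial guess for 1/sqrt(f) in turbulent flow
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x ** 2

def haaland(re, rel_rough):
    """Explicit Haaland approximation to the Colebrook-White equation."""
    return (-1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / re)) ** -2

# Illustrative case (assumed): steel pipe, eps = 0.045 mm, D = 0.1 m, Re = 2e5.
re, rr = 2e5, 0.045e-3 / 0.1
f_exact = colebrook(re, rr)
f_approx = haaland(re, rr)
print(f_exact, f_approx, 100.0 * abs(f_approx - f_exact) / f_exact)
```

The last value printed is the relative error in percent, which is the criterion the study uses to rank the explicit approximations.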
Procedia PDF Downloads 188
9614 Factor Associated with Smoking Cessation among Pregnant Woman: A Systematic Review
Authors: Galila Aisyah Latif Amini, Husnul Khatimah, Citra Amelia
Abstract:
Smoking among women is of particular concern for the maternal and child health community due to the strong association between prenatal smoking and adverse birth outcomes. Pregnancy is perceived to be a unique occasion for smoking cessation, as it provides motivation to care for the unborn fetus. This study aimed to find out the determinants of smoking cessation among pregnant women. The method used in this study is a systematic review. We identified relevant studies by searching online science databases through SAGE Journals, ProQuest, Scopus, Emerald, JSTOR, and SpringerLink. Journals were screened by title and abstract according to the research topic, filtered using the inclusion and exclusion criteria, and then critically appraised. The four studies reviewed found that the determinants of smoking cessation are parity, level of education, socioeconomic status, household SHS exposure, the smoking habits of both parents, partner smoking status, psychological factors, antenatal care, interventions by health care providers, and age and smoking duration. The factor most strongly associated with smoking cessation is parity (OR 2.55; CI 2.34–2.77). The results of this study are expected to inform the development of future smoking cessation and relapse prevention programs. Keywords: pregnancy, smoking cessation, tobacco use cessation, smoking
Procedia PDF Downloads 243
9613 Two-Sided Information Dissemination in Takeovers: Disclosure and Media
Authors: Eda Orhun
Abstract:
Purpose: This paper analyzes a target firm’s decision to voluntarily disclose information during a takeover event and the effect of such disclosures on the outcome of the takeover. Such voluntary disclosures, especially in the form of earnings forecasts made around takeover events, may affect shareholders’ assessments of the target firm’s value and, in turn, the takeover outcome. This study aims to shed light on this question. Design/methodology/approach: The paper examines the role of voluntary disclosures by target firms during a takeover event in the likelihood of takeover success, both theoretically and empirically. A game-theoretical model is set up to analyze the voluntary disclosure decision of a target firm to inform shareholders about its real worth. The empirical implication of the model is tested by employing binary outcome models, where the disclosure variable is obtained by identifying the target firms in the sample that provide positive news by issuing increasing management earnings forecasts. Findings: The model predicts that a voluntary disclosure of positive information by the target decreases the likelihood that the takeover succeeds. The empirical analysis confirms this prediction by showing that positive earnings forecasts by target firms during takeover events increase the probability of takeover failure. Overall, it is shown that information dissemination through voluntary disclosures by target firms is an important factor affecting takeover outcomes. Originality/Value: To the author's knowledge, this is the first study of the impact of voluntary disclosures by the target firm during a takeover event on the likelihood of takeover success. The results contribute to the information economics, corporate finance, and M&A literatures. Keywords: takeovers, target firm, voluntary disclosures, earnings forecasts, takeover success
Procedia PDF Downloads 318
9612 Gravitational Frequency Shifts for Photons and Particles
Authors: Jing-Gang Xie
Abstract:
This research considers the integration of Quantum Field Theory and the General Theory of Relativity. Although both are successful models for explaining the behavior of particles, they are incompatible, since they work at different masses and energy scales, as evidenced by their conflicting descriptions of black holes and the formation of the universe. Previous efforts at merging the two theories include the likes of String Theory, Quantum Gravity models, and others. In a bid to arrive at an actionable experiment, the paper’s approach starts from the derivations of the existing theories, then tests those derivations by applying the same initial assumptions together with several deviations. The resulting equations reproduce the results of the classical Newtonian model, quantum mechanics, and general relativity as long as conditions are normal. However, the outcomes differ when conditions are extreme; specifically, there are no breakdowns even below the Schwarzschild radius or at the Planck length. This suggests the possibility of integrating the two theories. Keywords: general relativity theory, particles, photons, Quantum Gravity Model, gravitational frequency shift
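For reference, the standard general-relativistic frequency shift for a photon climbing out of a Schwarzschild field (a textbook result, not the paper's own derivation) is:

```latex
% Frequency received at infinity vs frequency emitted at radius r
% around a mass M (Schwarzschild metric, static emitter):
\[
  \frac{\nu_{\infty}}{\nu_{\text{emit}}}
    = \sqrt{1 - \frac{2GM}{rc^{2}}}
    \;\approx\; 1 - \frac{GM}{rc^{2}}
  \qquad \text{for } \frac{2GM}{rc^{2}} \ll 1 .
\]
```

The square root vanishes as $r \to 2GM/c^{2}$ (the Schwarzschild radius), which is exactly the regime where the abstract claims its modified equations avoid a breakdown.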
Procedia PDF Downloads 359
9611 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most acceptable means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type them, and likewise easier to listen to audio played from a device than to extract output from it. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that contains information about the target objects to be detected and the start and end times for extracting the required interval from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects the target objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text.
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 You Only Look Once (YOLO) labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the Generative Pre-Trained Transformer-3 (GPT-3) model. Based on user preference, one can introduce a new speech command format by including some examples of the respective format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that speech commands can be given and the output played from the device. Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
Procedia PDF Downloads 80
9610 Promoting Biofuels in India: Assessing Land Use Shifts Using Econometric Acreage Response Models
Authors: Y. Bhatt, N. Ghosh, N. Tiwari
Abstract:
Acreage response functions are modeled taking account of expected harvest prices, weather-related variables, and other non-price variables, allowing for the possibility of partial adjustment. At the outset, based on the literature on price expectation formation, we explore suitable formulations for estimating farmers' expected prices. Assuming that farmers form expectations rationally, the prices of food and biofuel crops are modeled using time-series methods, testing for possible ARCH/GARCH effects to account for volatility. The prices projected on the basis of these models are then inserted as proxies for the expected prices in the acreage response functions. Food crop acreages in different growing states are found to be sensitive to their prices relative to those of one or more of the biofuel crops considered. The percentage improvement in food crop yields required to offset the acreage loss is worked out. Keywords: acreage response function, biofuel, food security, sustainable development
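A generic Nerlovian partial-adjustment acreage response specification of the kind described above can be written as follows; this is a standard textbook form, since the paper's exact regressors are not given in the abstract:

```latex
% Desired acreage and partial adjustment toward it:
\[
  A_t^{*} = \alpha + \beta\, E_{t-1}[P_t] + \gamma' Z_t + u_t, \qquad
  A_t - A_{t-1} = \lambda \left( A_t^{*} - A_{t-1} \right),
  \quad 0 < \lambda \le 1,
\]
```

where $A_t^{*}$ is the desired acreage, $E_{t-1}[P_t]$ the expected harvest price (here proxied by the time-series price forecasts), $Z_t$ the non-price shifters such as weather variables, and $\lambda$ the speed of adjustment. Substituting the first equation into the second yields an estimable equation in $A_t$, $A_{t-1}$, and the regressors.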
Procedia PDF Downloads 301
9609 Seismic Loss Assessment for Peruvian University Buildings with Simulated Fragility Functions
Authors: Jose Ruiz, Jose Velasquez, Holger Lovon
Abstract:
Peruvian university buildings are critical structures for which very little research on seismic vulnerability is available. This paper develops a probabilistic methodology that predicts seismic loss for university buildings with simulated fragility functions. Two university buildings located in the city of Cusco were analyzed. Fragility functions were developed considering the uncertainty of seismic and structural parameters. The fragility functions were generated with the Latin hypercube technique, an improved Monte Carlo-based method that optimizes the sampling of structural parameters and provides at least 100 reliable samples for every level of seismic demand. Concrete compressive strength, maximum concrete strain, and yield stress of the reinforcing steel were considered the key structural parameters. The seismic demand is defined by synthetic records compatible with the elastic Peruvian design spectrum. Acceleration records are scaled based on the peak ground acceleration on rigid soil (PGA), which goes from 0.05g to 1.00g. A total of 2000 structural models were considered to account for both structural and seismic variability. These functions represent the overall building behavior because they give rational information regarding damage ratios for defined levels of seismic demand. The university buildings show an expected Mean Damage Factor of 8.80% and 19.05%, respectively, for the 0.22g-PGA scenario, which was amplified by the soil type coefficient and resulted in 0.26g-PGA. These ratios were computed considering a seismic demand related to a 10% probability of exceedance in 50 years, which is a requirement of the Peruvian seismic code. These results show an acceptable seismic performance for both buildings. Keywords: fragility functions, university buildings, loss assessment, Monte Carlo simulation, Latin hypercube
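Latin hypercube sampling, the technique named above, stratifies each parameter's range so that n samples hit all n strata exactly once per dimension, which is why it needs far fewer samples than plain Monte Carlo. A minimal sketch — the parameter ranges are illustrative assumptions, not the study's distributions:

```python
import random

def latin_hypercube(n, k, rng=random):
    """n samples in k dimensions on [0, 1): each dimension is split into
    n equal strata, every stratum is sampled exactly once, and the
    stratum order is shuffled independently per dimension."""
    columns = []
    for _ in range(k):
        strata = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(strata)
        columns.append(strata)
    return list(zip(*columns))  # n rows of k coordinates

# E.g. 100 samples of (concrete strength, steel yield stress), mapped from
# [0,1) onto hypothetical engineering ranges:
pts = latin_hypercube(100, 2)
fc = [21.0 + 14.0 * u for u, _ in pts]   # concrete strength, MPa (assumed range)
fy = [380.0 + 80.0 * v for _, v in pts]  # steel yield stress, MPa (assumed range)
print(len(pts))
```

Each sampled parameter pair would then define one structural model to analyze at every seismic demand level.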
Procedia PDF Downloads 144
9608 Evaluating Forecasting Strategies for Day-Ahead Electricity Prices: Insights From the Russia-Ukraine Crisis
Authors: Alexandra Papagianni, George Filis, Panagiotis Papadopoulos
Abstract:
The liberalization of the energy market and the increasing penetration of fluctuating renewables (e.g., wind and solar power) have heightened the importance of the spot market for ensuring efficient electricity supply. This is further emphasized by the EU’s goal of achieving net-zero emissions by 2050. The day-ahead market (DAM) plays a key role in European energy trading, accounting for 80-90% of spot transactions and providing critical insights for next-day pricing. Therefore, short-term electricity price forecasting (EPF) within the DAM is crucial for market participants to make informed decisions and improve their market positioning. Existing literature highlights out-of-sample performance as a key factor in assessing EPF accuracy, with influencing factors such as predictors, forecast horizon, model selection, and strategy. Several studies indicate that electricity demand is a primary price determinant, while renewable energy sources (RES) like wind and solar significantly impact price dynamics, often lowering prices. Additionally, incorporating data from neighboring countries, due to market coupling, further improves forecast accuracy. Most studies predict up to 24 steps ahead using hourly data, while some extend forecasts using higher-frequency data (e.g., half-hourly or quarter-hourly). Short-term EPF methods fall into two main categories: statistical and computational intelligence (CI) methods, with hybrid models combining both. While many studies use advanced statistical methods, particularly through different versions of traditional AR-type models, others apply computational techniques such as artificial neural networks (ANNs) and support vector machines (SVMs). Recent research combines multiple methods to enhance forecasting performance. Despite extensive research on EPF accuracy, a gap remains in understanding how forecasting strategy affects prediction outcomes. While iterated strategies are commonly used, they are often chosen without justification. 
This paper contributes by examining whether the choice of forecasting strategy impacts the quality of day-ahead price predictions, especially for multi-step forecasts. We evaluate both iterated and direct methods, exploring alternative ways of conducting iterated forecasts on benchmark and state-of-the-art forecasting frameworks. The goal is to assess whether these factors should be considered by end-users to improve forecast quality. We focus on the Greek DAM using data from July 1, 2021, to March 31, 2022. This period is chosen due to significant price volatility in Greece, driven by its dependence on natural gas and limited interconnection capacity with larger European grids. The analysis covers two phases: pre-conflict (January 1, 2022, to February 23, 2022) and post-conflict (February 24, 2022, to March 31, 2022), following the Russia-Ukraine conflict that initiated an energy crisis. We use the mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (sMAPE) for evaluation, as well as the Direction of Change (DoC) measure to assess the accuracy of price movement predictions. Our findings suggest that forecasters need to apply all strategies across different horizons and models. Different strategies may be required for different horizons to optimize both accuracy and directional predictions, ensuring more reliable forecasts. Keywords: short-term electricity price forecast, forecast strategies, forecast horizons, recursive strategy, direct strategy
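The three evaluation measures named above (MAPE, sMAPE, and Direction of Change) are straightforward to compute. sMAPE and DoC each have several variants in the literature, so one common form of each is assumed here, and the prices are invented:

```python
def mape(actual, forecast):
    """Mean absolute percentage error (assumes no zero actual prices)."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

def smape(actual, forecast):
    """Symmetric MAPE, one common variant: |a - f| / ((|a| + |f|) / 2)."""
    return 100.0 / len(actual) * sum(
        abs(a - f) / ((abs(a) + abs(f)) / 2) for a, f in zip(actual, forecast))

def direction_of_change(actual, forecast):
    """Share of steps where the forecast gets the sign of the price move
    (relative to the previous actual price) right."""
    hits = sum(
        1 for i in range(1, len(actual))
        if (actual[i] - actual[i - 1]) * (forecast[i] - actual[i - 1]) > 0)
    return 100.0 * hits / (len(actual) - 1)

# Made-up day-ahead prices (EUR/MWh), not Greek market data:
actual = [210.0, 245.0, 230.0, 260.0]
forecast = [200.0, 240.0, 238.0, 255.0]
print(mape(actual, forecast), smape(actual, forecast),
      direction_of_change(actual, forecast))
```

MAPE and sMAPE score the size of errors, while DoC scores only whether the predicted price movement points the right way, which matters for trading decisions.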
Procedia PDF Downloads 8
9607 The Use of Empirical Models to Estimate Soil Erosion in Arid Ecosystems and the Importance of Native Vegetation
Authors: Meshal M. Abdullah, Rusty A. Feagin, Layla Musawi
Abstract:
When humans mismanage arid landscapes, soil erosion can become a primary mechanism that leads to desertification. This study focuses on applying soil erosion models to a disturbed landscape in Umm Nigga, Kuwait, and identifying its predicted change under restoration plans. The northern portion of Umm Nigga, containing both coastal and desert ecosystems, falls within the boundaries of the Demilitarized Zone (DMZ) adjacent to Iraq and has been fenced off to restrict public access since 1994. The central objective of this project was to utilize GIS and remote sensing to compare the MPSIAC (Modified Pacific Southwest Inter-Agency Committee), EMP (Erosion Potential Method), and USLE (Universal Soil Loss Equation) soil erosion models and determine their applicability for arid regions such as Kuwait. Spatial analysis was used to develop the necessary datasets for factors such as soil characteristics, vegetation cover, runoff, climate, and topography. Results showed that the MPSIAC and EMP models produced a similar spatial distribution of erosion, though the MPSIAC had more variability. For the MPSIAC model, approximately 45% of the land surface ranged from moderate to high soil loss, while 35% did so for the EMP model. The USLE model had contrasting results and a different spatial distribution of soil loss, with 25% of the area ranging from moderate to high erosion and 75% from low to very low. We concluded that MPSIAC and EMP were the most suitable models for arid regions in general, with MPSIAC performing best. We then applied the MPSIAC model to compare the amount of soil loss between coastal and desert areas, and between fenced and unfenced sites. In the desert area, soil loss differed between fenced and unfenced sites: at the fenced desert sites, 88% of the surface was covered with vegetation and soil loss was very low, while at the unfenced desert sites vegetation cover was only 3% and soil loss correspondingly higher.
In the coastal areas, the amount of soil loss was similar between fenced and unfenced sites. These results implied that vegetation cover played an important role in reducing soil erosion, and that fencing is much more important in the desert ecosystems to protect against overgrazing. When applying the MPSIAC model predictively, we found that vegetation cover could be increased from 3% to 37% in unfenced areas, and soil erosion could then decrease by 39%. We conclude that the MPSIAC model is best suited to predicting soil erosion for arid regions such as Kuwait.Keywords: soil erosion, GIS, Modified Pacific Southwest Inter-Agency Committee model (MPSIAC), Erosion Potential Method (EMP), Universal Soil Loss Equation (USLE)
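The USLE referenced above is a simple multiplicative model, so the effect of changing vegetation cover can be sketched in a few lines. The factor values below are illustrative assumptions, not the study's site data:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: predicted annual soil loss
    A = R * K * LS * C * P (rainfall erosivity, soil erodibility,
    slope length-steepness, cover-management, support practice)."""
    return R * K * LS * C * P

# Illustrative (not site-specific) factor values: dense vegetation lowers the
# cover-management factor C, sharply reducing predicted loss, mirroring the
# fenced vs. unfenced contrast reported above.
bare = usle_soil_loss(R=300.0, K=0.4, LS=1.2, C=0.45, P=1.0)
vegetated = usle_soil_loss(R=300.0, K=0.4, LS=1.2, C=0.05, P=1.0)
print(bare, vegetated)
```

Because the factors multiply, a ninefold drop in C translates directly into a ninefold drop in predicted loss, which is why cover change dominates the predictions.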
Procedia PDF Downloads 297
9606 Removal of Heavy Metal from Wastewater using Bio-Adsorbent
Authors: Rakesh Namdeti
Abstract:
Liquid waste (wastewater) is essentially the water supply of a community after it has been used in a variety of applications. In recent years, heavy metal concentrations, besides other pollutants, have increased to reach levels dangerous for the living environment in many regions. Among the heavy metals, lead has the most damaging effects on human health. It can enter the human body through the uptake of food (65%), water (20%), and air (15%). Against this background, a low-cost and easily available biosorbent was used and reported in this study. The scope of the present study is to remove lead from aqueous solution using Olea europaea resin as a biosorbent. The results showed that the Olea europaea resin biosorbent had a high biosorption capacity for lead removal. The Langmuir, Freundlich, Tempkin, and Dubinin-Radushkevich (D-R) models were used to describe the biosorption equilibrium of lead onto the Olea europaea resin biosorbent, and the biosorption followed the Langmuir isotherm. The kinetic models showed that the pseudo-second-order rate expression represented the biosorption data well for the biosorbent.Keywords: novel biosorbent, central composite design, lead, isotherms, kinetics
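The Langmuir isotherm fit reported above can be sketched with a linearized least-squares fit. The data below are synthetic, generated from assumed illustrative parameters (qm = 50 mg/g, b = 0.2 L/mg), not the study's measurements:

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the linearized Langmuir isotherm Ce/qe = 1/(qm*b) + Ce/qm and
    return (qm, b): maximum biosorption capacity and affinity constant."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qm = 1.0 / slope
    b = 1.0 / (intercept * qm)
    return qm, b

# Synthetic equilibrium data generated from qm = 50 mg/g, b = 0.2 L/mg
# (illustrative values, not the study's measurements).
Ce = np.array([1.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium concentrations, mg/L
qe = 50.0 * 0.2 * Ce / (1.0 + 0.2 * Ce)       # uptake, mg/g
qm, b = fit_langmuir(Ce, qe)
print(round(qm, 2), round(b, 3))  # recovers 50.0 and 0.2
```

The same linear-regression pattern applies to the Freundlich and D-R models after their respective log transformations, which is how the best-fitting isotherm is usually selected.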
Procedia PDF Downloads 78
9605 Refitting Equations for Peak Ground Acceleration in Light of the PF-L Database
Authors: Matevž Breška, Iztok Peruš, Vlado Stankovski
Abstract:
A systematic overview of existing Ground Motion Prediction Equations (GMPEs) has been published by Douglas. The number of earthquake recordings used for fitting these equations has increased in the past decades; the current PF-L database contains 3550 recordings. Since GMPEs frequently model the peak ground acceleration (PGA), the goal of the present study was to refit a selection of 44 of the existing equation models for PGA in light of the latest data. The Levenberg-Marquardt algorithm was used for fitting the coefficients of the equations, and the results are evaluated both quantitatively, by presenting the root mean squared error (RMSE), and qualitatively, by drawing graphs of the five best-fitted equations. The RMSE was found to be as low as 0.08 for the best equation models. The newly estimated coefficients vary from the values published in the original works.Keywords: ground motion prediction equations, Levenberg-Marquardt algorithm, refitting, PF-L database, peak ground acceleration
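A minimal Levenberg-Marquardt loop for a GMPE-style model can be sketched as follows. The functional form and synthetic data are illustrative assumptions, not the PF-L equations or recordings:

```python
import numpy as np

def gmpe(theta, M, R):
    """Illustrative GMPE functional form: ln(PGA) = a + b*M + c*ln(R)."""
    a, b, c = theta
    return a + b * M + c * np.log(R)

def levenberg_marquardt(theta, M, R, y, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop; for this linear-in-parameters
    model the Jacobian columns are simply [1, M, ln(R)]."""
    J = np.column_stack([np.ones_like(M), M, np.log(R)])
    for _ in range(n_iter):
        r = y - gmpe(theta, M, R)
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
        if np.sum((y - gmpe(theta + step, M, R)) ** 2) < np.sum(r ** 2):
            theta, lam = theta + step, lam * 0.5   # accept: relax damping
        else:
            lam *= 2.0                             # reject: damp harder
    return theta

rng = np.random.default_rng(0)
M = rng.uniform(4.0, 7.5, 200)                 # magnitudes
R = rng.uniform(5.0, 150.0, 200)               # source distances, km
y = gmpe(np.array([-2.0, 0.9, -1.1]), M, R)    # synthetic noise-free "recordings"
theta = levenberg_marquardt(np.zeros(3), M, R, y)
rmse = np.sqrt(np.mean((y - gmpe(theta, M, R)) ** 2))
print(np.round(theta, 3), round(rmse, 6))
```

The adaptive damping term `lam` is what distinguishes Levenberg-Marquardt from plain Gauss-Newton: it interpolates between gradient descent (large `lam`) and Gauss-Newton steps (small `lam`), which stabilizes the fit for the nonlinear attenuation forms used in real GMPEs.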
Procedia PDF Downloads 462
9604 Thermal Behaviors of the Strong Form Factors of Charmonium and Charmed Beauty Mesons from Three Point Sum Rules
Authors: E. Yazıcı, H. Sundu, E. Veli Veliev
Abstract:
In order to understand the nature of strong interactions and the QCD vacuum, investigation of meson coupling constants plays an important role. Knowledge of the temperature dependence of the form factors is very important for the interpretation of heavy-ion collision experiments, and more accurate determination of these coupling constants plays a crucial role in understanding hadronic decays. With increasing center-of-mass energies of the experiments, research on meson interactions has become one of the more interesting problems of hadronic physics. In this study, we analyze the temperature dependence of the strong form factor of the BcBcJ/ψ vertex using the three-point QCD sum rules method. Here, we assume that, after replacing the vacuum condensates and the continuum threshold by their thermal versions, the sum rules for the observables remain valid. In the calculations, we take into account the additional operators that appear in the Wilson expansion at finite temperature. We also investigated the momentum dependence of the form factor at T = 0, fit it to an analytic function, and extrapolated into the deep time-like region in order to obtain the strong coupling constant of the vertex. Our results are consistent with those existing in the literature.Keywords: QCD sum rules, thermal QCD, heavy mesons, strong coupling constants
Procedia PDF Downloads 189
9603 Finite Element Modeling Techniques of Concrete in Steel and Concrete Composite Members
Authors: J. Bartus, J. Odrobinak
Abstract:
The paper presents a nonlinear 3D finite element model of composite steel and concrete beams with web openings, analyzed using the Finite Element Method (FEM). The core of the study is the introduction of basic modeling techniques, comprising the description of material behavior, appropriate element selection, and recommendations for overcoming problems with convergence. Results from various finite element models are compared in the study. The main objective is to observe the concrete failure mechanism and its influence on the structural performance of numerical models of the beams at particular load stages. The bearing capacity of the beams and the corresponding deformations, stresses, strains, and fracture patterns were determined. The results show how load-bearing elements consisting of concrete parts can be analyzed using FEM software with various options to create the most suitable numerical model. The paper demonstrates the versatility of Ansys software for structural simulations.Keywords: Ansys, concrete, modeling, steel
Procedia PDF Downloads 121
9602 Generalization of Zhou Fixed Point Theorem
Authors: Yu Lu
Abstract:
Fixed point theory is a basic tool for the study of the existence of Nash equilibria in game theory. This paper presents a significant generalization of the Veinott-Zhou fixed point theorem for increasing correspondences, which serves as an essential framework for investigating the existence of Nash equilibria in supermodular and quasisupermodular games. To establish our proofs, we explore different conceptions of multivalued increasingness and provide comprehensive results concerning the existence of the largest/least fixed point. We provide two distinct approaches to the proof, each offering unique insights and advantages. These advancements not only extend the applicability of the Veinott-Zhou theorem to a broader range of economic scenarios but also enhance the theoretical framework for analyzing equilibrium behavior in complex game-theoretic models. Our findings pave the way for future research in the development of more sophisticated models of economic behavior and strategic interaction.Keywords: fixed-point, Tarski’s fixed-point theorem, Nash equilibrium, supermodular game
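The role of monotone (increasing) maps in fixed-point results of this kind can be illustrated with Kleene-style iteration on a finite lattice. This is a toy powerset example, not the paper's correspondence setting:

```python
def least_fixed_point(f, bottom):
    """Kleene-style iteration: repeatedly apply a monotone map starting from
    the bottom element; on a finite lattice this reaches the least fixed point."""
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

# Monotone map on the powerset lattice of {1, 2, 3}, ordered by inclusion:
# always add 1, and add 2 whenever 1 is already present.
f = lambda s: s | {1} | ({2} if 1 in s else set())
print(least_fixed_point(f, set()))
```

Monotonicity guarantees the iterates form an increasing chain, which must stabilize on a finite lattice; the Veinott-Zhou setting extends this intuition from single-valued maps to increasing correspondences.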
Procedia PDF Downloads 55
9601 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad S. Daba, J. P. Dubois
Abstract:
Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke's model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.Keywords: Doppler shift, filtered Poisson process, generalized Clarke's model, non-isotropic scattering, partially developed scattering, Rician distribution
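Under the classical isotropic Clarke model, the Doppler shift is fd·cos(θ) for a uniform angle of arrival θ, giving the well-known "bathtub" density; a Monte Carlo sketch (illustrative parameters, not the paper's generalized model) checks the resulting CDF:

```python
import numpy as np

fd = 100.0  # maximum Doppler shift in Hz (illustrative carrier/velocity choice)

def clarke_cdf(x, fd):
    """Analytic CDF of the Doppler shift under isotropic (Clarke) scattering:
    P(f <= x) = 1 - arccos(x/fd)/pi, the integral of the classic 'bathtub'
    density 1 / (pi * sqrt(fd**2 - f**2)) on (-fd, fd)."""
    return 1.0 - np.arccos(x / fd) / np.pi

# Monte Carlo check: with a uniform angle of arrival, the shift is fd*cos(theta).
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 200_000)
doppler = fd * np.cos(theta)
empirical = np.mean(doppler <= fd / 2.0)
print(round(empirical, 3), round(clarke_cdf(fd / 2.0, fd), 3))
```

The non-isotropic and partially developed models in the paper generalize this by weighting the angle-of-arrival distribution and adding a dominant line-of-sight component, which reshapes the density away from the symmetric bathtub.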
Procedia PDF Downloads 372
9600 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedures, and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-Stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called ensembling further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk for higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model alone does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and to build a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
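The MELD comparator mentioned above is a closed-form score. A sketch of the classic (pre-2016) formula, with the conventional flooring and capping of lab values, is below; the coefficients are the widely published ones, and the example inputs are illustrative:

```python
import math

def meld_score(bilirubin, inr, creatinine):
    """Classic (pre-2016) MELD score, used here as the comparator:
    3.78*ln(bilirubin) + 11.2*ln(INR) + 9.57*ln(creatinine) + 6.43,
    with lab values floored at 1.0 and creatinine capped at 4.0 per the
    standard convention (units: mg/dL for bilirubin and creatinine)."""
    bilirubin = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    creatinine = min(max(creatinine, 1.0), 4.0)
    return (3.78 * math.log(bilirubin)
            + 11.2 * math.log(inr)
            + 9.57 * math.log(creatinine)
            + 6.43)

print(round(meld_score(bilirubin=2.5, inr=1.8, creatinine=1.4), 1))
```

Because MELD uses only three labs, richer feature spaces like the one analyzed in this work have substantial headroom to improve on it, which is what the reported AUC gain reflects.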
Procedia PDF Downloads 134
9599 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Authors: Analise Borg, Paul Micallef
Abstract:
Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying ways to organize this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market which are capable of identifying a piece of music in a short time. Different audio effects and degradations make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric based algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel Spectrum Coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that the non-parametric analysis offers results comparable to those reported in the literature.Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7
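The parametric GMM analysis mentioned above can be sketched with a minimal two-component EM loop in one dimension (synthetic data, not MFCC features; a real system would fit multivariate mixtures per track):

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """Minimal EM loop for a two-component 1-D Gaussian mixture:
    the E-step computes responsibilities, the M-step re-estimates
    weights, means, and variances."""
    mu = np.array([x.min(), x.max()])          # crude initialisation
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        d2 = (x[:, None] - mu[None, :]) ** 2
        p = w * np.exp(-0.5 * d2 / var) / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n
        w = n / len(x)
    return w, mu, var

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-3.0, 1.0, 1000), rng.normal(4.0, 1.0, 1000)])
w, mu, var = fit_gmm_1d(x)
print(np.round(np.sort(mu), 1))
```

The non-parametric alternative in the paper sidesteps this density estimation entirely by mapping coefficients to bin numbers, trading model expressiveness for robustness to degradations.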
Procedia PDF Downloads 422
9598 Serum Neurotrophins in Different Metabolic Types of Obesity
Authors: Irina M. Kolesnikova, Andrey M. Gaponov, Sergey A. Roumiantsev, Tatiana V. Grigoryeva, Alexander V. Laikov, Alexander V. Shestopalov
Abstract:
Background. Neuropathy is a common complication of obesity. In this regard, the content of neurotrophins in such patients is of particular interest. Neurotrophins are proteins that regulate neuron survival and neuroplasticity; they include brain-derived neurotrophic factor (BDNF) and nerve growth factor (NGF). However, the risk of complications depends on the metabolic type of obesity. Metabolically unhealthy obesity (MUHO) is associated with a high risk of complications, while this is not the case with metabolically healthy obesity (MHO). Therefore, the aim of our work was to study the effect of the obesity metabolic type on serum neurotrophin levels. Patients, materials, methods. The study included 134 healthy donors and 104 obese patients. Depending on the metabolic type of obesity, the obese patients were divided into subgroups with MHO (n=40) and MUHO (n=55). The serum concentrations of BDNF and NGF were determined. In addition, the content of adipokines (leptin, asprosin, resistin, adiponectin), myokines (irisin, myostatin, osteocrin), and indicators of carbohydrate and lipid metabolism were measured. Correlation analysis revealed relationships between the studied parameters. Results. We found that serum BDNF concentration did not differ between obese patients and healthy donors, regardless of obesity metabolic type. At the same time, obese patients showed a decrease in serum NGF level versus controls. A similar trend was characteristic of both MHO and MUHO. However, MUHO patients had a higher NGF level than MHO patients. The literature indicates that obesity is associated with an increase in the plasma concentration of NGF. It can be assumed that in obesity, NGF storage in platelets is impaired, which accelerates neurotrophin degradation. We found that BDNF concentration correlated with irisin levels in MUHO patients. Healthy donors had a weak association between NGF and VEGF levels. 
No such association was found in obese patients, but there was an association between NGF and leptin concentrations. In MHO, the concentration of NGF correlated with the content of leptin, irisin, osteocrin, insulin, and the HOMA-IR index, but in MUHO patients we found only relationships between NGF and adipokines (leptin, asprosin). It can be assumed that in patients with MHO, the replenishment of serum NGF occurs under the influence of muscle and adipose tissue, whereas in the MUHO patients only the effect of adipose tissue on NGF was observed. Conclusion. Obesity, regardless of metabolic type, is associated with a decrease in serum NGF concentration. We showed that muscle and adipose tissues make a significant contribution to the serum NGF pool in the MHO patients. In MUHO, muscle has no effect on the NGF level, but the effect of adipose tissue remains.Keywords: neurotrophins, nerve growth factor, NGF, brain-derived neurotrophic factor, BDNF, obesity, metabolically healthy obesity, metabolically unhealthy obesity
Procedia PDF Downloads 100
9597 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research will evaluate the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better they are than their classical counterparts. Only a few models, such as quantum variational circuits, accept colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale 28 × 28-pixel images, and 10,000 test and 50,000 training images were used. 
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
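The rotation-based encoding step described above can be sketched for a single qubit with plain linear algebra. This is a numpy illustration of angle encoding, not the authors' PennyLane circuit:

```python
import numpy as np

def ry_encode(pixel):
    """Angle-encode a pixel intensity in [0, 1] as an RY rotation applied to
    |0>, giving the state [cos(theta/2), sin(theta/2)] with theta = pi*pixel."""
    theta = np.pi * pixel
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def z_expectation(state):
    """Expectation of the Pauli-Z observable for the encoded qubit."""
    return abs(state[0]) ** 2 - abs(state[1]) ** 2

# A black pixel (0.0) encodes to <Z> = +1 and a white pixel (1.0) to <Z> = -1;
# intermediate intensities interpolate as cos(pi * pixel).
print(z_expectation(ry_encode(0.0)), z_expectation(ry_encode(1.0)))
```

In the hybrid pipeline, these measured expectations (after trainable rotations) become the classical features fed to the downstream classifier.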
Procedia PDF Downloads 129
9596 Dynamical Models for Environmental Effect Depuration for Structural Health Monitoring of Bridges
Authors: Francesco Morgan Bono, Simone Cinquemani
Abstract:
This research aims to enhance bridge monitoring by employing innovative techniques that incorporate exogenous factors into the modeling of sensor signals, thereby improving long-term predictability beyond traditional static methods. Using real datasets from two different bridges equipped with Linear Variable Displacement Transducer (LVDT) sensors, the study investigates the fundamental principles governing sensor behavior for more precise long-term forecasts. Additionally, the research evaluates performance on noisy and synthetically damaged data, proposing a residual-based alarm system to detect anomalies in the bridge. In summary, this novel approach combines advanced modeling, exogenous factors, and anomaly detection to extend prediction horizons and improve preemptive damage recognition, significantly advancing structural health monitoring practices.Keywords: structural health monitoring, dynamic models, SINDy, railway bridges
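A residual-based alarm of the kind proposed above can be sketched as a simple threshold rule. The 3-sigma threshold, the sensor model, and the synthetic damage offset below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def residual_alarm(measured, predicted, healthy_window=100, k=3.0):
    """Flag samples whose residual exceeds k standard deviations of the
    residuals observed in an assumed-healthy calibration window
    (a common 3-sigma rule; the threshold choice is illustrative)."""
    residual = measured - predicted
    sigma = np.std(residual[:healthy_window])
    return np.abs(residual) > k * sigma

rng = np.random.default_rng(3)
predicted = np.sin(np.linspace(0.0, 20.0, 300))    # model output for the sensor
measured = predicted + rng.normal(0.0, 0.05, 300)  # healthy sensor readings
measured[250:] += 0.5                              # synthetic damage: a step offset
alarms = residual_alarm(measured, predicted)
print(alarms[:250].sum(), alarms[250:].sum())
```

The quality of the prediction model is what makes this work: the better the exogenous factors (temperature, traffic) are depurated from the signal, the smaller the healthy residual band and the earlier a genuine anomaly crosses the threshold.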
Procedia PDF Downloads 39
9595 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE and TransR, along with other prominent industry standards, have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the model's performance on the WN18 benchmark. The model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
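The TransE baseline mentioned above scores a triple by how well the relation acts as a translation between entity embeddings; a toy sketch follows (hand-picked vectors for illustration, not trained embeddings):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: the distance ||h + r - t||; for a true
    triple (head, relation, tail) the relation acts as a translation and
    the score is near zero."""
    return np.linalg.norm(h + r - t)

# Hand-picked 3-d embeddings (illustrative, not trained on WN18/FB15K).
paris = np.array([1.0, 0.0, 0.0])
france = np.array([1.0, 1.0, 0.0])
tokyo = np.array([0.0, 0.0, 1.0])
capital_of = np.array([0.0, 1.0, 0.0])   # relation modeled as a translation vector

true_score = transe_score(paris, capital_of, france)
false_score = transe_score(paris, capital_of, tokyo)
print(true_score, round(false_score, 2))
```

The pure-translation assumption is exactly what limits TransE on one-to-many and hierarchical relations, which motivates both TransR's relation-specific projections and the topological mapping framework proposed here.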
Procedia PDF Downloads 68
9594 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
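The Diffusion Map construction described above (Gaussian kernel, Markov normalization, eigenvectors of the resulting operator) can be sketched in a few lines of numpy. The toy two-cluster data and kernel bandwidth are illustrative assumptions:

```python
import numpy as np

def diffusion_map(X, eps=1.0, k=1):
    """Minimal diffusion map sketch: Gaussian kernel on pairwise distances,
    normalized to a Markov (diffusion) operator via the symmetric conjugate
    D^{-1/2} K D^{-1/2}, then embedded with the leading nontrivial eigenvectors."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-D2 / eps)
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))      # symmetric, same spectrum as D^-1 K
    vals, vecs = np.linalg.eigh(A)       # eigenvalues in ascending order
    psi = vecs / np.sqrt(d)[:, None]     # back to eigenvectors of the Markov matrix
    # drop the trivial eigenvalue-1 eigenvector; keep the next k coordinates
    return psi[:, -2:-2 - k:-1] * vals[-2:-2 - k:-1]

# Two well-separated clusters: the first diffusion coordinate splits them by sign.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.1, (20, 3)), rng.normal(2.0, 0.1, (20, 3))])
emb = diffusion_map(X, eps=2.0, k=1)
signs = np.sign(emb[:, 0])
print(len(set(signs[:20])), len(set(signs[20:])))
```

In the proposed hybrid pipeline, a DPM would then be trained on these low-dimensional diffusion coordinates, and generated samples mapped back to the ambient space consistently with the learned manifold.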
Procedia PDF Downloads 108
9593 A Comparative Analysis of Safety Orientation and Safety Performance in Organizations: A Project Management Perspective
Authors: Dina Alfreahat, Zoltan Sebestyen
Abstract:
Safety is considered one of a project's success factors. Poor safety management may result in accidents that have human, economic, and legal consequences. Therefore, it is necessary to consider safety and health as a project success factor along with other project success factors, such as time, cost, and quality. Organizations have a knowledge deficit regarding the implementation of long-term safety practices, and due to cost control, safety problems tend to receive the least priority. They usually assume that safety management involves expenditures unrelated to production goals, thereby considering it unnecessary for profitability and competitiveness. The purpose of this study is to introduce, analyze, and identify the correlation between the orientation of an organization's public safety procedures and the public safety standards applied in the project. To this end, the authors develop the process and select the mathematical-statistical tools supporting the previously mentioned goal. The results show that management's adoption of safety is a major factor in implementing safety standards in the project and thereby improving safety performance. It may take time and effort to adopt a safety-oriented mindset, but at the same time, higher organizational investment in safety and health programs will contribute to staff loyalty to safety compliance.Keywords: project management perspective, safety orientation, safety performance, safety standards
Procedia PDF Downloads 180