Search results for: Hydrodynamic modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4193

1013 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Because of this critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from a lack of locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points because of the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, which effectively capture long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables powerful computer vision techniques to be used to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets, and the results were compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
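The abstract does not give implementation details for the two 2D views, but the idea of pairing a frequency-domain spectrogram with a time-domain derivative heatmap can be sketched as follows (window size, hop, and derivative order are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def spectrogram_2d(x, win=32, hop=16):
    """Frequency-domain 2D view: magnitude of a short-time Fourier transform."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T   # (freq, time)

def derivative_heatmap(x, max_order=2):
    """Time-domain 2D view: stacked finite-difference derivatives."""
    rows = [np.gradient(x)]
    for _ in range(max_order - 1):
        rows.append(np.gradient(rows[-1]))
    return np.stack(rows)                                    # (order, time)

t = np.linspace(0, 8 * np.pi, 256)
x = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
S = spectrogram_2d(x)        # periodicity shows up as a bright frequency row
D = derivative_heatmap(x)    # sharp fluctuations show up as large derivatives
```

Either 2D array can then be fed to standard image-style (convolutional) backbones, which is the point of the transformation.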

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 42
1012 Creative Element Analysis of Machinery Creativity Contest Works

Authors: Chin-Pin Chen, Shi-Chi Shiao, Ting-Hao Lin

Abstract:

Industry today faces rapid global technological development and fierce changes in the economic environment, so the industry development trend is gradually shifting away from labor and instead leads both industry and academia through innovation and creativity. The machinery industry shows the same trend. Based on the aims of the Creativity White Paper, the Ministry of Education in Taiwan promotes various creativity contests to cope with this industry trend. Domestic students and enterprises have performed well in domestic and international creativity contests in recent years. Works that win awards among so many entries must contain important creative elements. A literature review and in-depth interviews with five instructors of award-winning creativity contest teams were first carried out to conclude 15 machinery creative elements, which were then compared with the creative elements of award-winning machinery works from the past five years to understand the relationship between awarded works and creative elements. The statistical analysis shows that IDEA (Industrial Design Excellence Award) contains the most creative elements among the four major international creativity contests; that is, its creativity review focuses on creative elements and is comparatively stricter. Concerning the groups participating in creativity contests, enterprises consider more creative elements in their works than the other two groups. Among the contest works, the creative elements of “replacement or improvement”, “convenience”, and “modeling” show higher significance. These findings could provide domestic colleges and universities with a reference for participating in creativity-related contests in the future.

Keywords: machinery, creative elements, creativity contest, creativity works

Procedia PDF Downloads 442
1011 Generalized Linear Modeling of HCV Infection Among Medical Waste Handlers in Sidama Region, Ethiopia

Authors: Birhanu Betela Warssamo

Abstract:

Background: There is limited evidence on the prevalence and risk factors for hepatitis C virus (HCV) infection among waste handlers in the Sidama region, Ethiopia; however, this knowledge is necessary for the effective prevention of HCV infection in the region. Methods: A cross-sectional study was conducted among randomly selected waste collectors from October 2021 to 30 July 2022 in different public hospitals in the Sidama region of Ethiopia. Serum samples were collected from participants and screened for anti-HCV using a rapid immunochromatography assay. Socio-demographic and risk factor information of waste handlers was gathered by pretested and well-structured questionnaires. A generalized linear model (GLM) was fitted using R software, and P < 0.05 was declared statistically significant. Results: Of 282 participating waste handlers, 16 (5.7%; 95% CI, 4.2–8.7) were infected with the hepatitis C virus. Educational status was the significant demographic variable associated with HCV infection (AOR = 0.055; 95% CI = 0.012–0.248; P < 0.001). More married waste handlers (12; 75%) were HCV positive than unmarried ones (4; 25%), and married waste handlers were 2.051 times more prone to HCV infection than unmarried ones (OR = 2.051; 95% CI = 0.644–6.527; P = 0.295), although this association was statistically insignificant. The GLM showed that exposure to blood (OR = 8.26; 95% CI = 1.878–10.925; P = 0.037), multiple sexual partners (AOR = 3.63; 95% CI = 2.751–5.808; P = 0.001), sharp injury (AOR = 2.77; 95% CI = 2.327–3.173; P = 0.036), not using PPE (AOR = 0.77; 95% CI = 0.032–0.937; P = 0.001), contact with a jaundiced patient (AOR = 3.65; 95% CI = 1.093–4.368; P = 0.0048), and unprotected sex (AOR = 11.91; 95% CI = 5.847–16.854; P = 0.001) remained statistically significantly associated with HCV positivity.
Conclusions: The study revealed that there was a high prevalence of hepatitis C virus infection among waste handlers in the Sidama region, Ethiopia. This demonstrated that there is an urgent need to increase preventative efforts and strategic policy orientations to control the spread of the hepatitis C virus.

Keywords: Hepatitis C virus, risk factors, waste handlers, prevalence, Sidama, Ethiopia

Procedia PDF Downloads 14
1010 Spirometric Reference Values in 236,606 Healthy, Non-Smoking Chinese Aged 4–90 Years

Authors: Jiashu Shen

Abstract:

Objectives: Spirometry is a basic reference for health evaluation that is widely used in clinical practice. Previous spirometric references are no longer applicable because of drastic changes in social and natural circumstances in China, so new reference values for the spirometry of the Chinese population are urgently needed. Method: Spirometric reference values were established using the statistical modeling method Generalized Additive Models for Location, Scale and Shape (GAMLSS) for forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), FEV1/FVC, and maximal mid-expiratory flow (MMEF). Results: Data from 236,606 healthy non-smokers aged 4–90 years were collected from the MJ Health Check database. Spirometry equations for FEV1, FVC, MMEF, and FEV1/FVC were established, including the predicted values and lower limits of normal (LLNs) by sex. The predictive equations developed for the spirometric results elaborate the relationship between spirometry and age, and they eliminate the effect of height as a variable. Compared with this study's measurements, most previous predictive equations for Chinese spirometry significantly overestimated lung function (mean differences of 22.21% in FEV1 and 31.39% in FVC for males, along with 26.93% in FEV1 and 35.76% in FVC for females) or underestimated it (mean differences of -5.81% in MMEF and -14.56% in FEV1/FVC for males, along with -14.54% in FEV1/FVC for females). Through cross-validation, our equations were shown to fit well, and the means of the measured and estimated values compared favorably. Conclusions: Our study updates the spirometric reference equations for Chinese people of all ages and provides comprehensive values for both physical examination and clinical diagnosis.
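GAMLSS reference equations of this kind typically reduce, at a given sex and age, to median (M), coefficient-of-variation (S), and skewness (L) curves: the predicted value is M, and the LLN is the 5th percentile. A sketch of that final percentile step under the standard LMS (Box-Cox) form — an assumption about the model family, not the authors' exact equations, with made-up parameter values:

```python
import math

def lln_lms(M, L, S, z=-1.645):
    """5th-percentile lower limit of normal under the LMS (Box-Cox) model."""
    if abs(L) < 1e-9:                      # L -> 0 limit is log-normal
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Illustrative median FEV1 of 4.0 L with L = 1, S = 0.12:
print(round(lln_lms(4.0, 1.0, 0.12), 3))
```

A measurement below the LLN, rather than below a fixed fraction of predicted, is what sex-specific LLN tables like those in this study support.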

Keywords: Chinese, GAMLSS model, reference values, spirometry

Procedia PDF Downloads 136
1009 Method to Find a ε-Optimal Control of Stochastic Differential Equation Driven by a Brownian Motion

Authors: Francys Souza, Alberto Ohashi, Dorival Leao

Abstract:

We present a general solution for finding ε-optimal controls for non-Markovian stochastic systems given by stochastic differential equations driven by Brownian motion, a problem recognized as difficult to solve. The contribution lies in the development of mathematical tools for the modeling and control of non-Markovian systems, whose applicability in different areas is well known. The methodology consists of discretizing the problem through a random discretization. In this way, we transform an infinite-dimensional problem into a finite-dimensional one, and thereafter we use measurable selection arguments to find a control in explicit form for the discretized problem. We then prove that the control found for the discretized problem is an ε-optimal control for the original problem. Our theory provides a concrete description of a rather general class of problems; among the principal applications we can highlight financial problems such as portfolio control, hedging, super-hedging, pairs trading, and others. Therefore, our main contribution is the development of a tool to exhibit explicitly the ε-optimal control for non-Markovian stochastic systems. The pathwise analysis, made through a random discretization jointly with measurable selection arguments, has provided us with a structure to transform an infinite-dimensional problem into a finite-dimensional one. The theory is applied to stochastic control problems based on path-dependent stochastic differential equations, where both drift and diffusion components are controlled. With our method, we are able to exhibit the optimal control explicitly.
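In schematic form (our notation, not the authors' exact operators), the discretization reduces the control problem to a finite-horizon dynamic programming recursion:

```latex
V_k(x) = \sup_{u \in U} \, \mathbb{E}\!\left[ V_{k+1}\bigl(X^{u}_{k+1}\bigr) \,\middle|\, X_k = x \right],
\qquad V_n(x) = \xi(x),
```

and a measurable selector $u_k(x)$ attaining each supremum up to $\varepsilon/n$ yields an $\varepsilon$-optimal control for the discretized problem; the paper's contribution is showing this control remains $\varepsilon$-optimal for the original, non-Markovian problem.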

Keywords: dynamic programming equation, optimal control, stochastic control, stochastic differential equation

Procedia PDF Downloads 188
1008 Numerical Study on Response of Polymer Electrolyte Fuel Cell (PEFCs) with Defects under Different Load Conditions

Authors: Muhammad Faizan Chinannai, Jaeseung Lee, Mohamed Hassan Gundu, Hyunchul Ju

Abstract:

The fuel cell is known to be an effective clean energy converter that is now being commercialized. It is important to understand how performance changes when the system develops defects. This study was carried out to analyze the performance of polymer electrolyte fuel cells (PEFCs) under different operating conditions, such as current density, relative humidity, and Pt loading, considering defects together with load changes. The purpose of this study is to analyze the response of the fuel cell system to defects in the balance of plant (BOP) and catalyst layer (CL) degradation while maintaining the coolant flow rate so as to preserve the cell temperature at the required level. A multi-scale simulation of a 3D two-phase PEFC model with coolant was carried out under different load conditions. For detailed analysis and performance comparison, extensive contours of temperature, current density, water content, and relative humidity are provided. The simulation results of the different cases are compared with reference data. Hence, the response of the fuel cell stack to defects in the BOP and CL degradation can be analyzed through the temperature difference between the coolant outlet and the membrane electrode assembly. The results showed that failure of the humidifier increases the high-frequency resistance (HFR), while air flow defects and CL degradation result in a non-uniform current density distribution and a high cathode activation overpotential, respectively.

Keywords: PEM fuel cell, fuel cell modeling, performance analysis, BOP components, current density distribution, degradation

Procedia PDF Downloads 214
1007 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite

Abstract:

Wind speed forecasting is an important issue in planning wind power generation facilities. Accurate wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks (ANNs) is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months at ten-minute intervals. This dataset was used in an iterative algorithm to create 1,110 ANNs with different configurations: from one to three hidden layers, with each hidden layer containing from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the root mean squared error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City, because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
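The 1,110 figure follows directly from enumerating every architecture with one to three hidden layers and 1 to 10 neurons per layer; a sketch of that enumeration (the training step itself, with Levenberg-Marquardt, is omitted):

```python
from itertools import product

def layer_configs(max_layers=3, max_neurons=10):
    """All hidden-layer architectures: 1..max_layers layers, 1..max_neurons each."""
    configs = []
    for depth in range(1, max_layers + 1):
        configs.extend(product(range(1, max_neurons + 1), repeat=depth))
    return configs

configs = layer_configs()
print(len(configs))   # 10 + 10**2 + 10**3 = 1110, matching the 1,110 ANNs trained
```

The winning (9, 6, 5) architecture reported above is simply one element of this search space.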

Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination

Procedia PDF Downloads 124
1006 Evaluation of Prestressed Reinforced Concrete Slab Punching Shear Using Finite Element Method

Authors: Zhi Zhang, Liling Cao, Seyedbabak Momenzadeh, Lisa Davey

Abstract:

Reinforced concrete (RC) flat slab-column systems are commonly used in residential and office buildings, as the flat slab provides efficient clearance, resulting in more stories at a given height than a regular reinforced concrete beam-slab system. Punching shear of slab-column joints is a critical component of two-way reinforced concrete flat slab design. The unbalanced moment at the joint is transferred via slab moment and shear forces. ACI 318 provides an equation to evaluate the punching shear under the design load. It is important to note that the design code considers gravity and environmental loads in the design load combinations but does not consider the effect of differential foundation settlement, which may be a governing load condition for the slab design. This paper describes how prestressed reinforced concrete slab punching shear is evaluated based on ACI 318 provisions and finite element analysis. A prestressed reinforced concrete slab under differential settlements is studied using a finite element modeling methodology. The punching shear check equation is explained. The methodology to extract data for the punching shear check from the finite element model is described and correlated with the corresponding code provisions. The study indicates that finite element analysis results should be carefully reviewed and processed in order to perform an accurate punching shear evaluation. Conclusions are drawn from the case studies to help engineers understand punching shear behavior in prestressed and non-prestressed reinforced concrete slabs.
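For orientation, the ACI 318 two-way shear check combines direct shear with the fraction of unbalanced moment transferred by eccentric shear. A simplified sketch for an interior column in psi units, keeping only the simplest of the code's concrete stress limits (the gamma_v fraction, section properties, and all input values below are illustrative, not from the paper's case studies):

```python
import math

def punching_shear_check(Vu, Mu, b0, d, c_ab, Jc, fc_psi, gamma_v=0.4, phi=0.75):
    """Simplified ACI 318-style two-way shear check (interior column, psi units).
    Vu [lb], Mu [lb-in]; b0, d, c_ab [in]; Jc [in^4]. All inputs illustrative."""
    vu = Vu / (b0 * d) + gamma_v * Mu * c_ab / Jc   # max combined shear stress
    vc = 4.0 * math.sqrt(fc_psi)                    # simplest concrete stress limit
    return vu, phi * vc, vu <= phi * vc

vu, limit, ok = punching_shear_check(Vu=80_000, Mu=200_000, b0=80, d=6,
                                     c_ab=10, Jc=100_000, fc_psi=4_000)
print(f"vu = {vu:.1f} psi, phi*vc = {limit:.1f} psi, OK = {ok}")
```

In the paper's workflow, Vu and Mu at the critical section would be extracted from the finite element model rather than assumed.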

Keywords: differential settlement, finite element model, prestressed reinforced concrete slab, punching shear

Procedia PDF Downloads 130
1005 A Human Centered Design of an Exoskeleton Using Multibody Simulation

Authors: Sebastian Kölbl, Thomas Reitmaier, Mathias Hartmann

Abstract:

Trial-and-error approaches to adapting wearable support structures to human physiology are time-consuming and elaborate. During preliminary design, the focus lies on understanding the interaction between the exoskeleton and the human body in terms of forces and moments, namely body mechanics. For the study at hand, a multibody simulation approach has been enhanced to evaluate actual forces and moments in a human dummy model with and without a digital mock-up of an active exoskeleton. To this end, different motion data have been gathered and processed to perform a musculoskeletal analysis. The motion data comprise ground reaction forces, electromyography (EMG) data, and human motion recorded with a marker-based motion capture system. Based on the experimental data, the response of the human dummy model has been calibrated. Subsequently, the scalable human dummy model, in conjunction with the motion data, is connected with the exoskeleton structure. The outputs of the human-machine interaction (HMI) simulation platform are, in particular, the resulting contact forces and human joint forces, which are compared with admissible values with regard to human physiology. Furthermore, it provides feedback for the sizing of the exoskeleton structure in terms of resulting interface forces (stress justification) and the effect of its compliance. A stepwise approach to the setup and validation of the modeling strategy is presented, and the potential for a more time- and cost-effective development of wearable support structures is outlined.
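As a toy illustration of the inverse-dynamics step — recovering the joint torque from measured motion — consider a single rigid link pivoting at one end (a deliberate simplification of the full musculoskeletal model; the segment parameters below are made up):

```python
import math

def joint_torque(m, l, I, theta, alpha, g=9.81):
    """Inverse dynamics of one rigid link pivoted at its end:
    tau = I*alpha + m*g*(l/2)*cos(theta), gravity acting at the midpoint."""
    return I * alpha + m * g * (l / 2) * math.cos(theta)

m, l = 4.0, 0.5                 # made-up limb segment: 4 kg, 0.5 m
I = m * l**2 / 3                # slender-rod inertia about the pivot
tau = joint_torque(m, l, I, theta=0.0, alpha=2.0)
print(round(tau, 3))            # torque the joint (or exoskeleton) must supply
```

A multibody code solves the same balance for every joint of the coupled human-exoskeleton chain simultaneously, which is why the calibrated dummy model is needed.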

Keywords: assistive devices, ergonomic design, inverse dynamics, inverse kinematics, multibody simulation

Procedia PDF Downloads 162
1004 Mental Health Challenges, Internalizing and Externalizing Behavior Problems, and Academic Challenges among Adolescents from Broken Families

Authors: Fadzai Munyuki

Abstract:

Parental divorce is one of the most stressful life events for youth and is associated with long-lasting emotional and behavioral problems. Over the last few decades, research has consistently found strong associations between divorce and adverse health effects in adolescents. Parental divorce has been hypothesized to lead to psychosocial development problems, mental health challenges, internalizing and externalizing behavior problems, and low academic performance among adolescents. This is supported by positive youth development theory, which states that the family setup has a major role to play in adolescent development and well-being. The focus of this research will therefore be to test this hypothesized process model among adolescents in five provinces in Zimbabwe. A cross-sectional study will be conducted to test this hypothesis, employing 1,840 (n = 1,840) adolescents aged 14 to 17. A stress questionnaire scale, a Child Behavior Checklist scale, and an academic concept scale will be used for this study. Data analysis will be done using structural equation modeling. Prior research has many limitations, including the lack of a 'real-time' study, few cross-sectional studies, the lack of a thorough and validated population measure, and many studies that have focused on only one variable in relation to parental divorce. This study therefore seeks to bridge the gap between past research and current literature by using a validated population measure and a real-time study and by combining three latent variables.

Keywords: mental health, internalizing and externalizing behavior, divorce, academic achievements

Procedia PDF Downloads 77
1003 Non-Linear Assessment of Chromatographic Lipophilicity of Selected Steroid Derivatives

Authors: Milica Karadžić, Lidija Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Anamarija Mandić, Aleksandar Oklješa, Andrea Nikolić, Marija Sakač, Katarina Penov Gaši

Abstract:

Using a chemometric approach, the relationships between the chromatographic lipophilicity and in silico molecular descriptors of twenty-nine selected steroid derivatives were studied. The chromatographic lipophilicity was predicted using the artificial neural network (ANN) method. The most important in silico molecular descriptors were selected by applying stepwise selection (SS) paired with the partial least squares (PLS) method. Molecular descriptors with satisfactory variable importance in projection (VIP) values were selected for ANN modeling. The usefulness of the generated models was confirmed by detailed statistical validation. High agreement between experimental and predicted values indicated that the obtained models are of good quality and high predictive ability. Global sensitivity analysis (GSA) confirmed the importance of each molecular descriptor used as an input variable. The high-quality networks indicate a strong non-linear relationship between chromatographic lipophilicity and the in silico molecular descriptors used. By applying the selected molecular descriptors and the generated ANNs, a good prediction of the chromatographic lipophilicity of the studied steroid derivatives can be obtained. This article is based upon work from COST Actions CM1306 and CA15222, supported by COST (European Cooperation in Science and Technology).
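Variable importance in projection is a standard PLS diagnostic; a sketch of the usual VIP definition (with synthetic weights and per-component explained Y-variance, not the study's descriptors), whose scores satisfy mean(VIP²) = 1 by construction, so descriptors with VIP > 1 are above-average contributors:

```python
import numpy as np

def vip_scores(W, ssy):
    """VIP_j = sqrt(p * sum_a ssy_a * (w_ja / ||w_a||)^2 / sum_a ssy_a)."""
    p, _ = W.shape
    Wn = W / np.linalg.norm(W, axis=0)         # unit-norm weights per component
    return np.sqrt(p * (Wn**2 @ ssy) / ssy.sum())

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))                    # 8 descriptors, 3 PLS components
ssy = np.array([5.0, 2.0, 1.0])                # explained Y-variance per component
vip = vip_scores(W, ssy)
print(np.round(vip, 3))                        # descriptors with VIP > 1 are kept
```

The "satisfactory VIP values" criterion mentioned above is commonly this VIP > 1 cutoff, though the abstract does not state the exact threshold used.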

Keywords: artificial neural networks, chemometrics, global sensitivity analysis, liquid chromatography, steroids

Procedia PDF Downloads 345
1002 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques

Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri

Abstract:

Digitization of urban features is a tedious and time-consuming process when done manually. In addition, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper attempts to classify urban objects in satellite images using object-oriented classification techniques, in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. The building layer obtained from the object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology, along with logical reasoning and mathematical considerations. The results clearly depict the ability of remote sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features in an urban area, like high-rise buildings and multi-dwelling units. The object-oriented technique has proven effective, yielding an overall efficiency of 80 percent in the classification of high-rise buildings.
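One common way shadow extraction supports high-rise identification is by estimating building height from the shadow length and the sun's elevation at image acquisition time; a minimal sketch, assuming flat terrain and a shadow fully visible in the image:

```python
import math

def height_from_shadow(shadow_len_m, sun_elev_deg):
    """Building height from shadow length, assuming flat terrain."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

# A 20 m shadow with the sun at 55 degrees elevation (illustrative values):
print(round(height_from_shadow(20.0, 55.0), 2))
```

Buildings whose estimated height crosses a chosen threshold can then be flagged as high-rise candidates for verification.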

Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology

Procedia PDF Downloads 155
1001 Sustainable Practices through Organizational Internal Factors among South African Construction Firms

Authors: Oluremi I. Bamgbade, Oluwayomi Babatunde

Abstract:

Governments and nonprofits have supported sustainability as a goal of business, especially in the construction industry, because of its considerable impacts on the environment, the economy, and society. However, measuring the degree to which an organisation is sustainable or pursuing sustainable growth can be difficult, given the clear sustainability strategy required to demonstrate commitment to the goal and to competitive advantage. This research investigated the influence of organisational culture and organisational structure in achieving sustainable construction among South African construction firms. A total of 132 consultants from the nine provinces of South Africa participated in the survey. The data collected were initially screened using SPSS (version 21), while the Partial Least Squares Structural Equation Modeling (PLS-SEM) algorithm and bootstrap techniques were employed to test the hypothesised paths. The empirical evidence supported the hypothesised direct effects of organisational culture and organisational structure on sustainable construction. Similarly, the hypothesised relationship between organisational culture and organisational structure was supported. The construction industry can therefore achieve a considerable level of construction sustainability by establishing suitable cultures and structures within construction organisations. Drawing upon organisational control theory, these findings support the view that these organisational internal factors have a strong contingent effect on sustainability adoption in construction project execution. The paper makes theoretical, practical, and methodological contributions within the domain of sustainable construction, especially in the context of South Africa. Some limitations of the study are indicated, suggesting opportunities for future research.
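In PLS-SEM, hypothesised paths are typically tested by bootstrapping: resample respondents with replacement, refit, and check whether the 95% percentile interval of the path coefficient excludes zero. A minimal sketch using a simple standardized coefficient (a correlation stands in for the PLS path estimate) on synthetic data — the variable names are illustrative, not the study's constructs:

```python
import numpy as np

def bootstrap_path(x, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for a standardized path coefficient (here a
    simple correlation stands in for a PLS-SEM path estimate)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample respondents with replacement
        coefs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(coefs, [2.5, 97.5])
    return np.corrcoef(x, y)[0, 1], (lo, hi)

rng = np.random.default_rng(1)
culture = rng.normal(size=200)                      # synthetic construct scores
sustain = 0.6 * culture + 0.8 * rng.normal(size=200)
est, (lo, hi) = bootstrap_path(culture, sustain)
print(round(est, 3), (round(lo, 3), round(hi, 3)))  # CI excluding 0 -> path supported
```

A confidence interval excluding zero is the bootstrap criterion by which the hypothesised direct effects above would be declared supported.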

Keywords: organisational culture, organisational structure, South African construction firms, sustainable construction

Procedia PDF Downloads 288
1000 A Study on Shavadoon Underground Living Space in Dezful and Shooshtar Cities, Southwest of Iran: As a Sample of Sustainable Vernacular Architecture

Authors: Haniyeh Okhovat, Mahmood Hosseini, Omid Kaveh Ahangari, Mona Zaryoun

Abstract:

Shavadoon is a type of underground living space formerly used in urban residences of the Dezful and Shooshtar cities in southwestern Iran. In spite of their high efficiency in creating cool spaces for the hot summers of that area, Shavadoons were abandoned, like many other components of vernacular architecture, as a result of the modernism movement. However, Shavadoons were used by local people as shelters during the 8-year Iran-Iraq war, and although several bombardments occurred during those years, no damage was reported in those two cities. On this basis, and regarding the high seismicity of Iran, the use of Shavadoons as post-disaster shelters is a worthwhile subject of research. This paper presents the results of a thorough study of these spaces and their seismic behavior. First, the architectural aspects of Shavadoons and their construction technique are presented. Then, the results of the seismic evaluation of a sample Shavadoon, conducted through a series of time history analyses using Plaxis software and a set of selected earthquakes, are briefly explained. These results show that Shavadoons have good stability against seismic excitations. This stability is mainly due to the high strength of the conglomerate materials inside which the Shavadoons have been excavated. On this basis, and considering the other merits of this component of vernacular architecture in southwestern Iran, it is recommended that the revival of these components be seriously reconsidered by both architects and civil engineers.

Keywords: Shavadoon, high seismicity of Iran, conglomerate, modeling in Plaxis, vernacular sustainable architecture

Procedia PDF Downloads 304
999 Interpretation and Prediction of Geotechnical Soil Parameters Using Ensemble Machine Learning

Authors: Goudjil Kamel, Boukhatem Ghania, Jlailia Djihene

Abstract:

This paper describes the development of a desktop application designed to calculate soil bearing capacity and predict limit pressure. Drawing from an extensive review of existing methodologies, the study examines the various approaches employed in soil bearing capacity calculations, elucidating their theoretical foundations and practical applications. The study also explores the growing intersection of artificial intelligence (AI) and geotechnical engineering, underscoring the potential of AI-driven solutions to enhance predictive accuracy and efficiency. Central to the research is the use of machine learning techniques, including artificial neural networks (ANNs), XGBoost, and Random Forest, for predictive modeling. Through comprehensive experimentation and rigorous analysis, the efficacy and performance of each method are evaluated, with XGBoost emerging as the best algorithm, showing superior predictive capability compared to its counterparts. The study culminates in a nuanced understanding of the dynamics at play in geotechnical analysis, offering valuable insights into optimizing soil bearing capacity calculations and limit pressure predictions. By harnessing advanced computational techniques and AI-driven algorithms, the paper points toward enhanced precision and reliability in civil engineering projects.
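XGBoost and Random Forest are both tree ensembles; the boosting idea that distinguishes XGBoost — sequentially fitting trees to the current residuals — can be shown with a minimal stump-based gradient booster for squared loss (synthetic one-feature "limit pressure" data; real XGBoost adds regularization, second-order gradients, and much more):

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split stump minimizing squared error on residuals r."""
    best = (np.inf, 0.0, 0.0, 0.0)
    for s in np.unique(x)[:-1]:
        left, right = r[x <= s], r[x > s]
        err = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if err < best[0]:
            best = (err, s, left.mean(), right.mean())
    return best[1:]                            # split point, left value, right value

def boost(x, y, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: each stump fits current residuals."""
    pred = np.full_like(y, y.mean())
    for _ in range(rounds):
        s, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= s, lv, rv)
    return pred

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                    # synthetic geotechnical feature
y = 0.3 * x**2 + rng.normal(0, 0.5, 200)       # synthetic "limit pressure" target
mse = float(((y - boost(x, y))**2).mean())
print(round(mse, 3))
```

Random Forest instead averages many independently grown trees; the sequential residual fitting shown here is what often gives boosting its edge on tabular regression tasks like this one.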

Keywords: limit pressure of soil, xgboost, random forest, bearing capacity

Procedia PDF Downloads 22
998 Air Dispersion Model for Prediction Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere

Authors: Moustafa Osman Mohammed

Abstract:

This paper explores the formation of HCl aerosol in atmospheric boundary layers and encourages the uptake of environmental modeling systems (EMSs) for the practical evaluation of gaseous emissions (“framework measures”) from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions at ecological points beyond landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both the measurement data and the mathematical results, regarding the parameters that influence model variable inputs. The paper simplifies the parameters of aerosol processes based on more complex aerosol process computations. The simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation, with exposure effects according to HCl concentrations as the starting point of the risk assessment. The discussion sets out distinct aspects of sustainability, reflecting inputs, outputs, and modes of impact on the environment. The models thereby incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. The latter environmental obligations ultimately suggest either a recommendation or a decision on what legislation should be enacted for landfill gas (LFG) mitigation measures.
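The Gaussian rural dispersion model mentioned above evaluates, for a continuous point source, a ground-reflecting plume concentration at a receptor; a sketch with illustrative inputs (in practice the sigmas come from stability-class dispersion curves at a given downwind distance, not from fixed constants):

```python
import math

def gaussian_plume(Q, u, sigma_y, sigma_z, y, z, H):
    """Ground-reflecting Gaussian plume concentration [g/m^3]: source strength Q
    [g/s], wind speed u [m/s], dispersion sigmas [m], receptor offsets y, z [m],
    effective stack height H [m]."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

c = gaussian_plume(Q=10.0, u=3.0, sigma_y=30.0, sigma_z=15.0, y=0.0, z=0.0, H=20.0)
print(f"{c:.3e} g/m^3")
```

Ground-level centerline concentrations like this one are the usual input to the exposure and risk-assessment step described above.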

Keywords: air pollution, landfill emission, environmental management, monitoring/methods and impact assessment

Procedia PDF Downloads 324
997 Focus-Latent Dirichlet Allocation for Aspect-Level Opinion Mining

Authors: Mohsen Farhadloo, Majid Farhadloo

Abstract:

Aspect-level opinion mining, which aims at discovering aspects (aspect identification) and their corresponding ratings (sentiment identification) from customer reviews, has increasingly attracted the attention of researchers and practitioners, as it provides valuable insights about products and services from the customer's point of view. Instead of addressing aspect identification and sentiment identification in two separate steps, it is possible to identify both simultaneously. In recent years, many graphical models based on Latent Dirichlet Allocation (LDA) have been proposed to solve both aspect and sentiment identification in a single step. Although LDA models have been effective tools for the statistical analysis of document collections, they also have shortcomings in addressing some unique characteristics of opinion mining. Our goal in this paper is to address one of the limitations of topic models to date: they fail to directly model the associations among topics. Indeed, in many text corpora it is natural to expect that some subsets of the latent topics have higher probabilities. We propose a probabilistic graphical model called focus-LDA to better capture the associations among topics when applied to aspect-level opinion mining. Our experiments on real-life data sets demonstrate the improved effectiveness of the focus-LDA model in terms of the accuracy of the predictive distributions over held-out documents. Furthermore, we demonstrate qualitatively that the focus-LDA topic model provides a natural way of visualizing and exploring unstructured collections of textual data.

Keywords: aspect-level opinion mining, document modeling, Latent Dirichlet Allocation, LDA, sentiment analysis

Procedia PDF Downloads 94
996 Human Immune Response to Surgery: The Surrogate Prediction of Postoperative Outcomes

Authors: Husham Bayazed

Abstract:

Immune responses following surgical trauma play a pivotal role in determining postoperative outcomes, from healing and recovery to postoperative complications. Postoperative complications, including infections and protracted recovery, occur in a significant fraction of the roughly 300 million surgeries performed annually worldwide. Complications cause personal suffering along with a significant economic burden on the healthcare system of any community. The accurate prediction of postoperative complications and patient-targeted interventions for their prevention remain major clinical challenges. Recent Findings: Recent studies focus on the immune dysregulation mechanisms that occur in response to surgical trauma as a key determinant of postoperative complications. Earlier studies concentrated mainly on the detection of inflammatory plasma markers, which provide important clues regarding pathogenesis. More recently, single-cell technologies, such as mass cytometry and single-cell RNA sequencing, have markedly enhanced our ability to understand the immunological basis of postoperative complications of surgical trauma and to identify their prognostic biological signatures. Summary: The advent of proteomic technologies has significantly advanced our ability to predict the risk of postoperative complications. Multiomic modeling of patients' immune states holds promise for the discovery of preoperative predictive biomarkers, providing patients and surgeons with information to improve surgical outcomes. However, more studies are required to accurately predict the risk of postoperative complications in individual patients.

Keywords: immune dysregulation, postoperative complications, surgical trauma, flow cytometry

Procedia PDF Downloads 86
995 Cryptographic Resource Allocation Algorithm Based on Deep Reinforcement Learning

Authors: Xu Jie

Abstract:

As a key network security mechanism, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workload. Their complexity and dynamics also make it difficult for traditional static security policies to keep up with an ever-changing cyber threat environment, and traditional resource scheduling algorithms are inadequate for complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective job-flow scheduling problem and applying a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that the algorithm has significant advantages in scheduling path length, system delay, and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
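The reinforcement learning ingredient can be sketched with a tabular Q-learning loop on a toy two-server job-placement problem. The state (load difference), reward (negative imbalance), and hyperparameters are illustrative only, far simpler than the multi-agent, multi-objective formulation the abstract describes.

```python
import random

def train(episodes=2000, n_jobs=10, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning for placing unit jobs on two servers.
    State: load difference (server0 - server1); action: which server
    receives the next job; reward: negative resulting imbalance."""
    q = {}  # state -> [Q(s, server0), Q(s, server1)]
    rng = random.Random(0)
    for _ in range(episodes):
        diff = 0
        for _ in range(n_jobs):
            s = diff
            q.setdefault(s, [0.0, 0.0])
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else q[s].index(max(q[s]))
            diff += 1 if a == 0 else -1
            r = -abs(diff)
            q.setdefault(diff, [0.0, 0.0])
            # one-step temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[diff]) - q[s][a])
    return q

def greedy_action(q, diff):
    """Learned policy: send the job to the less-loaded server."""
    return q[diff].index(max(q[diff]))
```

After training, the greedy policy balances the load, which is the same mechanism, scaled up, that lets the proposed algorithm adapt allocation strategies in real time.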

Keywords: cloud computing, cryptography on-demand service, reinforcement learning, workflow scheduling

Procedia PDF Downloads 15
994 Working Memory Growth from Kindergarten to First Grade: Considering Impulsivity, Parental Discipline Methods and Socioeconomic Status

Authors: Ayse Cobanoglu

Abstract:

Working memory can be defined as a workspace that holds and regulates active information in mind. This study investigates individual changes in children's working memory from kindergarten to first grade. The main purpose of the study is to examine whether parental discipline methods and child impulsive/overactive behaviors affect children's initial working memory status and growth rate, controlling for gender, minority status, and socioeconomic status (SES). A linear growth curve model was fitted to the first four waves of the Early Childhood Longitudinal Study-Kindergarten Cohort of 2011 (ECLS-K:2011) to analyze the individual growth of children's working memory longitudinally (N=3915). Results revealed significant variation among students' initial status in the kindergarten fall semester as well as in the growth rate during the first two years of schooling. While minority status, SES, and children's overactive/impulsive behaviors influenced children's initial status, only SES and minority status were significantly associated with the growth rate of working memory. Parental discipline methods, such as giving a warning and ignoring the child's negative behavior, were also negatively associated with initial working memory scores. Examining the growth rate, students with lower SES and minority students showed a faster growth pattern during the first two years of schooling. However, the findings on parental disciplinary methods and working memory growth rates were mixed. It can be concluded that schooling helps low-SES minority students develop their working memory.
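A linear growth curve model estimates an initial status (intercept) and a growth rate (slope) for each child across waves. A minimal sketch of that idea, fitting each child by ordinary least squares and averaging to mimic the fixed effects, with made-up wave scores rather than ECLS-K:2011 data:

```python
def ols_line(times, scores):
    """Least-squares intercept and slope for one child's repeated measures."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(scores) / n
    slope = (sum((t - mt) * (s - ms) for t, s in zip(times, scores))
             / sum((t - mt) ** 2 for t in times))
    return ms - slope * mt, slope  # (initial status, growth rate)

def mean_growth(children):
    """children: list of (wave_times, scores) pairs. Returns the average
    intercept and slope, analogous to the fixed effects of a two-level
    growth model (which would additionally model the variance around them)."""
    fits = [ols_line(t, s) for t, s in children]
    n = len(fits)
    return (sum(i for i, _ in fits) / n, sum(b for _, b in fits) / n)
```

A real analysis would use a multilevel or SEM growth model so that predictors like SES and discipline methods can explain the variance in both intercepts and slopes.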

Keywords: growth curve modeling, impulsive/overactive behaviors, parenting, working memory

Procedia PDF Downloads 135
993 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa

Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka

Abstract:

Reliable future river flow information is essential for the planning and management of any river system. For a data-scarce river system with only river flow records, like the Waterval River, univariate time series models are appropriate for river flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied to forecast Waterval River flow using GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced time series, different SARIMA models were identified, their parameters were estimated, and diagnostic checking of model forecasts was performed using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike information criterion (AIC) and Hannan-Quinn criterion (HQC), SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. This model can therefore be used to generate future flow information for water resources development and management in the Waterval River system. The SARIMA approach can also be used to forecast other similar univariate time series with seasonal characteristics.
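The seasonal differencing and autoregressive fitting at the heart of a SARIMA model can be sketched in stripped-down form: lag-12 differencing to remove the annual cycle, an AR(1) term on the differenced series, and inversion of the differencing to forecast. The synthetic monthly series below is illustrative, not Waterval River data, and a real SARIMA fit would estimate full AR/MA polynomials by maximum likelihood.

```python
def seasonal_difference(x, s=12):
    """Remove the seasonal cycle: d[i] = x[i] - x[i - s]."""
    return [x[i] - x[i - s] for i in range(s, len(x))]

def fit_ar1(d):
    """Least-squares AR(1) coefficient on the differenced series."""
    num = sum(d[i] * d[i - 1] for i in range(1, len(d)))
    den = sum(v * v for v in d[:-1])
    return num / den

def forecast_next(x, phi, s=12):
    """One-step forecast: AR(1) on the latest seasonal difference,
    then add back the value one season earlier (inverting the differencing)."""
    d_last = x[-1] - x[-1 - s]
    return x[-s] + phi * d_last
```

On a series with a stable seasonal pattern plus a linear trend, the seasonal difference is constant, phi is estimated near 1, and the forecast recovers the next value exactly.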

Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise

Procedia PDF Downloads 205
992 Time Series Simulation by Conditional Generative Adversarial Net

Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto

Abstract:

Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN is able to learn different types of normal and heavy-tailed distributions, as well as the dependence structures of different time series, and to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GANs, viewing the neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied to economic time series modeling and forecasting. In this regard, we include an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.

Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series

Procedia PDF Downloads 143
991 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers’ ever-increasing expectations for fast services provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment and widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four-year related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating/automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment and widely spread customer base is challenging, yet achievable without compromising the service quality and customers’ added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: automation, customer value, heterogenic, integration, IT services, optimization, processes

Procedia PDF Downloads 107
990 Bluetooth Communication Protocol Study for Multi-Sensor Applications

Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li

Abstract:

Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE's energy efficiency, interoperability with smartphones, and Over-the-Air (OTA) update capabilities are essential features for ultralow-power devices, which are usually designed under size and cost constraints. Most current research on the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with results presented as mathematical models and software simulations. Such modeling and simulation are important for understanding the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin cell battery-powered BLE data acquisition device, with a 4-in-1 sensor and one accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely, then the power draw is analyzed as each sensor is individually turned on and transmitting data, and finally the power consumption is evaluated with both sensors on and broadcasting their data to a mobile phone. The results presented are real-time measurements of the electrical current consumption of the BLE device, with the measured current levels matched to BLE behavior and sensor activity.
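The link between measured current and battery life for a duty-cycled BLE device follows from a time-weighted average over the advertising interval. The figures below (sleep current, event current, event duration, coin-cell capacity) are illustrative assumptions, not the paper's measurements.

```python
def average_current_ua(i_sleep_ua, i_event_ma, event_ms, interval_ms):
    """Time-weighted average current (in microamps) for a device that
    draws i_event_ma during a brief radio event and i_sleep_ua otherwise."""
    i_event_ua = i_event_ma * 1000.0
    return (i_event_ua * event_ms
            + i_sleep_ua * (interval_ms - event_ms)) / interval_ms

def battery_life_days(capacity_mah, avg_ua):
    """Idealized battery life, ignoring self-discharge and capacity derating."""
    return capacity_mah * 1000.0 / avg_ua / 24.0
```

For example, a hypothetical 5 mA radio event lasting 2 ms every 1000 ms on top of a 1 uA sleep floor averages to roughly 11 uA, which is why short advertising events dominate the energy budget even at very low duty cycles.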

Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node

Procedia PDF Downloads 91
989 The Advancement of Environmental Impact Assessment for 5th Transmission Natural Gas Pipeline Project in Thailand

Authors: Penrug Pengsombut, Worawut Hamarn, Teerawuth Suwannasri, Kittiphong Songrukkiat, Kanatip Ratanachoo

Abstract:

PTT Public Company Limited (PTT) has played an important role in strengthening the national energy security of the Kingdom of Thailand by transporting natural gas to customers in the power, industrial, and commercial sectors since 1981. PTT constructs and operates a natural gas pipeline network of over 4,500 km, both onshore and offshore, laid through different area classifications, i.e., marine, forest, agricultural, rural, urban, and city areas. During the project development phase, an Environmental Impact Assessment (EIA) is conducted and submitted to the Office of Natural Resources and Environmental Policy and Planning (ONEP) for approval before project construction commences. Knowledge and experience gained from EIAs in past projects have been used to advance the EIA study process for the new 5th Transmission Natural Gas Pipeline Project (5TP), approximately 415 kilometers in length. The preferred pipeline route is selected and justified using SMARTi map, an advanced digital one-map platform consisting of multiple layers of geographic and environmental information. Sensitive area impact focus (SAIF) is a practicable impact assessment methodology appropriate for long-distance infrastructure projects such as 5TP. Environmental modeling simulation is incorporated into the SAIF methodology to quantify impacts in all sensitive areas, whereas other areas along the pipeline right-of-way are assessed using representative impacts. The resulting reduction in time and cost benefits the project through an earlier start.

Keywords: environmental impact assessment, EIA, natural gas pipeline, sensitive area impact focus, SAIF

Procedia PDF Downloads 408
988 Talent Management, Employee Competency, and Organizational Performance

Authors: Sunyoung Park

Abstract:

Context: Talent management is a strategic approach that has received considerable attention in recent years to improve employee competency and organizational performance in many organizations. The implementation of talent management involves identifying objectives and positions within the organization, developing a pool of high-potential employees, and establishing appropriate HR functions to promote high employee and organizational performance. This study aims to investigate the relationship between talent management, HR functions, employee competency, and organizational performance in the South Korean context. Research Aim: The main objective of this study is to investigate the structural relationships among talent management, human resources (HR) functions, employee competency, and organizational performance. Methodology: To achieve the research aim, this study used a quantitative research method. Specifically, a total of 1,478 responses were analyzed using structural equation modeling based on data obtained from the Human Capital Corporate Panel (HCCP) survey in South Korea. Findings: The study revealed that talent management has a positive influence on HR functions and employee competency. Additionally, HR functions directly affect employee competency and organizational performance. Employee competency was found to be related to organizational performance. Moreover, talent management and HR functions indirectly affect organizational performance through employee competency. Theoretical Importance: This study provides empirical evidence of the relationship between talent management, HR functions, employee competency, and organizational performance in the South Korean context. The findings suggest that organizations should focus on developing appropriate talent management and HR functions to improve employee competency, which, in turn, will lead to better organizational performance. 
Moreover, the study contributes to the existing literature by emphasizing the importance of the relationship between talent management and HR functions in improving organizational performance.
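The indirect effects reported from a structural equation model reduce, in a simple mediation chain, to products of path coefficients. A sketch with hypothetical standardized paths (X = talent management, M = employee competency, Y = organizational performance; the coefficient values are invented, not the HCCP estimates):

```python
def mediation_effects(a, b, c_prime):
    """Effects in a simple mediation model X -> M -> Y with a direct path X -> Y.
    a: X -> M path, b: M -> Y path, c_prime: direct X -> Y path
    (all standardized coefficients)."""
    indirect = a * b          # effect of X on Y transmitted through M
    total = c_prime + indirect
    return {"indirect": indirect, "direct": c_prime, "total": total}
```

An SEM package additionally estimates these paths from data and provides standard errors (e.g., bootstrap intervals) for the indirect effect; the decomposition itself is just this product-of-paths arithmetic.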

Keywords: employee competency, HR functions, organizational performance, talent management

Procedia PDF Downloads 96
987 A Comprehensive Study of Spread Models of Wildland Fires

Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. By using a comparison approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. 
Fire spread models provide insights into potential fire behavior, facilitating authorities to make informed decisions about evacuation activities, allocating resources for fire-fighting efforts, and planning for preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and promotes our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, cellular automata model, and others. The key characteristics that these models consider include weather (includes factors such as wind speed and direction), topography (includes factors like landscape elevation), and fuel availability (includes factors like types of vegetation) among other factors. The models discussed are physics-based, data-driven, or hybrid models, also utilizing ML techniques like attention-based neural networks to enhance the performance of the model. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
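Of the model families surveyed above, the cellular automata approach is the easiest to sketch: cells are fuel, burning, or burnt, and fire jumps to neighboring cells with some probability each step. The constant ignition probability below is a hypothetical simplification; real models modulate it with wind, slope, and fuel type.

```python
import random

FUEL, BURNING, BURNT = 1, 2, 3

def step(grid, p_spread, rng):
    """One sweep of a probabilistic cellular automaton: each burning cell
    ignites each 4-connected fuel neighbor with probability p_spread,
    then burns out."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and grid[rr][cc] == FUEL
                            and rng.random() < p_spread):
                        new[rr][cc] = BURNING
                new[r][c] = BURNT
    return new
```

Iterating `step` from an ignition point produces a spreading fire front; making `p_spread` a function of local wind and slope is how such automata approximate the physics-based models discussed above.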

Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling

Procedia PDF Downloads 81
986 Optimizing Groundwater Pumping for a Complex Groundwater/Surface Water System

Authors: Emery A. Coppola Jr., Suna Cinar, Ferenc Szidarovszky

Abstract:

Over-pumping of groundwater resources is a serious problem worldwide. In addition to depleting this valuable resource, hydraulically connected and ecologically sensitive resources like wetlands and surface water bodies are often impacted and even destroyed by over-pumping. Effectively managing groundwater in a way that satisfies human demand while preserving natural resources is a daunting challenge that will only worsen with growing human populations and climate change. As presented in this paper, a numerical flow model developed for a hypothetical but realistic groundwater/surface water system was combined with formal optimization. Response coefficients were used in an optimization management model to maximize groundwater pumping in a complex, multi-layered aquifer system while protecting against groundwater overdraft, streamflow depletion, and wetland impacts. Pumping optimization was performed for different constraint sets that reflect different resource protection preferences, yielding significantly different optimal pumping solutions. A sensitivity analysis of the optimal solutions was performed on select response coefficients to identify differences between wet and dry periods. Stochastic optimization was also performed, in which the uncertainty associated with irrigation demand under changing weather conditions is accounted for. One of the strengths of this optimization approach is that it can efficiently and accurately identify superior management strategies that minimize risk and adverse environmental impacts associated with groundwater pumping under different hydrologic conditions.
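The response-coefficient formulation can be illustrated with a brute-force miniature: two wells, one control point, and linear impacts. The coefficients, limit, rate bounds, and step size below are invented for illustration; the actual study derives response coefficients from the numerical flow model and solves the resulting program with formal optimization rather than enumeration.

```python
def optimize_pumping(response, limits, max_rates, step=10.0):
    """Maximize total pumping q1 + q2 over a grid of candidate rates.
    response[k][w]: impact (e.g., drawdown or streamflow depletion) at
    control point k per unit pumping at well w -- the response coefficients.
    limits[k]: maximum allowed impact at control point k."""
    best = (0.0, (0.0, 0.0))
    q1 = 0.0
    while q1 <= max_rates[0]:
        q2 = 0.0
        while q2 <= max_rates[1]:
            feasible = all(response[k][0] * q1 + response[k][1] * q2 <= limits[k]
                           for k in range(len(limits)))
            if feasible and q1 + q2 > best[0]:
                best = (q1 + q2, (q1, q2))
            q2 += step
        q1 += step
    return best  # (total pumping, (q1, q2))
```

Because the impacts are linear in the pumping rates, the real problem is a linear program; swapping constraint sets (tighter wetland limits, looser streamflow limits, and so on) changes which well combination is optimal, which is exactly the behavior the study reports.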

Keywords: numerical groundwater flow modeling, water management optimization, groundwater overdraft, streamflow depletion

Procedia PDF Downloads 233
985 Processing and Modeling of High-Resolution Geophysical Data for Archaeological Prospection, Nuri Area, Northern Sudan

Authors: M. Ibrahim Ali, M. El Dawi, M. A. Mohamed Ali

Abstract:

In this study, a magnetic gradient survey and geoelectrical ground methods were used together to explore archaeological features in the Nuri pyramids area. The magnetic survey was carried out with a Geoscan Fluxgate Gradiometer (FM36). The study area was divided into squares (grids) of exactly 20 x 20 meters, which were merged at the end of the study into one master grid for each region. Within each grid, readings were taken at a sample interval of 0.25 x 0.50 meter in order to resolve archaeological features in more detail, including some small bipolar anomalies caused by buildings built from fired bricks. This resolution is important for detecting features such as rooms. The master grid yields an integrated map for easy presentation and allows all required processing operations in Geoscan Geoplot software. Readings were collected along parallel traverses to obtain high-quality data. The study area is very rich in old buildings, varying from small to very large; owing to the sand dunes and loose soil, most of these buildings are not visible from the surface. Because of the dry sandy soil, there was no electrical contact between the ground surface and the electrodes. We tried to obtain electrical readings by adding salty water to the soil but, unfortunately, failed to confirm the magnetic readings with electrical readings as originally planned.

Keywords: archaeological features, independent grids, magnetic gradient, Nuri pyramid

Procedia PDF Downloads 482
984 The Mediating Role of Psychological Factors in the Relationships Between Youth Problematic Internet and Subjective Well-Being

Authors: Dorit Olenik-Shemesh, Tali Heiman

Abstract:

The rapid increase in massive use of the internet in recent years has led to an increase in the prevalence of a phenomenon called 'problematic internet use' (PIU), an emerging and growing health problem, especially during adolescence, that poses a challenge for mental health research and practitioners. Problematic internet use is defined as excessive overuse of the internet, including an inability to control time spent online, cognitive preoccupation with the internet, and continued use in spite of adverse consequences, which may lead to psychological, social, and academic difficulties in one's life and daily functioning. However, little is known about the nature of the nexus between PIU and subjective well-being among adolescents. The main purpose of the current study was to explore in depth the network of connections between PIU, sense of well-being, and four personal-emotional factors (resilience, self-control, depressive mood, and loneliness) that may mediate these relationships. A total sample of 433 adolescents, 214 (49.4%) girls and 219 (50.6%) boys between the ages of 12-17 (mean = 14.9, SD = 2.16), completed self-report questionnaires relating to the study variables. In line with the hypotheses, structural equation modeling (SEM) revealed the following main results: high levels of PIU predicted low levels of well-being among adolescents. In addition, low resilience together with high depressive mood, low self-control together with high depressive mood, and low resilience together with high loneliness mediated the relationships between PIU and well-being. In general, girls were found to be higher than boys in both PIU and resilience. The results have specific implications for developing intervention programs for adolescents in the context of PIU, aiming at a more balanced, adjusted use of the internet and at preventing declines in well-being.

Keywords: problematic internet use, well-being, adolescents, SEM model

Procedia PDF Downloads 168