Search results for: EoS models
4909 The Association between C-Reactive Protein and Hypertension with Different US Participants Ethnicity-Findings from National Health and Nutrition Examination Survey 1999-2010
Authors: Ghada Abo-Zaid
Abstract:
The main objective of this study was to examine the association between elevated CRP levels and the incidence of hypertension before and after adjusting for age, BMI, gender, SES, smoking, diabetes, LDL cholesterol and HDL cholesterol, and to determine whether the association differed by race. Method: Cross-sectional data for participants aged 17 to 74 years included in the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2010 were analysed. CRP level was classified into three categories (> 3 mg/L, between 1 mg/L and 3 mg/L, and < 1 mg/L). Blood pressure was categorized using the JNC 7 algorithm. Hypertension was defined as either systolic blood pressure (SBP) of 140 mmHg or more or diastolic blood pressure (DBP) of 90 mmHg or greater, or a self-reported prior diagnosis by a physician. Pre-hypertension was defined as SBP between 120 and 139 mmHg or DBP between 80 and 89 mmHg. A multinomial regression model was used to measure the association between CRP level and hypertension. Results: In univariable models, CRP concentrations > 3 mg/L were associated with a 73% greater risk of incident hypertension compared with CRP concentrations < 1 mg/L (hypertension: odds ratio [OR] = 1.73; 95% confidence interval [CI], 1.50-1.99). Ethnic comparisons showed that Mexican Americans had the highest risk of incident hypertension (OR = 2.39; 95% CI, 2.21-2.58). This risk was statistically non-significant, however, either after controlling for other variables (hypertension: OR = 0.75; 95% CI, 0.52-1.08) or when categorized by race (Mexican American: OR = 1.58; 95% CI, 0.58-4.26; other Hispanic: OR = 0.87; 95% CI, 0.19-4.42; non-Hispanic White: OR = 0.90; 95% CI, 0.50-1.59; non-Hispanic Black: OR = 0.44; 95% CI, 0.22-0.87). The same results were found for pre-hypertension, and non-Hispanic Black participants showed the highest significant risk for pre-hypertension (OR = 1.60; 95% CI, 1.26-2.03). When CRP concentrations were between 1.0 and 3.0 mg/L, in unadjusted models prehypertension was associated with a higher likelihood of elevated CRP (OR = 1.37; 95% CI, 1.15-1.62). The same relationship was maintained in non-Hispanic White, non-Hispanic Black, and other race groups (non-Hispanic White: OR = 1.24; 95% CI, 1.03-1.48; non-Hispanic Black: OR = 1.60; 95% CI, 1.27-2.03; other race: OR = 2.50; 95% CI, 1.32-4.74), while the association was not significant for Mexican American and other Hispanic participants. In the adjusted model, the relationship between CRP and prehypertension was no longer present. Likewise, hypertension was not independently associated with elevated CRP, and the results were unchanged after grouping by race or adjusting for the confounding variables. The same results were obtained when SBP or DBP was analysed as a continuous measure. Conclusions: This study confirmed the existence of an association between hypertension, prehypertension and elevated CRP levels; however, this association was no longer present after adjusting for other variables. Ethnic group differences were statistically significant in the univariable models but disappeared after controlling for other variables.
Keywords: CRP, hypertension, ethnicity, NHANES, blood pressure
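The multinomial regression described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the variable names (crp_category, bp_category, and the covariate columns), the category labels, and the input file are assumptions, and statsmodels' MNLogit is used here simply as one common implementation of multinomial logistic regression.

```python
# Minimal sketch of a multinomial logistic regression of blood-pressure category
# on CRP category and covariates (hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per NHANES participant with these columns:
# bp_category (0 = normal, 1 = pre-hypertension, 2 = hypertension),
# crp_category ('lt1' reference, '1to3', 'gt3'), age, bmi, gender, ses, smoking, diabetes, ldl, hdl
df = pd.read_csv("nhanes_1999_2010.csv")  # hypothetical file

# Unadjusted (univariable) model
m0 = smf.mnlogit("bp_category ~ C(crp_category, Treatment(reference='lt1'))", data=df).fit()

# Adjusted model with the covariates listed in the abstract
m1 = smf.mnlogit(
    "bp_category ~ C(crp_category, Treatment(reference='lt1')) + age + bmi + "
    "C(gender) + ses + C(smoking) + C(diabetes) + ldl + hdl",
    data=df,
).fit()

# Odds ratios and 95% confidence intervals
print(np.exp(m1.params))
print(np.exp(m1.conf_int()))
```

Stratified (race-specific) estimates would follow the same pattern, fitting the adjusted model on each ethnicity subset of the data.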
Procedia PDF Downloads 411
4908 A Comprehensive Review of Artificial Intelligence Applications in Sustainable Building
Authors: Yazan Al-Kofahi, Jamal Alqawasmi.
Abstract:
In this study, a systematic literature review (SLR) was conducted with the main goal of assessing the existing literature on how artificial intelligence (AI), machine learning (ML), and deep learning (DL) models are used in sustainable architecture applications and issues, including thermal comfort satisfaction, energy efficiency, cost prediction and many other issues. The search strategy was initiated using different databases, including Scopus, Springer and Google Scholar. The inclusion criteria were applied using two search strings related to DL, ML and sustainable architecture. The timeframe for the inclusion of papers was open, although most of the included papers were published in the previous four years. As a filtering strategy, conference papers and books were excluded from the database search results. Using these inclusion and exclusion criteria, the search was conducted, and a sample of 59 papers was selected as the final set included in the analysis. The data extraction phase consisted of extracting the needed data from these papers, which were then analyzed and correlated. The results of this SLR showed that there are many applications of ML and DL in sustainable buildings and that this topic is currently trending. Most of the papers focused their discussions on addressing environmental sustainability issues and factors using machine learning predictive models, with a particular emphasis on the use of Decision Tree algorithms. Moreover, it was found that the Random Forest regressor demonstrates strong performance across all feature selection groups for building cost prediction as a machine-learning predictive model.
Keywords: machine learning, deep learning, artificial intelligence, sustainable building
Procedia PDF Downloads 65
4907 The Development of a Comprehensive Sustainable Supply Chain Performance Measurement Theoretical Framework in the Oil Refining Sector
Authors: Dina Tamazin, Nicoleta Tipi, Sahar Validi
Abstract:
The oil refining industry plays a vital role in the world economy. Oil refining companies operate in a more complex and dynamic environment than ever before. In addition, oil refining companies and the public are becoming more conscious of crude oil scarcity and climate change. Hence, sustainability in the oil refining industry is becoming increasingly critical to the industry's long-term viability and to environmental sustainability. In particular, measuring and evaluating a company's sustainable performance supports the company in understanding its performance and its implications more objectively and in establishing sustainability development plans. Consequently, oil refining companies attempt to re-engineer their supply chains to meet sustainable goals and standards. On the other hand, a review of previous research on sustainable supply chain performance measurement in the oil refining sector reveals a lack of studies that consider the integration of sustainability into supply chain performance measurement practices in this industry. Therefore, there is a need for research that provides performance guidance, which can be used to measure sustainability and assist in setting sustainable goals for oil refining supply chains. Accordingly, this paper aims to present a comprehensive oil refining sustainable supply chain performance measurement theoretical framework. In developing this theoretical framework, the main characteristics of the oil refining industry have been identified. For this purpose, a thorough review of relevant literature on performance measurement models and sustainable supply chain performance measurement models has been conducted. The comprehensive oil refining sustainable supply chain performance measurement theoretical framework introduced in this paper aims to assist oil refining companies in measuring and evaluating their performance from a sustainability perspective to achieve sustainable operational excellence.
Keywords: oil refining industry, oil refining sustainable supply chain, performance measurement, sustainability
Procedia PDF Downloads 286
4906 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia
Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu
Abstract:
Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models based on the Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out under two settings: informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to Kaplan-Meier survival estimates for the two sex groups, females showed better survival time than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after a three-year antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screen, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than those in the classical one. Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by considering expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could be reduced through timely antiretroviral therapy with special care regarding the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models
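A minimal sketch of fitting one of the parametric survival models named above (a log-normal accelerated failure time model) is shown below. The lifelines library is used as a stand-in for the authors' software, and the patient table, column names and covariate list are hypothetical.

```python
# Sketch: classical log-normal parametric survival model for the HIV cohort.
# Column names (duration_months, death, age, sex, baseline_cd4, who_stage) are assumed.
import pandas as pd
from lifelines import LogNormalAFTFitter, WeibullAFTFitter

df = pd.read_csv("alamata_hiv_cohort.csv")  # hypothetical file

cols = ["duration_months", "death", "age", "sex", "baseline_cd4", "who_stage"]
aft = LogNormalAFTFitter()
aft.fit(df[cols], duration_col="duration_months", event_col="death")
aft.print_summary()  # coefficients with standard errors

# Compare against a Weibull fit using AIC, as in the model-selection step
weibull = WeibullAFTFitter().fit(df[cols], duration_col="duration_months", event_col="death")
print(aft.AIC_, weibull.AIC_)
```

A Bayesian counterpart with informative or noninformative priors would typically be fitted with a probabilistic programming tool such as PyMC or Stan, which is where the prior specifications discussed in the abstract would enter.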
Procedia PDF Downloads 195
4905 Non-Linear Assessment of Chromatographic Lipophilicity of Selected Steroid Derivatives
Authors: Milica Karadžić, Lidija Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Anamarija Mandić, Aleksandar Oklješa, Andrea Nikolić, Marija Sakač, Katarina Penov Gaši
Abstract:
Using a chemometric approach, the relationships between chromatographic lipophilicity and in silico molecular descriptors for twenty-nine selected steroid derivatives were studied. The chromatographic lipophilicity was predicted using the artificial neural network (ANN) method. The most important in silico molecular descriptors were selected by applying stepwise selection (SS) paired with the partial least squares (PLS) method. Molecular descriptors with satisfactory variable importance in projection (VIP) values were selected for ANN modeling. The usefulness of the generated models was confirmed by detailed statistical validation. High agreement between experimental and predicted values indicated that the obtained models have good quality and high predictive ability. Global sensitivity analysis (GSA) confirmed the importance of each molecular descriptor used as an input variable. High-quality networks indicate a strong non-linear relationship between chromatographic lipophilicity and the in silico molecular descriptors used. By applying the selected molecular descriptors and the generated ANNs, good prediction of the chromatographic lipophilicity of the studied steroid derivatives can be obtained. This article is based upon work from COST Actions (CM1306 and CA15222), supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, chemometrics, global sensitivity analysis, liquid chromatography, steroids
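The descriptor-selection and ANN-regression workflow described here can be illustrated with a short sketch. scikit-learn's PLSRegression and MLPRegressor stand in for the authors' chemometric software, the VIP formula is the commonly used one rather than the exact implementation in the paper, and the descriptor matrix X, lipophilicity vector y, and loader function are assumed.

```python
# Sketch: select descriptors via PLS-based VIP scores, then fit an ANN regressor.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# X: (29, n_descriptors) matrix of in silico descriptors, y: chromatographic lipophilicity
X, y = load_descriptors()  # hypothetical loader

pls = PLSRegression(n_components=2).fit(X, y)

def vip_scores(pls_model):
    """Variable importance in projection for each descriptor (common formulation)."""
    t, w, q = pls_model.x_scores_, pls_model.x_weights_, pls_model.y_loadings_
    p = w.shape[0]
    ssy = np.diag(t.T @ t @ q.T @ q).reshape(-1)        # explained y-variance per component
    weights = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (weights @ ssy) / ssy.sum())

selected = vip_scores(pls) > 1.0          # common "VIP > 1" rule of thumb
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
print(cross_val_score(ann, X[:, selected], y, cv=5, scoring="r2"))
```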
Procedia PDF Downloads 344
4904 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) have been adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified using real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5 and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector containing 24 elements that represent the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real ‘big’ data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased the R² correlation coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hours of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
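The architecture described above, with tensor inputs built from sensor time series, convolution and pooling layers, and a 24-element output vector of hourly PM10 predictions, might look roughly like the following Keras sketch. The input shape, layer sizes and training settings are assumptions for illustration, not the configuration reported in the study.

```python
# Sketch: a 1-D CNN mapping the last 48 hours of sensor features to the next 24 hourly PM10 values.
import tensorflow as tf
from tensorflow.keras import layers

n_timesteps, n_features = 48, 6   # assumed: PM2.5, PM10, temperature, wind + external forecasts

model = tf.keras.Sequential([
    layers.Input(shape=(n_timesteps, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(24),                # 24 hourly PM10 predictions
])
model.compile(optimizer="adam", loss="mse")  # mean square error, matching the evaluation criterion
model.summary()

# Training would use tensors of past sensor readings (X) and next-day hourly PM10 (y):
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=64)
```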
Procedia PDF Downloads 148
4903 Oxidative Stress Related Alteration of Mitochondrial Dynamics in Cellular Models
Authors: Orsolya Horvath, Laszlo Deres, Krisztian Eros, Katalin Ordog, Tamas Habon, Balazs Sumegi, Kalman Toth, Robert Halmosi
Abstract:
Introduction: Oxidative stress induces an imbalance in mitochondrial fusion and fission processes, finally leading to cell death. The two antioxidant molecules, BGP-15 and L2286, have beneficial effects on mitochondrial functions and on the cellular oxidative stress response. In this work, we studied the effects of these compounds on the processes of mitochondrial quality control. Methods: We used H9c2 cardiomyoblasts and isolated neonatal rat cardiomyocytes (NRCM) for the experiments. The concentrations of stressors and antioxidants were determined beforehand with the MTT test. We applied 1-Methyl-3-nitro-1-nitrosoguanidine (MNNG) at 125 µM, 400 µM and 800 µM concentrations for 4 and 8 hours on H9c2 cells. H₂O₂ was applied at 150 µM and 300 µM concentrations for 0.5 and 4 hours on both models. L2286 was administered in 10 µM, while BGP-15 in 50 µM doses. Cellular levels of the key proteins playing a role in mitochondrial dynamics were measured in Western blot samples. For the analysis of mitochondrial network dynamics, we applied electron microscopy and immunocytochemistry. Results: Due to MNNG treatment, the levels of the fusion proteins (OPA1, MFN2) decreased, while the level of the fission protein DRP1 elevated markedly. The levels of the fusion proteins OPA1 and MFN2 increased in the L2286- and BGP-15-treated groups. During the 8-hour treatment period, the level of DRP1 also increased in the treated cells (p < 0.05). In the H₂O₂-stressed cells, administration of L2286 increased the level of OPA1 in both the H9c2 and NRCM models. MFN2 levels in isolated neonatal rat cardiomyocytes rose considerably due to BGP-15 treatment (p < 0.05). L2286 administration decreased the DRP1 level in H9c2 cells (p < 0.05). We observed that the H₂O₂-induced mitochondrial fragmentation could be decreased by L2286 treatment. Conclusion: Our results indicated that the PARP inhibitor L2286 has a beneficial effect on mitochondrial dynamics in the oxidative stress scenario, and also in the case of directly induced DNA damage. Similar conclusions could be drawn for BGP-15 administration, which, by reducing ROS accumulation, promotes fusion processes and thus aids in preserving cellular viability. Funding: GINOP-2.3.2-15-2016-00049; GINOP-2.3.2-15-2016-00048; GINOP-2.3.3-15-2016-00025; EFOP-3.6.1-16-2016-00004; ÚNKP-17-4-I-PTE-209
Keywords: H9c2, mitochondrial dynamics, neonatal rat cardiomyocytes, oxidative stress
Procedia PDF Downloads 150
4902 Air Dispersion Model for Prediction Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere
Authors: Moustafa Osman Mohammed
Abstract:
This paper explores the formation of HCl aerosols in the atmospheric boundary layer and encourages the uptake of environmental modeling systems (EMSs) as a practical evaluation of gaseous emissions (“framework measures”) from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions at ecological points beyond landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both the measurement data and the mathematical results, regarding the parameters that influence model variable inputs. The paper has simplified the parameters of aerosol processes based on more complex aerosol process computations. The simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation, with exposure effects according to HCl concentrations as the starting point of the risk assessment. The discussion sets out distinct aspects of sustainability, reflecting inputs, outputs, and modes of impact on the environment. Thereby, the models incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. The latter environmental obligations ultimately suggest either a recommendation or a decision on what should be achieved legislatively for landfill gas (LFG) mitigation measures.
Keywords: air pollution, landfill emission, environmental management, monitoring/methods and impact assessment
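Since the abstract states that the simplified aerosol model can be implemented in a Gaussian rural dispersion model, a minimal Gaussian plume calculation is sketched below as context. The emission rate, wind speed, release height and dispersion coefficients are placeholder values, and the power-law sigma parameterization is a generic assumption rather than the one used by the author.

```python
# Sketch: ground-level concentration from a continuous point source (Gaussian plume).
import numpy as np

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """C(y, z) for emission rate q (g/s), wind speed u (m/s), effective release height h (m)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z**2))
                + np.exp(-(z + h) ** 2 / (2 * sigma_z**2)))  # ground reflection term
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Placeholder inputs: 1 g/s HCl release, 3 m/s wind, 10 m effective height,
# generic power-law dispersion coefficients for a neutral rural stability class.
x = np.linspace(100, 2000, 20)                 # downwind distance (m)
sigma_y, sigma_z = 0.08 * x**0.9, 0.06 * x**0.85
c = plume_concentration(q=1.0, u=3.0, y=0.0, z=0.0, h=10.0,
                        sigma_y=sigma_y, sigma_z=sigma_z)
print(c)  # g/m^3 along the plume centreline at ground level
```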
Procedia PDF Downloads 321
4901 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectral energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Nonlinear feature extraction by neural networks involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized by a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show the ability to improve classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization of the neural network models enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
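The spectrogram pre-processing step, extracting time-resolved frequency components with a windowed Fourier transform before classification, can be sketched as follows. scipy is used here as a stand-in, and the detector waveform, sampling rate and Hann window choice are illustrative assumptions rather than the settings used in the paper.

```python
# Sketch: windowed short-time Fourier transform of a simulated detector waveform.
import numpy as np
from scipy.signal import spectrogram

fs = 1_000_000                     # assumed sampling rate (Hz)
waveform = np.random.default_rng(0).normal(size=8192)  # stand-in for a Geant4-simulated pulse + noise

# Hann window chosen as one example of "a suitable windowing function"
f, t, sxx = spectrogram(waveform, fs=fs, window="hann", nperseg=256, noverlap=128)

# Log-scaled spectrogram used as one training example for the CNN/RNN classifier
features = np.log1p(sxx)
print(features.shape)              # (frequency bins, time frames)
```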
Procedia PDF Downloads 150
4900 Creep Analysis and Rupture Evaluation of High Temperature Materials
Authors: Yuexi Xiong, Jingwu He
Abstract:
The structural components in an energy facility, such as steam turbine machines, are operated under high stress and elevated temperature for an extended period of time, and thus creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. There are numerous creep models used for creep analysis that have both advantages and disadvantages in terms of accuracy and efficiency. Isochronous Creep Analysis is one of the simplified approaches in which a full time-dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach has been established based on the rupture-dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamentals, namely the criteria of Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions of simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria. Comparisons are made between the existing criteria and the new one using several example materials. Both strain increase and stress relaxation form a full picture of the creep behaviour of a material under high temperature over an extended period of time. It is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations. While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under load-controlled conditions, the strain rate vs. rupture time equation reflects how the rupture time behaves under strain-controlled conditions. Among the four existing failure criteria for rupture life prediction, the Stress Damage and Strain Damage Criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability Criteria provide predictions in between, which are believed to be more accurate because strain rate and strain capability are quantities better determined than stress for reflecting the creep rupture behaviour. A modified Strain Capability Criterion is proposed making use of the two sets of creep equations and is therefore considered to be more accurate than the original Strain Capability Criterion.
Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines
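As context for the rupture-dependent creep equations referenced above, the Larson-Miller parameter (LMP) relates temperature and rupture time, commonly written as LMP = T(C + log10 t_r) with T in kelvin, t_r in hours, and a material constant C often taken as about 20. The sketch below shows how a rupture time could be back-calculated from an LMP value; the numerical inputs and the constant C are illustrative assumptions, not data from the paper.

```python
# Sketch: Larson-Miller parameter and the rupture time it implies at a given temperature.
import math

def larson_miller(temperature_k, rupture_time_h, c=20.0):
    """LMP = T * (C + log10(t_r))."""
    return temperature_k * (c + math.log10(rupture_time_h))

def rupture_time(lmp, temperature_k, c=20.0):
    """Invert the LMP relation to estimate rupture time (hours)."""
    return 10 ** (lmp / temperature_k - c)

# Illustrative values only: a component at 823 K (550 C) lasting 100,000 h
lmp = larson_miller(823.0, 1.0e5, c=20.0)
print(lmp)                              # parameter value for this temperature/time pair
print(rupture_time(lmp, 873.0))         # much shorter rupture life at 873 K for the same LMP
```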
Procedia PDF Downloads 287
4899 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas
Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards
Abstract:
Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
Keywords: airborne laser scanning, digital terrain models, filtering, forested areas
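A much-simplified 1-D version of the weighted smoothing-spline idea is sketched below: points lying well above the current spline fit (likely canopy returns) receive lower weights in the next iteration, so the spline settles toward the terrain. The weighting rule, iteration count, thresholds and synthetic profile are assumptions for illustration, not the authors' algorithm.

```python
# Sketch: iteratively re-weighted smoothing spline along a single ALS profile.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 100, 400))              # along-track distance (m)
terrain = 0.05 * x + 2 * np.sin(x / 15)            # synthetic bare-earth profile
z = terrain + rng.normal(0, 0.1, x.size)
canopy = rng.random(x.size) < 0.4                  # 40% of returns hit vegetation
z[canopy] += rng.uniform(2, 20, canopy.sum())      # canopy returns sit above the terrain

weights = np.ones_like(z)
for _ in range(5):
    spline = UnivariateSpline(x, z, w=weights, s=len(x))
    residual = z - spline(x)
    # Down-weight points well above the fit; keep points at or below it.
    weights = np.where(residual > 0.3, 0.01, 1.0)

ground = np.abs(z - spline(x)) < 0.5               # points accepted as terrain
print(f"kept {ground.sum()} of {x.size} returns as ground")
```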
Procedia PDF Downloads 138
4898 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data
Authors: M. Yilmaz, I. Yilmaz, M. Uysal
Abstract:
The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth into the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, the (Free-air and Bouguer) gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) to determine the best-fitting global model for the study area at a regional scale in Turkey. The smallest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the Free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the smallest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicated that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
Keywords: free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity
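The model comparison reduces to computing the standard deviation and RMSE of the residuals between model-based and terrestrial anomalies, as in the brief sketch below; the anomaly arrays are hypothetical placeholders, not the EGM2008/EIGEN6C4/GECO or terrestrial values from the study.

```python
# Sketch: SD and RMSE of gravity anomaly residuals (model minus terrestrial), in mGal.
import numpy as np

def residual_stats(model_anomaly, terrestrial_anomaly):
    residuals = np.asarray(model_anomaly) - np.asarray(terrestrial_anomaly)
    sd = residuals.std(ddof=1)
    rmse = np.sqrt(np.mean(residuals**2))
    return sd, rmse

# Placeholder arrays standing in for Free-air anomalies at the land gravity stations
terrestrial = np.array([12.4, -3.1, 25.0, 8.7, -14.2])
egm2008 = np.array([10.9, -1.8, 27.5, 6.4, -12.0])
print(residual_stats(egm2008, terrestrial))   # (SD, RMSE) used to rank the global models
```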
Procedia PDF Downloads 167
4897 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa
Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka
Abstract:
Reliable information on future river flows is essential for the planning and management of any river system. For a data-scarce river system that has only river flow records, like the Waterval River, univariate time series models are appropriate for river flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied to forecast Waterval River flow using the GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced time series, different SARIMA models were identified, their parameters were estimated, and diagnostic checking of the model forecasts was performed using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike Information Criterion (AIC) and Hannan-Quinn Criterion (HQC), SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. Therefore, this model can be used to generate future river flow information for water resources development and management in the Waterval River system. The SARIMA model can also be used for forecasting other similar univariate time series with seasonal characteristics.
Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise
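The selected SARIMA (3, 0, 2) x (3, 1, 3)12 specification can be expressed compactly in code. The sketch below uses statsmodels' SARIMAX rather than GRETL, and the flow series file, column names and index frequency are assumed for illustration.

```python
# Sketch: fitting the selected SARIMA(3,0,2)(3,1,3)[12] model to mean monthly flows.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly series, 1960-2016, indexed by month start
flows = pd.read_csv("waterval_monthly_flow.csv", index_col="date", parse_dates=True)["flow"]
flows = flows.asfreq("MS")

model = SARIMAX(flows, order=(3, 0, 2), seasonal_order=(3, 1, 3, 12))
fit = model.fit(disp=False)
print(fit.aic, fit.hqic)                 # criteria used for model selection in the study

forecast = fit.get_forecast(steps=24)    # two years of monthly flow forecasts
print(forecast.predicted_mean.head())
```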
Procedia PDF Downloads 203
4896 Adapting Inclusive Residential Models to Match Universal Accessibility and Fire Protection
Authors: Patricia Huedo, Maria José Ruá, Raquel Agost-Felip
Abstract:
Ensuring the sustainable development of urban environments means guaranteeing adequate environmental conditions, being resilient, and meeting conditions of safety and inclusion for all people, regardless of their condition. All existing buildings should meet basic safety conditions and be equipped with safe and accessible routes, along with visual, acoustic and tactile signals to protect their users or potential visitors, regardless of whether they undergo rehabilitation or change-of-use processes. Moreover, from a social perspective, we consider the need to prioritize buildings occupied by the most vulnerable groups of people, which currently do not have specific regulations tailored to their needs. Some residential models in operation are not only outside the scope of application of the regulations in force; they also lack project or technical data that would make it possible to know the fire behavior of the construction materials. However, the difficulty and cost involved in adapting the entire building stock to current regulations can never justify a lack of safety for people. Hence, this work develops a simplified model to assess compliance with the basic safety conditions in case of fire and its compatibility with the specific accessibility needs of each user. The purpose is to support the designer in decision making, as well as to contribute to the development of a basic fire safety certification tool to be applied in inclusive residential models. This work has developed a methodology to support designers in adapting Social Services Centers, usually intended for vulnerable people. It incorporates a checklist of 9 items and information from sources or standards that designers can use to justify compliance or propose solutions. For each item, the verification system is justified, and possible sources of consultation are provided, considering the possibility of lacking technical documentation of construction systems or building materials. The procedure is based on diagnosing the degree of compliance with fire conditions of residential models used by vulnerable groups, considering the special accessibility conditions required by each user group. Through visual inspection and site surveying, the verification model can serve as a support tool, significantly streamlining and simplifying the diagnostic phase and reducing the number of tests to be requested by over 75%. To illustrate the methodology, two different buildings in the Valencian Region (Spain) have been selected. One case study is a mental health facility for residential purposes, located in a rural area on the outskirts of a small town; the other is a day care facility for individuals with intellectual disabilities, located in a medium-sized city. The comparison between the case studies allows the model to be validated under distinct conditions. Verifying compliance with a basic safety level could allow a quality seal and a public register of buildings adapted to fire regulations to be established, similarly to what is being done with other types of attributes, such as energy performance.
Keywords: fire safety, inclusive housing, universal accessibility, vulnerable people
Procedia PDF Downloads 22
4895 User-Perceived Quality Factors for Certification Model of Web-Based System
Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh
Abstract:
One of the most essential issues in software products is maintaining their relevancy to the dynamics of users' requirements and expectations. Many studies have been carried out on the quality aspects of software products to overcome these problems. Previous software quality assessment models and metrics have been introduced, with strengths and limitations. In order to enhance the assurance of and confidence in software products, certification models have been introduced and developed. From our previous experience in certification exercises and case studies in collaboration with several agencies in Malaysia, the requirements for a user-based software certification approach were identified and found to be in demand. The emergence of social network applications, new development approaches such as the agile method, and other varieties of software in the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics are identified through a brainstorming approach that includes researchers, users and experts in this area. The new paradigm in software quality assessment is the main focus of our research. This paper discusses the classifications of users in web-based software system assessment and their associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. The developments are beneficial and valuable for overcoming the constraints and improving the application of the software certification model in the future.
Keywords: software certification model, user centric approach, software quality factors, metrics and measurements, web-based system
Procedia PDF Downloads 405
4894 Numerical Investigation of a Spiral Bladed Tidal Turbine
Authors: Mohammad Fereidoonnezhad, Seán Leen, Stephen Nash, Patrick McGarry
Abstract:
From the perspective of research innovation, the tidal energy industry is still in its early stages. While a very small number of turbines have progressed to utility-scale deployment, blade breakage is commonly reported due to the enormous hydrodynamic loading applied to the devices. The aim of this study is the development of computer simulation technologies for the design of next-generation fibre-reinforced composite tidal turbines. This will require significant technical advances in the areas of tidal turbine testing and multi-scale computational modelling. The complex turbine blade profiles are designed to incorporate non-linear distributions of airfoil sections to optimize power output and self-starting capability while reducing power fluctuations. A number of candidate blade geometries are investigated, ranging from spiral geometries to parabolic geometries, with blades arranged in both cylindrical and spherical configurations on a vertical-axis turbine. A combined blade element theory (BET) start-up model is developed in MATLAB to perform computationally efficient parametric design optimisation for a range of turbine blade geometries. Finite element models are developed to identify optimal fibre-reinforced composite designs to increase blade strength and fatigue life. Advanced fluid-structure interaction analyses are also carried out to compute blade deflections following design optimisation.
Keywords: tidal turbine, composite materials, fluid-structure-interaction, start-up capability
Procedia PDF Downloads 121
4893 A Criterion to Minimize FE Mesh-Dependency in Concrete Plate Subjected to Impact Loading
Authors: Kwak, Hyo-Gyung, Gang, Han Gul
Abstract:
In the context of an increasing need for reliability and safety in concrete structures under blast and impact loading conditions, the behavior of concrete under high strain rate conditions has been an important issue. Since concrete subjected to impact loading associated with high strain rates shows quite different material behavior from that in the static state, several material models have been proposed and used to describe the high strain rate behavior under blast and impact loading. In the modelling process, mesh dependency of the finite element (FE) discretization is the key problem, because simulation results under high strain-rate conditions are quite sensitive to the applied FE mesh size. This means that the accuracy of the simulation results may depend strongly on the FE mesh size. This paper introduces an improved criterion which can minimize the mesh-dependency of simulation results on the basis of the fracture energy concept, and the HJC (Holmquist Johnson Cook), CSC (Continuous Surface Cap) and K&C (Karagozian & Case) models are examined to trace their relative sensitivity to the used FE mesh size. To coincide with the purpose of the penetration test of a concrete plate struck by a projectile (bullet), the residual velocities of the projectile after penetration are compared. The correlation studies between the analytical results and the associated parametric studies show that the variation of residual velocity with the used FE mesh size is greatly reduced by applying a unique failure strain value determined according to the proposed criterion.
Keywords: high strain rate concrete, penetration simulation, failure strain, mesh-dependency, fracture energy
Procedia PDF Downloads 518
4892 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, from which key markers of poverty and slums (roofing and road quality) are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship: eXplainable Artificial Intelligence through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
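The headline comparison in this abstract is a rank correlation between predicted and surveyed wealth. A minimal sketch of that evaluation step is given below; the prediction and ground-truth arrays are invented placeholders, not the study's data.

```python
# Sketch: rank correlation between predicted welfare scores and DHS wealth quintile ratings.
import numpy as np
from scipy.stats import spearmanr

# Placeholder values standing in for cluster-level wealth quintiles (ground truth)
# and the welfare scores produced by human readers or the deep learning model.
ground_truth = np.array([1, 3, 5, 2, 4, 2, 5, 1, 3, 4])
model_scores = np.array([1.2, 2.8, 4.6, 2.1, 3.9, 1.7, 4.9, 1.4, 3.2, 4.1])
reader_scores = np.array([2, 2, 4, 3, 3, 1, 5, 2, 2, 5])

rho_model, _ = spearmanr(ground_truth, model_scores)
rho_reader, _ = spearmanr(ground_truth, reader_scores)
print(f"model rho = {rho_model:.2f}, human reader rho = {rho_reader:.2f}")
```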
Procedia PDF Downloads 102
4891 Ideology versus Faith in the Collective Political Identity Formation: An Analysis of the Thoughts of Iqbal and Jinnah-The Founding Fathers of Pakistan
Authors: Muhammad Sajjad-ur-Rehman
Abstract:
Pakistan was meant to be a progressive, modern Muslim nation state from its inception in 1947. Its birth was a great hope for the Muslims of the Subcontinent to transform their societies along Islamic lines, the promise that made them unite and vote for Pakistan during the independence movement. This was the vision put forward by Allama Iqbal and Muhammad Ali Jinnah, the two founding fathers of Pakistan. Drawing on an interpretive/analytical approach, this paper analyzes the thoughts and reflections of Iqbal and Jinnah to understand the issues of collective identity formation in Pakistan. It argues that two distinct identity models may be traced in the thoughts and reflections of these two leading figures of the Pakistan movement: the first may be called the ‘faith-based identity model’, while the other may be named the ‘interests-based identity model’. These can also be entitled the ‘Islam-as-faith model’ and the ‘Islam-as-ideology model’. The former seeks the diffusion of power by cultural/faith-based means, and thus society remains independent in determining its change, while the latter goes on to open and expand the power realm by maximizing the role of the state in determining social change. With the help of these models, it can be better explained what made Pakistani society fail in collective political identity construction, thus hindering the political potential of the society from being utilized for initiating state formation and societal growth. As a result, today we see a state that is often rebelled against and resisted in the name of ethnicity, religion and sectarianism on the one hand, and by ordinary folk whenever and wherever possible on the other.
Keywords: ideology, Iqbal, Jinnah, identity
Procedia PDF Downloads 6
4890 Treatment of Healthcare Wastewater Using The Peroxi-Photoelectrocoagulation Process: Predictive Models for Chemical Oxygen Demand, Color Removal, and Electrical Energy Consumption
Authors: Samuel Fekadu A., Esayas Alemayehu B., Bultum Oljira D., Seid Tiku D., Dessalegn Dadi D., Bart Van Der Bruggen A.
Abstract:
The peroxi-photoelectrocoagulation process was evaluated for the removal of chemical oxygen demand (COD) and color from healthcare wastewater. A 2-level full factorial design with center points was created to investigate the effect of the process parameters, i.e., initial COD, H₂O₂, pH, reaction time and current density. Furthermore, the total energy consumption and average current efficiency of the system were evaluated. Predictive models for % COD removal, % color removal and energy consumption were obtained. The initial COD and pH were found to be the most significant variables in the reduction of COD and color in the peroxi-photoelectrocoagulation process. Hydrogen peroxide has a significant effect on the treated wastewater only when combined with other input variables in the process, such as pH, reaction time and current density. In the peroxi-photoelectrocoagulation process, current density appears not as a single effect but rather as an interaction effect with H₂O₂ in reducing COD and color. Lower energy expenditure was observed at higher initial COD, shorter reaction time and lower current density. The average current efficiency was found to be as low as 13% and as high as 777%. Overall, the study showed that hybrid electrochemical oxidation can be applied effectively and efficiently for the removal of pollutants from healthcare wastewater.
Keywords: electrochemical oxidation, UV, healthcare pollutants removals, factorial design
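A 2-level factorial design with center points like the one described is usually analysed with a regression model containing main effects and two-factor interactions. The sketch below shows such a fit with statsmodels; the coded factor names and the response file are assumptions for illustration, not the study's dataset.

```python
# Sketch: fitting a factorial model with main effects and two-factor interactions
# for % COD removal from a 2-level design with center points.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical design table: coded levels (-1/0/+1) for each factor plus the measured response.
runs = pd.read_csv("factorial_runs.csv")  # columns: cod0, h2o2, ph, time, current, cod_removal

model = smf.ols(
    "cod_removal ~ (cod0 + h2o2 + ph + time + current) ** 2",  # main effects + 2-way interactions
    data=runs,
).fit()
print(model.summary())   # significant terms (e.g. a current:h2o2 interaction) stand out here
```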
Procedia PDF Downloads 77
4889 An Interactive Voice Response Storytelling Model for Learning Entrepreneurial Mindsets in Media Dark Zones
Authors: Vineesh Amin, Ananya Agrawal
Abstract:
In a prolonged period of uncertainty and disruption to the previously normal order, non-cognitive skills, especially entrepreneurial mindsets, have become a pillar that can reform educational models to inform the economy. Dreamverse Learning Lab's IVR-based storytelling program, Call-a-Kahaani, is an evolving experiment with the aim of kindling entrepreneurial mindsets in the remotest locations of India in an accessible and engaging manner. At the heart of this experiment is the belief that at every phase in our life's story, we have a choice which brings us closer to achieving our true potential. This interactive program is thus designed using real-time storytelling principles to empower learners, ages 24 and below, to make choices and take decisions as they become more self-aware, practice grit, and try new things through stories, guided activities, and interactions, simply over a phone call. This research paper highlights the framework behind an ongoing, scalable, data-oriented, low-tech program to kindle entrepreneurial mindsets in media dark zones, supported by iterative design and prototyping, which has reached 13,700+ unique learners who made 59,000+ calls with a listening duration of 183,900+ minutes for content pieces of around 3 to 4 minutes, with the last monitored (March 2022) record of 34% serious listenership, within one and a half years of its inception. The paper provides an in-depth account of the technical development, content creation, learning, and assessment frameworks, as well as the mobilization models which have been leveraged to build this end-to-end system.
Keywords: non-cognitive skills, entrepreneurial mindsets, speech interface, remote learning, storytelling
Procedia PDF Downloads 208
4888 Unlocking Green Hydrogen Potential: A Machine Learning-Based Assessment
Authors: Said Alshukri, Mazhar Hussain Malik
Abstract:
Green hydrogen is hydrogen produced using renewable energy sources. In the last few years, Oman has aimed to reduce its dependency on fossil fuels. Recently, the hydrogen economy has become a global trend, and many countries have started to investigate the feasibility of implementing this sector. Oman created an alliance to establish the policy and rules for this sector. With motivation coming from both global and local interest in green hydrogen, this paper investigates the potential of producing hydrogen from wind and solar energy at three different locations in Oman, namely Duqm, Salalah, and Sohar. Using the machine learning-based software WEKA and local meteorological data, the project was designed to determine which location has the highest wind and solar energy potential. First, various supervised models were tested to obtain their prediction accuracy, and it was found that the Random Forest (RF) model has the best prediction performance. The RF model was applied to 2021 meteorological data for each location, and the results indicated that Duqm has the highest wind and solar energy potential. A system of one wind turbine in Duqm can produce 8335 MWh/year, which could be utilized in the water electrolysis process to produce 88847 kg of hydrogen mass, while a solar system consisting of 2820 solar cells is estimated to produce 1666.223 MWh/year, which is capable of producing 177591 kg of hydrogen mass.
Keywords: green hydrogen, machine learning, wind and solar energies, WEKA, supervised models, random forest
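The supervised-model comparison step can be sketched as follows; scikit-learn's RandomForestRegressor is used here instead of WEKA, and the meteorological feature names, target column and data file are assumptions for illustration.

```python
# Sketch: comparing supervised regressors on meteorological data to estimate energy potential.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical hourly records for one site: wind speed, temperature, irradiance, humidity -> power output
data = pd.read_csv("duqm_2021_weather.csv")
X = data[["wind_speed", "temperature", "irradiance", "humidity"]]
y = data["power_output"]

for name, model in [("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("linear regression", LinearRegression())]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(name, scores.mean())   # the best-scoring model is then applied to each location
```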
Procedia PDF Downloads 78
4887 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale
Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize
Abstract:
Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study consisted of 60 soil samples from a maize farm, divided into four different treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Devices (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum removal procedure was implemented in the R programming language to enhance the absorption features of soil moisture and to precisely understand its spectral behavior at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied to various fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by various factors, such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems. The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R² and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy
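The PLSR step relating reflectance spectra to measured soil moisture can be sketched briefly. scikit-learn is used here in place of the R workflow described above, and the spectra matrix, moisture vector, loader function and number of latent components are assumptions.

```python
# Sketch: PLSR linking (continuum-removed) reflectance spectra to soil moisture content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# spectra: (60 samples, 2151 bands from 350-2500 nm); moisture: measured water content
spectra, moisture = load_soil_spectra()   # hypothetical loader

X_train, X_test, y_train, y_test = train_test_split(spectra, moisture, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=5).fit(X_train, y_train)

y_pred = pls.predict(X_test).ravel()
print("R2 =", r2_score(y_test, y_pred))
print("RMSE =", np.sqrt(mean_squared_error(y_test, y_pred)))
```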
Procedia PDF Downloads 98
4886 Primary Analysis of a Randomized Controlled Trial of Topical Analgesia Post Haemorrhoidectomy
Authors: James Jin, Weisi Xia, Runzhe Gao, Alain Vandal, Darren Svirkis, Andrew Hill
Abstract:
Background: Post-haemorrhoidectomy pain is a major concern for patients and clinicians, and minimizing post-operative pain is of high clinical interest. Combinations of topical creams targeting three hypothesised post-haemorrhoidectomy pain mechanisms were developed, and their effectiveness was evaluated. Specifically, a multi-centred, double-blinded randomized clinical trial (RCT) was conducted in adults undergoing excisional haemorrhoidectomy. The primary analysis was conducted on the collected data to evaluate the effectiveness of the combinations of topical creams targeting the three hypothesized pain mechanisms after the operation. Methods: 192 patients were randomly allocated to 4 arms (48 patients per arm), and the arms were provided with topical cream containing 10% metronidazole (M), M with 2% diltiazem (MD), M with 4% lidocaine (ML), or MDL, respectively. Patients were instructed to apply the topical treatments three times a day for 7 days and to record outcomes for 14 days after the operation. The primary outcome was the VAS pain score on day 4. Covariates and models were selected in the blind review stage. Multiple imputation was applied to handle missing data. Linear and generalized linear mixed-effects models (LMER, GLMER), together with natural splines, were applied. Sandwich estimators and Wald statistics were used. P-values < 0.05 were considered significant. Conclusions: The addition of topical lidocaine or diltiazem to metronidazole does not add any benefit. ML had significantly better pain and recovery scores than the combination MDL. Multimodal topical analgesia with ML after haemorrhoidectomy could be considered for further evaluation. Further trials considering only 3 arms (M, ML, MD) might be worth exploring.
Keywords: RCT, primary analysis, multiple imputation, pain scores, haemorrhoidectomy, analgesia, lmer
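One way to picture the LMER part of this analysis plan is the sketch below, which fits a linear mixed-effects model of repeated daily VAS pain scores with a random intercept per patient. statsmodels stands in for the R lme4 workflow implied by the keyword "lmer", and the data frame, column names and model terms are hypothetical simplifications (the splines, multiple imputation and sandwich variance steps are omitted).

```python
# Sketch: linear mixed-effects model of daily VAS pain by treatment arm with patient random intercepts.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per day
# columns: patient_id, arm (M, MD, ML, MDL), day (1-14), vas_pain
long = pd.read_csv("vas_long.csv")

model = smf.mixedlm(
    "vas_pain ~ C(arm, Treatment(reference='M')) + day",
    data=long,
    groups=long["patient_id"],
)
fit = model.fit()
print(fit.summary())   # arm contrasts correspond to the treatment comparisons of interest
```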
Procedia PDF Downloads 116
4885 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. In this study, we extracted samples coding for IBS from the UK Biobank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS. In our case, we used XGBoost feature importance as a feature selection method. We applied different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression and XGBoost algorithms yielded a diagnosis of IBS with an optimal accuracy of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
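The model-comparison and feature-importance steps can be sketched with the XGBoost scikit-learn interface, as below; the comorbidity matrix, label vector, loader function and hyperparameters are assumptions for illustration rather than the study's configuration.

```python
# Sketch: XGBoost classifier on a binary comorbidity matrix, with feature-importance-based selection.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_patients, n_comorbidity_codes) 0/1 numpy matrix; y: 1 = IBS, 0 = non-IBS
X, y, code_names = load_comorbidity_matrix()   # hypothetical loader

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Keep the top 10% of features by importance and refit, mirroring the abstract's selection step
k = max(1, X.shape[1] // 10)
top = np.argsort(clf.feature_importances_)[::-1][:k]
clf_top = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="logloss")
clf_top.fit(X_train[:, top], y_train)
print("accuracy (top features):", accuracy_score(y_test, clf_top.predict(X_test[:, top])))
```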
Procedia PDF Downloads 117
4884 Dynamic Process Model for Designing Smart Spaces Based on Context-Awareness and Computational Methods Principles
Authors: Heba M. Jahin, Ali F. Bakr, Zeyad T. Elsayad
Abstract:
A smart space can be defined as any working environment that integrates embedded computers, information appliances and multi-modal sensors while remaining focused on the interaction between the users, their activity, and their behavior in the space. A smart space must therefore be aware of its context and automatically adapt to changing context by interacting with its physical environment through natural and multimodal interfaces, and by serving information proactively. This paper suggests a dynamic framework for the architectural design process of such a space, based on computational methods and context-awareness principles, to help create a field of changes and modifications; it generates possibilities and addresses the physical, structural and user contexts. The framework is concerned with five main processes: gathering and analyzing data to generate smart design scenarios, parameters, and attributes; transforming these by coding into four types of models; connecting those models together in an interaction model that represents the context-awareness system; transforming that model into a virtual and ambient environment that represents the physical, real environment and acts as a linkage between the users and the activities taking place in the smart space; and, finally, a feedback phase from the users of that environment to ensure that the design of the smart space fulfills their needs. The resulting design process will help in designing smart spaces that can be adapted and controlled to answer the users’ defined goals, needs, and activities.Keywords: computational methods, context-awareness, design process, smart spaces
Procedia PDF Downloads 330
4883 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities
Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun
Abstract:
The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. IoT devices equipped with infrared cameras collect thermal images and household EV charging profiles from the database of the Thailand utility and transmit the data to a cloud database for comprehensive analysis. The methodology includes advanced deep learning techniques, namely Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) algorithms, together with feature-based Gaussian mixture models for EV load profiling and event detection; this combination helps identify unique power consumption patterns among EV owners. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues, categorizes the severity of detected problems based on a health index for each charging device, and outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities, ensuring operational safety and efficiency while promoting sustainable energy management. In summary, integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy, contributing to sustainable energy management in smart cities.Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids
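A hedged sketch, on synthetic data, of the LSTM-based anomaly flagging idea described above: the network learns a normal daily charging profile and time steps whose one-step prediction error exceeds a percentile threshold are flagged. The profile shape, the injected spike and the threshold are hypothetical; the thermal-image fusion and the Gaussian-mixture load-profiling stage of the real system are not reproduced here.

```python
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
T = 48                                            # half-hourly samples per day (assumed)
normal = np.sin(np.linspace(0, 2 * np.pi, T))[None, :] + rng.normal(0, 0.05, (200, T))
X = normal[:, :-1, None]                          # past samples
y = normal[:, 1:, None]                           # one-step-ahead targets

# Small LSTM trained to predict the next load sample from the past sequence.
model = models.Sequential([
    layers.LSTM(32, return_sequences=True, input_shape=(T - 1, 1)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

# Score a new day: inject an abnormal charging spike and flag large errors.
test = normal[0].copy()
test[30:34] += 1.5
err = np.abs(model.predict(test[None, :-1, None], verbose=0)[:, :, 0] - test[1:])[0]
threshold = np.percentile(err, 95)                # hypothetical severity cut-off
print("anomalous time steps:", np.where(err > threshold)[0])
```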
Procedia PDF Downloads 63
4882 Load Balancing Technique for Energy - Efficiency in Cloud Computing
Authors: Rani Danavath, V. B. Narsimha
Abstract:
Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction; this cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes so that no single node is overloaded, and it helps in optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing; this motivates new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead, service response time and improving performance, but none of them have considered energy consumption and carbon emission. In this paper we therefore introduce a load balancing technique for energy efficiency. This technique can be used to improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent, which will help to achieve green computing.Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission
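The abstract does not specify its scheduling algorithm, so the following is only an illustrative sketch of the general load-balancing idea it describes: a greedy "least-loaded node" assignment combined with a toy linear power model, to make the link between balanced utilization and energy consumption concrete. All class names, coefficients and the energy formula are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: float                    # normalized CPU capacity
    load: float = 0.0
    tasks: list = field(default_factory=list)

    def utilization(self) -> float:
        return self.load / self.capacity

def assign(nodes, task_name, task_load):
    # Place each incoming task on the node with the lowest current utilization,
    # so that no single node becomes overloaded.
    target = min(nodes, key=Node.utilization)
    target.load += task_load
    target.tasks.append(task_name)
    return target

def energy_watts(node, idle_w=100.0, peak_w=250.0):
    # Toy linear power model: idle power plus a utilization-dependent share.
    return idle_w + (peak_w - idle_w) * min(node.utilization(), 1.0)

nodes = [Node("n1", 1.0), Node("n2", 1.0), Node("n3", 2.0)]
for i, load in enumerate([0.3, 0.5, 0.2, 0.4, 0.6]):
    assign(nodes, f"task{i}", load)
for n in nodes:
    print(n.name, "util:", round(n.utilization(), 2), "power:", f"{energy_watts(n):.0f} W")
```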
Procedia PDF Downloads 449
4881 The Improvement of Turbulent Heat Flux Parameterizations in Tropical GCMs Simulations Using Low Wind Speed Excess Resistance Parameter
Authors: M. O. Adeniyi, R. T. Akinnubi
Abstract:
The parameterization of turbulent heat fluxes is needed for modeling land-atmosphere interactions in Global Climate Models (GCMs). However, current GCMs still have difficulties producing reliable turbulent heat fluxes for humid tropical regions, which may be due to inadequate parameterization of the roughness lengths for momentum (z0m) and heat (z0h) transfer. These roughness lengths are usually expressed in terms of an excess resistance factor (κB^(-1)), which is used to account for the different resistances to momentum and heat transfer. In this paper, a more appropriate excess resistance factor (κB^(-1)) suitable for low wind speed conditions was developed and incorporated into the aerodynamic resistance approach (ARA) in the GCMs. The performance of various standard GCM κB^(-1) schemes developed for high wind speed conditions was also assessed. Based on in-situ surface heat fluxes and profile measurements of wind speed and temperature from the Nigeria Micrometeorological Experimental site (NIMEX), a new κB^(-1) was derived through application of the Monin-Obukhov similarity theory and the Brutsaert theoretical model for heat transfer. Turbulent flux parameterization with this new formula provides better estimates of heat fluxes than those obtained using existing GCM κB^(-1) schemes. For the derived κB^(-1), the MBE and RMSE of the parameterized QH ranged from -1.15 to -5.10 W m^(-2) and 10.01 to 23.47 W m^(-2), while those of QE ranged from -8.02 to 6.11 W m^(-2) and 14.01 to 18.11 W m^(-2), respectively. The derived κB^(-1) gave better estimates of QH than of QE during daytime. The derived scheme is κB^(-1) = 6.66 Re*^(0.02) - 5.47, where Re* is the Reynolds number. The derived κB^(-1) scheme, which corrects a well documented large overestimation of turbulent heat fluxes, is therefore recommended for most regional models within the tropics where low wind speed is prevalent.Keywords: humid, tropic, excess resistance factor, overestimation, turbulent heat fluxes
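A minimal sketch evaluating the derived excess resistance scheme, κB^(-1) = 6.66 Re*^(0.02) - 5.47, and the heat roughness length it implies through the standard relation κB^(-1) = ln(z0m / z0h). The sample friction velocities, z0m and air viscosity below are assumed values for illustration, not NIMEX data.

```python
import numpy as np

NU = 1.5e-5  # kinematic viscosity of air (m^2 s^-1), assumed constant

def kb_inverse(u_star, z0m, nu=NU):
    """Derived low-wind-speed scheme: kB^(-1) = 6.66 Re*^0.02 - 5.47, Re* = u* z0m / nu."""
    re_star = u_star * z0m / nu
    return 6.66 * re_star**0.02 - 5.47

def z0h_from_kb(z0m, kb_inv):
    """Heat roughness length from the standard definition kB^(-1) = ln(z0m / z0h)."""
    return z0m / np.exp(kb_inv)

u_star = np.array([0.05, 0.10, 0.20])   # low-wind-speed friction velocities (m s^-1), assumed
z0m = 0.01                               # momentum roughness length (m), assumed
kb = kb_inverse(u_star, z0m)
print("kB^(-1):", np.round(kb, 3))
print("z0h (m):", z0h_from_kb(z0m, kb))
```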
Procedia PDF Downloads 200
4880 The System Dynamics Research of China-Africa Trade, Investment and Economic Growth
Authors: Emma Serwaa Obobisaa, Haibo Chen
Abstract:
International trade and outward foreign direct investment are important factors that are generally recognized in economic growth and development. Although several scholars have sought to reveal the influence of trade and outward foreign direct investment (FDI) on economic growth, most studies utilized common econometric models such as vector autoregression and aggregated the variables, which for the most part yields contradictory and mixed results. Thus, there is an exigent need for a precise study of the effect of trade and FDI on economic growth that applies strong econometric models and disaggregates the variables into their separate individual components to explicate their respective effects on economic growth. This will guarantee the provision of policies and strategies that are geared towards individual variables to ensure sustainable development and growth. This study, therefore, seeks to examine the causal effect of China-Africa trade and outward foreign direct investment on the economic growth of Africa using a robust and recent econometric approach, the system dynamics model. Our study tests an ensemble of vital variables predominant in recent studies on trade-FDI-economic growth causality: foreign direct investment, international trade, and economic growth. Our results showed that the system dynamics method provides more accurate statistical inference regarding the direction of the causality among the variables than conventional methods such as OLS and Granger causality, which are predominantly used in the literature, as it is more robust and provides accurate critical values.Keywords: economic growth, outward foreign direct investment, system dynamics model, international trade
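An illustrative stock-and-flow sketch only: the abstract does not publish its model equations, so this toy system dynamics loop (a GDP stock fed by trade- and FDI-driven flows, with feedback from GDP back to trade and FDI) is meant to show the general simulation style, not the authors' calibrated model. All coefficients and initial values are hypothetical.

```python
import numpy as np

def simulate(years=20, dt=0.25, gdp0=100.0, trade0=20.0, fdi0=5.0):
    """Euler-integrate a toy three-stock system: GDP, trade volume, FDI stock."""
    steps = int(years / dt)
    gdp, trade, fdi = gdp0, trade0, fdi0
    history = []
    for t in range(steps):
        # Flows (hypothetical coefficients): trade and FDI add to GDP growth,
        # while GDP growth in turn expands trade and attracts FDI (feedback loops).
        gdp_inflow = 0.02 * gdp + 0.10 * trade + 0.15 * fdi
        trade_inflow = 0.0003 * gdp
        fdi_inflow = 0.0001 * gdp
        gdp += gdp_inflow * dt
        trade += trade_inflow * dt
        fdi += fdi_inflow * dt
        history.append((t * dt, gdp, trade, fdi))
    return np.array(history)

out = simulate()
print("final GDP, trade, FDI:", np.round(out[-1, 1:], 2))
```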
Procedia PDF Downloads 103