Search results for: reduced order macro models
22035 The Use of AI to Measure Gross National Happiness
Authors: Riona Dighe
Abstract:
This research identifies an alternative approach to the measurement of Gross National Happiness (GNH). It uses artificial intelligence (AI), incorporating natural language processing (NLP) and sentiment analysis, to measure GNH. We use ‘off the shelf’ NLP models that perform sentence-level sentiment analysis as the building block for this research. We constructed an algorithm that applies these NLP models to derive a sentiment score for each sentence. The algorithm was then tested against a sample of 20 respondents, and the scores it generated resembled human responses. Using an MLP classifier, a decision tree, a linear model, and K-nearest neighbors, we obtained test accuracies of 89.97%, 54.63%, 52.13%, and 47.9%, respectively. This gave us the confidence to apply the NLP models to sentences on websites to measure the GNH of a country.
Keywords: artificial intelligence, NLP, sentiment analysis, gross national happiness
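The sentence-level scoring building block the abstract describes can be sketched as follows. This is a minimal, hypothetical lexicon-based stand-in for the ‘off the shelf’ NLP models; the word lists and scoring scale are illustrative assumptions, not the authors' setup.

```python
# Minimal lexicon-based sentiment scorer (illustrative assumption, not
# the study's actual NLP models). Word lists are hypothetical.

POSITIVE = {"happy", "good", "great", "joy", "love", "excellent"}
NEGATIVE = {"sad", "bad", "terrible", "angry", "hate", "poor"}

def sentiment_score(sentence: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)
```

A real system would replace the lexicon with a trained sentiment model, but the interface (sentence in, score out) is the same one the abstract's algorithm builds on.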
Procedia PDF Downloads 124
22034 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and is now a popular research area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy, reliable grid systems. Effective forecasting of renewable energy load enables decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying layer count, cell count, and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (mean absolute error) and MSE (mean squared error).
The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models compares favourably with results reported in the literature.
Keywords: deep learning, long short term memory, energy, renewable energy load forecasting
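The abstract ranks its 432 candidate models by MAE and MSE. Those two error measures can be sketched as below (a NumPy illustration, not the authors' code):

```python
import numpy as np

def mae(actual, predicted):
    """Mean absolute error between measured and forecast loads."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(a - p)))

def mse(actual, predicted):
    """Mean squared error, which penalizes large forecast misses more."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean((a - p) ** 2))
```

MAE weights all misses equally, while MSE squares them, which is why a model ranking can differ between the two metrics.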
Procedia PDF Downloads 267
22033 An Association between Stock Index and Macro Economic Variables in Bangladesh
Authors: Shamil Mardi Al Islam, Zaima Ahmed
Abstract:
The aim of this article is to explore whether certain macroeconomic variables, such as the industrial index, inflation, broad money, the exchange rate, and the deposit rate as a proxy for the interest rate, are interlinked with the Dhaka stock price index (DSEX), precisely after the introduction of the new index by the Dhaka Stock Exchange (DSE) in January 2013. The Bangladesh stock market has experienced rapid growth since its inception. It might not be a very well-developed capital market compared to its neighboring counterparts, but it has been a strong avenue for investment and resource mobilization. The data set consists of monthly observations over a period of four years, from January 2013 to June 2018. Findings from cointegration analysis suggest that the DSEX and the macroeconomic variables have a significant long-run relationship. Variance decomposition based on the estimated VAR indicates that money supply explains a significant portion of the variation in the stock index, whereas inflation is found to have the least impact. The industrial index is found to have a low impact compared to the exchange rate and the deposit rate. Policies should therefore aim to increase industrial production in order to enhance stock market performance. Furthermore, a reasonable money supply should be ensured by the authorities to stimulate stock market performance.
Keywords: deposit rate, DSEX, industrial index, VAR
Procedia PDF Downloads 164
22032 Predict Suspended Sediment Concentration Using Artificial Neural Networks Technique: Case Study Oued El Abiod Watershed, Algeria
Authors: Adel Bougamouza, Boualam Remini, Abd El Hadi Ammari, Feteh Sakhraoui
Abstract:
The assessment of the sediment load carried by a river is important for the planning and design of various water resources projects. In this study, artificial neural network techniques are used to estimate the daily suspended sediment concentration from the corresponding daily discharge flow upstream of the Foum El Gherza dam, Biskra, Algeria. FFNN, GRNN, and RBNN models are established for estimating suspended sediment values. Statistics including RMSE and R² were used to evaluate the performance of the applied models. The comparison of the three AI models showed that the RBNN model performed better than the FFNN and GRNN models, with R² = 0.967 and RMSE = 5.313 mg/l. The ANN approach is therefore capable of modeling the nonlinear relationship between discharge flow and suspended sediment with reasonable precision.
Keywords: artificial neural network, Oued Abiod watershed, feedforward network, generalized regression network, radial basis network, sediment concentration
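As a sketch of the radial basis network idea, together with the RMSE and R² metrics used above, one can fit the output weights of a Gaussian RBF layer by least squares. This is an illustrative toy with assumed centers and width, not the authors' RBNN implementation.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial basis functions evaluated at each input point."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(x, y, centers, width):
    """Fit output-layer weights of an RBF network by linear least squares."""
    phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r2(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1 - ss_res / ss_tot)
```

In the study itself the input would be discharge flow and the target suspended sediment concentration; here the fit is shown on a synthetic smooth curve only to demonstrate the mechanics.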
Procedia PDF Downloads 420
22031 Animal Models of Surgical or Other External Causes of Trauma Wound Infection
Authors: Ojoniyi Oluwafeyekikunmi Okiki
Abstract:
Notwithstanding advances in traumatic wound care and control, infections remain a major cause of mortality, morbidity, and financial disruption for tens of millions of wound sufferers around the world. Animal models have become popular tools for studying a wide variety of external traumatic wound infections and for testing new antimicrobial strategies. This review covers experimental infections in animal models of surgical wounds, skin abrasions, burns, lacerations, excisional wounds, and open fractures. Animal models of external traumatic wound infections reported by different investigators vary in the animal species used, the microorganism strains, the quantity of microorganisms applied, the size of the wounds, and, for burn infections, the length of time the heated object or liquid is in contact with the skin. As antibiotic resistance continues to grow, new antimicrobial approaches are urgently needed. These should be tested using standard protocols for infections in external traumatic wounds in animal models.
Keywords: surgical wounds, animals, wound infections, burns, wound models, colony-forming units, lacerated wounds
Procedia PDF Downloads 14
22030 Finite Element Study of Coke Shape Deep Beam to Column Moment Connection Subjected to Cyclic Loading
Authors: Robel Wondimu Alemayehu, Sihwa Jung, Manwoo Park, Young K. Ju
Abstract:
Following the aftermath of the 1994 Northridge earthquake, intensive research on beam-to-column connections was conducted, leading to the current design basis. The current design codes require the use of either a prequalified connection or a connection that passes the requirements of a large-scale cyclic qualification test prior to use in intermediate or special moment frames. The second alternative is expensive in terms of both money and time. On the other hand, the maximum beam depth in most of the prequalified connections is limited to 900 mm due to the reduced rotation capacity of deeper beams. However, for long-span beams, the need to use deeper beams may arise. In this study, a beam-to-column connection detail suitable for deep beams is presented. The connection detail comprises a thicker, tapered beam flange adjacent to the beam-to-column connection. Within the thicker-tapered flange region, two reduced beam sections are provided with the objective of forming two plastic hinges within this region. In addition, the length, width, and thickness of the tapered-thicker flange region are proportioned in such a way that a third plastic hinge forms at its end. As a result, the total rotation demand is distributed over three plastic zones, making the detail suitable for deeper beams that have lower rotation capacity at a single plastic hinge. The effectiveness of this connection detail is studied through finite element analysis. For the study, a beam with a depth of 1200 mm is used. Additionally, a comparison with the welded unreinforced flange-welded web (WUF-W) moment connection and the reduced beam section moment connection is made. The results show that the rotation capacity of a WUF-W moment connection is increased from 2.0% to 2.2% by applying the proposed moment connection detail.
Furthermore, the maximum moment capacity, energy dissipation capacity, and stiffness of the WUF-W moment connection are increased by up to 58%, 49%, and 32%, respectively. In contrast, applying the reduced beam section detail to the same WUF-W moment connection reduced the rotation capacity from 2.0% to 1.50%, while the maximum moment capacity and stiffness of the connection were reduced by 22% and 6%, respectively. The proposed connection develops three plastic hinge regions as intended, and it shows improved performance compared to both the WUF-W moment connection and the reduced beam section moment connection. Moreover, the achieved rotation capacity satisfies the minimum required for use in intermediate moment frames.
Keywords: connections, finite element analysis, seismic design, steel intermediate moment frame
Procedia PDF Downloads 166
22029 Characterization of Heterotrimeric G Protein α Subunit in Tomato
Authors: Thi Thao Ninh, Yuri Trusov, José Ramón Botella
Abstract:
Heterotrimeric G proteins, composed of three subunits (α, β, and γ), are involved in signal transduction pathways that mediate a vast number of processes across the eukaryotic kingdom. Twenty-three Gα subunits are present in humans, whereas most plant genomes encode only one canonical Gα. The disparity observed between Arabidopsis, rice, and maize Gα-deficient mutant phenotypes suggests that Gα functions diversified between eudicots and monocots during evolution. Alternatively, since the only Gα mutations available in dicots have been produced in Arabidopsis, the possibility exists that this species might be an exception to the rule. In order to test this hypothesis, we studied the G protein α subunit (TGA1) in tomato. Four tga1 knockout lines were generated in the tomato cultivar Moneymaker using CRISPR/Cas9. The tga1 mutants exhibit a number of auxin-related phenotypes, including changes in leaf shape and reduced plant height, fruit size, and number of seeds per fruit. In addition, tga1 mutants show increased sensitivity to abscisic acid during seed germination and reduced sensitivity to exogenous auxin during adventitious root formation from cotyledons and excised hypocotyl explants. Our results suggest that Gα mutant phenotypes in tomato are very similar to those observed in monocots, i.e., rice and maize, and cast doubt on the validity of using Arabidopsis as a model system for plant G protein studies.
Keywords: auxin-related phenotypes, CRISPR/Cas9, G protein α subunit, heterotrimeric G proteins, tomato
Procedia PDF Downloads 137
22028 Probabilistic Models to Evaluate Seismic Liquefaction in Gravelly Soil Using Dynamic Penetration Test and Shear Wave Velocity
Authors: Nima Pirhadi, Shao Yong Bo, Xusheng Wan, Jianguo Lu, Jilei Hu
Abstract:
Gravels and gravelly soils are often assumed to be non-liquefiable because of their high conductivity and small modulus; however, the occurrence of liquefaction in these soils during some historical earthquakes, especially the recent 2008 Wenchuan (Mw = 7.9), 2014 Cephalonia, Greece (Mw = 6.1), and 2016 Kaikoura, New Zealand (Mw = 7.8) events, has prompted essential consideration of risk assessment and hazard analysis for seismic gravelly soil liquefaction. Due to limitations in sampling and laboratory testing of this type of soil, in situ tests and site exploration of case histories are the most accepted procedures. Of all in situ tests, the dynamic penetration test (DPT), well known as the Chinese dynamic penetration test, and the shear wave velocity (Vs) test have demonstrated high performance in evaluating seismic gravelly soil liquefaction. However, the lack of a sufficient number of case histories imposes an essential limitation on developing new models. This study first investigates recent earthquakes that caused liquefaction in gravelly soils to collect new data. It then adds these data to the dataset available in the literature to extend it and finally develops new models to assess seismic gravelly soil liquefaction. To validate the presented models, their results are compared with those of other available models. The results show the reasonable performance of the proposed models and the critical effect of gravel content (GC%) on the assessment.
Keywords: liquefaction, gravel, dynamic penetration test, shear wave velocity
Procedia PDF Downloads 201
22027 Transverse Vibration of Elastic Beam Resting on Variable Elastic Foundation Subjected to Moving Load
Authors: Idowu Ibikunle Albert, Atilade Adesanya Oluwafemi, Okedeyi Abiodun Sikiru, Mustapha Rilwan Adewale
Abstract:
In the present day, all areas of transport have experienced large advances, characterized by increases in the speed and weight of vehicles. As a result, this paper considers the transverse vibration of an elastic beam resting on a variable elastic foundation subjected to a moving load. The beam is presumed to be uniform and simply supported at both ends. The moving distributed mass is assumed to move with constant velocity. The governing equations, which are fourth-order partial differential equations, were reduced to a set of second-order differential equations using an analytical method in terms of a series solution and solved by a numerical method using mathematical software (Maple). Results show that an increase in the values of the beam parameters, moving mass M, and foundation stiffness K significantly reduces the deflection profile of the vibrating beam. It was equally found that the response to a moving mass is greater than that to a moving force.
Keywords: elastic beam, moving load, response of structure, variable elastic foundation
Procedia PDF Downloads 122
22026 Predictive Models for Compressive Strength of High Performance Fly Ash Cement Concrete for Pavements
Authors: S. M. Gupta, Vanita Aggarwal, Som Nath Sachdeva
Abstract:
This paper reports an experimental study on high performance concrete (HPC) with superplasticizer, conducted with the aim of developing models suitable for predicting the compressive strength of HPC mixes. In this study, the effect of varying proportions of fly ash (0% to 50% in 10% increments) on the compressive strength of high performance concrete has been evaluated. The mix designs studied were M30, M40, and M50, to compare the effect of fly ash addition on the properties of these concrete mixes. In all, eighteen concrete mixes were designed: three as conventional concretes for the three grades under discussion and fifteen as HPC with varying percentages of fly ash. The concrete mix design was done in accordance with the Indian standard recommended guidelines, IS: 10262. All the concrete mixes were studied in terms of compressive strength at 7, 28, 90, and 365 days. All the materials used were kept the same throughout the study to allow a direct comparison of results. The models for compressive strength prediction were developed using the linear regression (LR) method, artificial neural networks (ANN), and leave-one-out validation (LOOV).
Keywords: high performance concrete, fly ash, concrete mixes, compressive strength, strength prediction models, linear regression, ANN
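The leave-one-out validation mentioned in the abstract can be sketched for a least-squares linear model: each observation is held out in turn, the model is refit on the rest, and the held-out prediction error is averaged. A minimal NumPy sketch, not the authors' implementation:

```python
import numpy as np

def loocv_mse(X, y):
    """Leave-one-out cross-validation MSE for a least-squares linear model.

    X is an (n, p) design matrix (include a column of ones for an
    intercept); y is the vector of observed strengths.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    errors = []
    for i in range(n):
        mask = np.arange(n) != i
        # Refit on all rows except i, then score the held-out row
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errors.append((y[i] - X[i] @ beta) ** 2)
    return float(np.mean(errors))
```

With only eighteen mixes, leave-one-out is a natural choice because it wastes no data on a separate test split.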
Procedia PDF Downloads 446
22025 Evaluating the Suitability and Performance of Dynamic Modulus Predictive Models for North Dakota’s Asphalt Mixtures
Authors: Duncan Oteki, Andebut Yeneneh, Daba Gedafa, Nabil Suleiman
Abstract:
Most agencies lack the equipment required to measure the dynamic modulus (|E*|) of asphalt mixtures, necessitating the use of predictive models. This study compared measured |E*| values for nine North Dakota asphalt mixes with predictions from the original Witczak, modified Witczak, and Hirsch models. The influence of temperature on the |E*| models was investigated, and Pavement ME simulations were conducted using measured |E*| and predictions from the most accurate |E*| model. The results revealed that the original Witczak model yielded the lowest Se/Sy and highest R² values, indicating the lowest bias and highest accuracy, while the poorest overall performance was exhibited by the Hirsch model. Using predicted |E*| as input in Pavement ME generated conservative distress predictions compared to using measured |E*|. The original Witczak model was recommended for predicting |E*| for low-reliability pavements in North Dakota.
Keywords: asphalt mixture, binder, dynamic modulus, MEPDG, pavement ME, performance, prediction
Procedia PDF Downloads 49
22024 Real Time Detection, Prediction and Reconstitution of Rain Drops
Authors: R. Burahee, B. Chassinat, T. de Laclos, A. Dépée, A. Sastim
Abstract:
The purpose of this paper is to propose a solution to detect, predict, and reconstitute rain drops in real time, during the night, using an embedded system with an infrared camera. To keep the required hardware resources low, simple models are used in a powerful image treatment algorithm, considerably reducing calculation time in the OpenCV software. Drops are matched by a process that runs through two consecutive pictures, implementing a tracking system. With this system, each drop's computed trajectory provides the information needed to predict its future location, so the treatment stage can be reduced. The hardware system, built around a Raspberry Pi, is optimized to host this code efficiently for real-time execution.
Keywords: reconstitution, prediction, detection, rain drop, real time, raspberry, infrared
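The matching-and-prediction step can be sketched with plain centroid lists. This is a hypothetical simplification of the OpenCV pipeline the abstract describes; the greedy matcher and the distance threshold are assumptions.

```python
def match_drops(prev, curr, max_dist=10.0):
    """Greedily pair drop centroids from two consecutive frames.

    Each centroid is an (x, y) tuple; a pair is accepted only if the
    drop moved less than max_dist pixels between frames.
    """
    matches, used = [], set()
    for p in prev:
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((p, curr[best]))
    return matches

def predict_next(p, c):
    """Extrapolate the next centroid assuming constant velocity."""
    return (2 * c[0] - p[0], 2 * c[1] - p[1])
```

Predicting where each matched drop will appear next is what lets the algorithm restrict processing to small regions of the following frame.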
Procedia PDF Downloads 420
22023 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach
Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam
Abstract:
Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the most precise distribution model for the wind speed data must be found. In this research, numerous goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, chi-square, root mean square error (RMSE), AIC, and BIC, were finally combined to determine the best-fitted wind speed distribution. The suggested method uses all of the criteria collectively. This method was applied to statistically fit 14 distribution models to wind speed data at four sites in Pakistan. The results show that this method provides the best basis for selecting the most suitable wind speed statistical distribution, and the graphical representation is consistent with the analytical results. This research also presents three estimation methods that can be used to calculate the parameters of the distributions used to estimate the wind. In the suggested method of moments (MOM), as in the method of linear moments (MLM) and maximum likelihood estimation (MLE), the third-order moment used in the wind energy formula is a key function because it makes an important contribution to the precise estimation of wind energy. In order to assess the performance of the suggested MOM, it was compared with well-known estimation methods, such as the method of linear moments and maximum likelihood estimation. In the comparative analysis, based on several goodness-of-fit criteria, the performance of the considered techniques is evaluated on actual wind speeds measured in different time periods. The results obtained show that MOM provides a more precise estimate than the other familiar approaches in terms of estimating wind energy based on the fourteen distributions.
Therefore, MOM can be used as a better technique for assessing wind energy.
Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment
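One of the criteria combined above, AIC, can be sketched for two candidate distributions with closed-form maximum likelihood fits. Weibull and the other wind speed distributions would follow the same pattern; the normal/exponential pair here is only an illustration.

```python
import numpy as np

def aic_normal(x):
    """AIC of a normal fit (MLE: sample mean and biased variance)."""
    x = np.asarray(x, float)
    var = x.var()
    log_lik = -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)
    return 2 * 2 - 2 * log_lik  # k = 2 fitted parameters

def aic_exponential(x):
    """AIC of an exponential fit (MLE rate = 1 / sample mean)."""
    x = np.asarray(x, float)
    rate = 1.0 / x.mean()
    log_lik = len(x) * np.log(rate) - rate * x.sum()
    return 2 * 1 - 2 * log_lik  # k = 1 fitted parameter
```

The candidate with the lowest AIC is preferred; combining AIC with the distance-based criteria (Kolmogorov-Smirnov, Anderson-Darling, chi-square, RMSE) is the core of the selection scheme the abstract describes.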
Procedia PDF Downloads 85
22022 An Experimental Study on the Effects of Aspect Ratio of a Rectangular Microchannel on the Two-Phase Frictional Pressure Drop
Authors: J. A. Louw Coetzee, Josua P. Meyer
Abstract:
The thermodynamic properties of different refrigerants, in combination with the variation in geometrical properties (hydraulic diameter, aspect ratio, and inclination angle) of a rectangular microchannel, determine the two-phase frictional pressure gradient. The effect of aspect ratio on frictional pressure drop has not been investigated sufficiently for adiabatic two-phase flow and condensation in rectangular microchannels. This experimental study measured the frictional pressure gradient in a rectangular microchannel with a hydraulic diameter of 900 μm. The aspect ratio of this microchannel was varied over a range from 0.3 to 3 in order to capture the effect of aspect ratio variation. A commonly used refrigerant, R134a, was used in tests spanning a mass flux range of 100 to 1000 kg m⁻² s⁻¹ as well as the whole vapour quality range. This study formed part of a refrigerant condensation experiment and was therefore conducted at a saturation temperature of 40 °C. The study found that the aspect ratio had little influence on the frictional pressure drop at the test conditions. The data was compared to some of the well-known micro- and macro-channel two-phase pressure drop correlations. Most of the separated flow correlations predicted the pressure drop data well at mass fluxes larger than 400 kg m⁻² s⁻¹ and vapour qualities above 0.2.
Keywords: aspect ratio, microchannel, two-phase, pressure gradient
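The geometric quantities above relate through the standard hydraulic diameter definition for a rectangular duct, D_h = 4A/P. As a quick sketch: a channel with aspect ratio 3 and the study's 900 μm hydraulic diameter would measure roughly 1800 μm by 600 μm (these side lengths are inferred from the definition, not stated in the abstract).

```python
def hydraulic_diameter(width, height):
    """D_h = 4A / P for a rectangular duct, i.e. 2wh / (w + h)."""
    return 2.0 * width * height / (width + height)

def aspect_ratio(width, height):
    return width / height
```

Holding D_h fixed while sweeping the aspect ratio, as the study does, isolates the shape effect from the size effect on the frictional pressure gradient.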
Procedia PDF Downloads 366
22021 Domain-Specific Ontology-Based Knowledge Extraction Using R-GNN and Large Language Models
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
Keywords: ontology mapping, R-GNN, knowledge extraction, large language models, NER, knowledge graph
Procedia PDF Downloads 19
22020 Explaining the Impact of Poverty Risk on Frailty Trajectories in Old Age Using Growth Curve Models
Authors: Erwin Stolz, Hannes Mayerl, Anja Waxenegger, Wolfgang Freidl
Abstract:
Research has often found poverty to be associated with adverse health outcomes, but it is unclear which mechanisms, or interplay of mechanisms, actually translate low economic resources into poor physical health. The goal of this study was to assess the role of educational, material, psychosocial, and behavioural factors in explaining the poverty-health association in old age. We analysed 28,360 observations from 11,390 community-dwelling respondents (65+) from the Survey of Health, Ageing and Retirement in Europe (SHARE, 2004-2013, 10 countries). We used multilevel growth curve models to assess the impact of combined income and asset poverty risk on frailty index levels and trajectories in old age. In total, 61.8% of the effect of poverty risk on frailty levels could be explained by direct and indirect effects, highlighting the role of material and particularly psychosocial factors, such as perceived control and social isolation. We suggest strengthening social policy and public health efforts in order to fight poverty and its deleterious effects from an early age on, and broadening the scope of interventions with regard to psychosocial factors.
Keywords: frailty, health inequality, old age, poverty
Procedia PDF Downloads 334
22019 A CD40 Variant is Associated with Systemic Bone Loss Among Patients with Rheumatoid Arthritis
Authors: Rim Sghiri, Samia Al Shouli, Hana Benhassine, Nejla Elamri, Zahid Shakoor, Foued Slama, Adel Almogren, Hala Zeglaoui, Elyes Bouajina, Ramzi Zemni
Abstract:
Objectives: Little is known about the genes predisposing to systemic bone loss (SBL) in rheumatoid arthritis (RA). Therefore, we examined the association between SBL and a variant of the CD40 gene, which is known to play a critical role in both the immune response and bone homeostasis among patients with RA. Methods: CD40 rs4810485 was genotyped in 176 adult RA patients. Bone mineral density (BMD) was measured using dual-energy X-ray absorptiometry (DXA). Results: Low BMD was observed in 116 (65.9%) patients. Among them, 60 (34.1%) had a low femoral neck (FN) Z score, 72 (40.9%) had a low total femur (TF) Z score, and 105 (59.6%) had a low lumbar spine (LS) Z score. CD40 rs4810485 was found to be associated with reduced TF Z score, with the CD40 rs4810485 T allele protecting against a reduced TF Z score (OR = 0.40, 95% CI = 0.23-0.68, p = 0.0005). This association was confirmed in the multivariate logistic regression analysis (OR = 0.31, 95% CI = 0.16-0.59, p = 3.84 x 10⁻⁴). Moreover, median FN BMD was reduced among RA patients with the CD40 rs4810485 GG genotype compared to RA patients harbouring the CD40 rs4810485 TT and GT genotypes (0.788 ± 0.136 versus 0.826 ± 0.146 g/cm², p = 0.001). Conclusion: This study, for the first time, demonstrated an association between a CD40 genetic variant and SBL among patients with RA.
Keywords: rheumatoid arthritis, CD40 gene, bone mineral density, systemic bone loss, rs4810485
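The odds ratios and confidence intervals reported above follow the standard 2x2-table computation, sketched here with hypothetical counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a, b: cases / controls carrying the allele;
    c, d: cases / controls without it. Counts here are hypothetical.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper
```

An OR below 1 with the whole CI below 1, as for the T allele above, is what indicates a protective association.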
Procedia PDF Downloads 462
22018 Multilevel Modelling of Modern Contraceptive Use in Nigeria: Analysis of the 2013 NDHS
Authors: Akiode Ayobami, Akiode Akinsewa, Odeku Mojisola, Salako Busola, Odutolu Omobola, Nuhu Khadija
Abstract:
Purpose: Evidence exists that family planning use can contribute to a reduction in infant and maternal mortality in any country. Despite these benefits, contraceptive use in Nigeria remains very low, at only 10% among married women. Understanding the factors that predict contraceptive use is very important in order to improve the situation. In this paper, we analysed data from the 2013 Nigerian Demographic and Health Survey (NDHS) to better understand the predictors of contraceptive use in Nigeria. The use of logistic regression and other traditional models in this type of situation is not appropriate, as they do not account for the social structure influence brought about by the hierarchical nature of the data on the response variable. We therefore used multilevel modelling to explore the determinants of contraceptive use, in order to account for the significant variation in modern contraceptive use by socio-demographic and other proximate variables across the different Nigerian states. Method: The data has a two-level hierarchical structure. We considered the data of 26,403 married women of reproductive age at level 1 and nested them within the 36 states and the Federal Capital Territory, Abuja, at level 2. We modelled use of modern contraceptives against demographic variables, being told about family planning at a health facility, having heard of family planning on TV, in magazines, or on the radio, and the husband's desire for more children, nested within state. Results: Our results showed that the independent variables in the model were significant predictors of modern contraceptive use. The estimated variance components for the null model, the random intercept model, and the random slope model were significant (p = 0.00), indicating that the variation in contraceptive use across the Nigerian states is significant and needs to be accounted for in order to accurately determine the predictors of contraceptive use; hence the data is best fitted by the multilevel model.
Only being told about family planning at the health facility and religion have significant random effects, implying that their predictive power for contraceptive use varies across the states. Conclusion and Recommendation: The results showed that providing family planning information at health facilities and religion need to be considered when programming to improve contraceptive use at the state level.
Keywords: multilevel modelling, family planning, predictors, Nigeria
Procedia PDF Downloads 420
22017 Findings on Modelling Carbon Dioxide Concentration Scenarios in the Nairobi Metropolitan Region before and during COVID-19
Authors: John Okanda Okwaro
Abstract:
Carbon (IV) oxide (CO₂) is emitted mainly from fossil fuel combustion and industrial production. The sources of carbon (IV) oxide of interest in the study area are mining activities, transport systems, and industrial processes. This study is aimed at building models that will help in monitoring the emissions within the study area. Three scenarios are discussed, namely: a pessimistic scenario, a business-as-usual scenario, and an optimistic scenario. The results showed a reduction in carbon dioxide concentration of approximately 50.5 ppm between March 2020 and January 2021 inclusive, mainly due to reduced human activities that led to decreased consumption of energy. The CO₂ concentration trend follows the business-as-usual (BAU) scenario path. From the models, the pessimistic, business-as-usual, and optimistic scenarios give CO₂ concentrations of about 545.9 ppm, 408.1 ppm, and 360.1 ppm, respectively, on December 31st, 2021. This research helps paint a picture for policymakers of the relationship between energy sources and CO₂ emissions. Since the reduction in CO₂ emissions was due to decreased use of fossil fuel as economic activities decreased, if Kenya relies more on green energy than fossil fuel in the post-COVID-19 period, there will be a greater reduction in CO₂ emissions. That is, the CO₂ concentration trend is likely to follow the optimistic scenario path, hence a reduction in CO₂ concentration of about 48 ppm by the end of the year 2021. This research recommends investment in solar energy by energy-intensive companies, mine machinery and equipment maintenance, investment in electric vehicles, and doubling tree planting efforts to achieve the 10% cover.
Keywords: forecasting, greenhouse gas, green energy, hierarchical data format
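The three-scenario framing can be sketched as a linear trend whose slope is scaled per scenario. The scenario factors and numbers below are illustrative assumptions, not the study's fitted model.

```python
def project_scenarios(baseline_ppm, bau_monthly_change, months,
                      pessimistic_factor=1.5, optimistic_factor=0.5):
    """Linear CO2 projections under three scenarios.

    The scenario factors scale the business-as-usual monthly trend and
    are illustrative assumptions, not the study's fitted parameters.
    """
    step = bau_monthly_change * months
    return {
        "pessimistic": baseline_ppm + pessimistic_factor * step,
        "business_as_usual": baseline_ppm + step,
        "optimistic": baseline_ppm + optimistic_factor * step,
    }
```

The study's actual projections would come from fitted models, but the ordering (optimistic below BAU below pessimistic) follows the same logic.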
Procedia PDF Downloads 168
22016 PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria
Authors: Snezhana G. Gocheva-Ilieva, Maya P. Stoimenova
Abstract:
Ambient air pollution with fine particulate matter (PM10) is a persistent, systemic problem in many countries around the world. The accumulation of a large number of measurements of both PM10 concentrations and the accompanying atmospheric factors allows for statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method to build and analyze PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria, over a period of 5 years are used. Predictors in the models are seven meteorological variables, time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of each predictor in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days beyond the last date in the modeling procedure and show very accurate results. Keywords: cross-validation, decision tree, lagged variables, short-term forecasting
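The lagged-variable construction the abstract describes (series values delayed by 1 or 2 days used as predictors for the current day) can be sketched as follows. The daily values and the `make_lagged` helper are illustrative; the actual study also lags several meteorological series.

```python
# Sketch of building lagged predictors for a daily PM10 series, as described
# in the abstract. The PM10 values are hypothetical daily averages.

def make_lagged(series, lags=(1, 2)):
    """Return predictor rows [x_{t-1}, x_{t-2}, ...] with target x_t,
    dropping the first max(lags) days that lack a complete history."""
    start = max(lags)
    X, y = [], []
    for t in range(start, len(series)):
        X.append([series[t - k] for k in lags])
        y.append(series[t])
    return X, y

pm10 = [31.0, 28.5, 40.2, 37.8, 52.1, 45.0]   # hypothetical daily averages
X, y = make_lagged(pm10, lags=(1, 2))
print(X[0], y[0])   # → [28.5, 31.0] 40.2  (day 3 predicted from days 2 and 1)
```

The rows of `X`, concatenated with the meteorological and time variables, would then feed a regression tree learner such as CART.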
Procedia PDF Downloads 196
22015 Systematic Exploration and Modulation of Nano-Bio Interactions
Authors: Bing Yan
Abstract:
Nanomaterials are widely used in various industrial sectors, biomedicine, and more than 1300 consumer products. Although there is still no standard safety regulation, their potential toxicity is a major concern worldwide. We discovered that nanoparticles target and enter human cells [1], perturb cellular signaling pathways [2], affect various cell functions [3], and cause malfunctions in animals [4,5]. Because the majority of atoms in nanoparticles are on the surface, chemical modification of their surface may change their biological properties significantly. We modified nanoparticle surfaces using a nano-combinatorial chemistry library approach [6]. Novel nanoparticles were discovered that exhibit significantly reduced toxicity [6,7], enhanced cancer-targeting ability [8], or re-programmed cellular signaling machineries [7]. Using computational chemistry, a quantitative nanostructure-activity relationship (QNAR) is established and predictive models have been built to predict biocompatible nanoparticles. Keywords: nanoparticle, nanotoxicity, nano-bio, nano-combinatorial chemistry, nanoparticle library
Procedia PDF Downloads 409
22014 JaCoText: A Pretrained Model for Java Code-Text Generation
Authors: Jessica Lopez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri
Abstract:
Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged in automatic programming-language code generation: the task of translating natural language instructions into source code. Although well-known pre-trained language generation models have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network that aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we draw on findings from the state of the art to (1) initialize our model from powerful pre-trained models, (2) explore additional pretraining on our Java dataset, (3) run experiments combining unimodal and bimodal data in training, and (4) scale the input and output length during fine-tuning of the model. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results. Keywords: java code generation, natural language processing, sequence-to-sequence models, transformer neural networks
Procedia PDF Downloads 288
22013 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were derived primarily by simplifying and solving the RANS equations for pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, each is most accurate under the specific Reynolds and Mach numbers it was developed for and less accurate under other flow conditions. Despite this, research into alternative methods for deriving such models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, together with TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through feature selection) and is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets. Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
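The feature-selection behavior the abstract attributes to stepwise regression can be sketched with a forward selection loop over ordinary least squares fits. This is a simplified illustration in Python (the study itself works in R); the synthetic data, the tolerance, and the helper names are all choices made here, not the paper's. The response depends only on predictors 0 and 2, so the procedure should pick those and skip the irrelevant predictor 1.

```python
# Minimal forward stepwise selection: greedily add the predictor that most
# reduces the residual sum of squares (RSS), stopping when improvement is
# negligible. Pure-stdlib OLS via the normal equations.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_rss(X, y, cols):
    """Fit y on the chosen columns (plus intercept); return the residual SS."""
    Z = [[1.0] + [row[c] for c in cols] for row in X]
    k = len(Z[0])
    A = [[sum(z[i] * z[j] for z in Z) for j in range(k)] for i in range(k)]
    b = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(k)]
    beta = solve(A, b)
    return sum((yi - sum(bj * zj for bj, zj in zip(beta, z))) ** 2
               for z, yi in zip(Z, y))

def forward_stepwise(X, y, tol=1e-8):
    chosen, remaining = [], list(range(len(X[0])))
    ybar = sum(y) / len(y)
    best = sum((yi - ybar) ** 2 for yi in y)   # intercept-only RSS
    while remaining:
        scores = {c: ols_rss(X, y, chosen + [c]) for c in remaining}
        c = min(scores, key=scores.get)
        if best - scores[c] < tol:             # no meaningful improvement
            break
        chosen.append(c)
        remaining.remove(c)
        best = scores[c]
    return chosen

# Synthetic data: y depends on columns 0 and 2 only; column 1 is irrelevant.
X = [[float(i), float((3 * i) % 7), float((2 * i) % 5)] for i in range(12)]
y = [3.0 * r[0] - 4.0 * r[2] + 1.0 for r in X]
print(sorted(forward_stepwise(X, y)))          # → [0, 2]
```

A production stepwise procedure (as in R's `step`) would use an information criterion such as AIC rather than a raw RSS tolerance, and would also consider dropping previously added terms.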
Procedia PDF Downloads 135
22012 Programmatic Actions of Social Welfare State in Service to Justice: Law, Society and the Third Sector
Authors: Bruno Valverde Chahaira, Matheus Jeronimo Low Lopes, Marta Beatriz Tanaka Ferdinandi
Abstract:
This paper proposes to dissect the meanings and/or directions of the State in order to present State models and elaborate a conceptual framework for its function in the legal sphere. To do so, it points out the possible contracts established between the State and Society, since the general principles immanent in them can guide the models of society in force. From this orientation arise the contracts, whose purpose is to modify the status (the being and/or the opinion) of each of the subjects in presence: State and Society. In this logic, this paper examines fiduciary contracts and veridiction contracts (Portuguese: "veredicção") from the perspective of discourse semiotics (the Greimasian school). The studies therefore focus on the language manifested in unilateral and bilateral or reciprocal relations between the State and Society. Thus, through the model of the communicative situation and discourse, the guidelines of these contractual relations will be analyzed to see whether there is a pragmatic sanction: positive when the contract between the subjects is fulfilled (reward), or negative when it is broken (punishment). In this way, a third path emerges which, in this specific case, passes through the third sector. In other words, the proposal, which is systemic in nature, is to analyze what happens when the welfare state's contract, set out in the constitutional program on fundamental rights (education, health, housing, and others), is not carried out. In the structure of the exchange demanded by society according to its contractual obligations, the third way (the Third Sector) advances into the empty space left by the State. Along this line, the paper presents the modalities of third-sector action in the social sphere.
Finally, the normative communicative organization of these three subjects, namely the State, Society, and the Third Sector, is examined within the pragmatic model of discourse, in an attempt to understand the constant dynamics in the Law and in the language of the relations established between them. Keywords: access to justice, state, social rights, third sector
Procedia PDF Downloads 145
22011 Physicochemical and Sensorial Evaluation of Astringency Reduction in Cashew Apple (Annacardium occidentale L.) Powder Processing in Cookie Elaboration
Authors: Elida Gastelum-Martinez, Neith A. Pacheco-Lopez, Juan L. Morales-Landa
Abstract:
The cashew agroindustry based on the cashew apple crop (Anacardium occidentale L.) generates large amounts of unused waste in Campeche, Mexico. Despite a high content of nutritional compounds such as ascorbic acid, carotenoids, fiber, carbohydrates, and minerals, the fruit is not consumed due to its astringent sensation. The aim of this work was to develop a processing method for cashew apple waste that yields a powder with reduced astringency, suitable as an additive in the food industry. The processing method first reduces astringency by inducing tannins from the cashew apple peel to react and form precipitating complexes with a colloid rich in proline and histidine; the cashew apples are then processed to obtain a dry powder. Astringency reduction was determined from total phenolic content and evaluated by sensorial analysis of cookies based on cashew apple powder. Total phenolic content in the processed powders was up to 72% lower than in control samples. The sensorial evaluation indicated that cookies baked using reduced-astringency cashew apple powder were preferred by 96.8% of respondents, and sensorial attributes such as texture, color, and taste were also well accepted. In conclusion, the applied astringency-reduction method is a viable tool to produce cashew apple powder with sensorial properties desirable for the development of food products. Keywords: astringency reduction, cashew apple waste, food industry, sensorial evaluation
Procedia PDF Downloads 351
22010 Human Resource Utilization Models for Graceful Ageing
Authors: Chuang-Chun Chiou
Abstract:
In this study, a systematic framework of graceful ageing is used to explore possible human resource utilization models for graceful ageing. The framework is based on Chinese culture and is called the 'Nine-old' target: ageing gracefully with feeding, accomplishment, usefulness, learning, entertainment, care, protection, dignity, and termination. This study focuses on two areas: accomplishment and usefulness. We examine current initiatives and laws promoting labor participation, focusing on how to increase the labor force participation rate of the middle-aged as well as the elderly and how to help the elderly achieve graceful ageing. We then present possible models that support graceful ageing. Keywords: human resource utilization model, labor participation, graceful ageing, employment
Procedia PDF Downloads 390
22009 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum
Authors: Abdulrahman Sumayli, Saad M. AlShahrani
Abstract:
For the hydrogenation process, knowing the solubility of hydrogen (H₂) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H₂ solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were the inputs to the models, while hydrogen solubility was the sole response. Specifically, we employed three different models: support vector regression (SVR), Gaussian process regression (GPR), and Bayesian ridge regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the whale optimization algorithm (WOA). We evaluated the models on a dataset of solubility measurements in the various feedstocks and compared their performance using several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively. Keywords: temperature, pressure variations, machine learning, oil treatment
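The whale optimization algorithm used above to tune the hyper-parameters can be sketched in simplified form. This is an illustrative sketch, not the authors' implementation: it minimizes a toy 2-D sphere function instead of an SVR cross-validation loss, and the population size, iteration count, and bounds are arbitrary choices made here.

```python
import math
import random

# Simplified whale optimization algorithm (WOA): a population of candidate
# solutions either shrinks toward the best solution found so far (encircling),
# moves relative to a random peer (exploration), or spirals around the best
# solution (bubble-net behavior).

def sphere(x):
    """Toy objective standing in for a hyper-parameter tuning loss."""
    return sum(v * v for v in x)

def woa(objective, dim=2, n_whales=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    whales = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(whales, key=objective)[:]
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters              # linearly decreases from 2 to 0
        for w in whales:
            A = 2.0 * a * rng.random() - a     # encircling coefficient
            if rng.random() < 0.5:
                if abs(A) < 1:                 # exploit: shrink toward the best
                    for d in range(dim):
                        w[d] = best[d] - A * abs(2 * rng.random() * best[d] - w[d])
                else:                          # explore: move toward a random whale
                    other = rng.choice(whales)
                    for d in range(dim):
                        w[d] = other[d] - A * abs(2 * rng.random() * other[d] - w[d])
            else:                              # spiral (bubble-net) update
                l = rng.uniform(-1, 1)
                for d in range(dim):
                    w[d] = (abs(best[d] - w[d]) * math.exp(l)
                            * math.cos(2 * math.pi * l) + best[d])
            for d in range(dim):               # keep candidates within bounds
                w[d] = max(lo, min(hi, w[d]))
        cand = min(whales, key=objective)
        if objective(cand) < objective(best):
            best = cand[:]
    return best

best = woa(sphere)
print(round(sphere(best), 6))   # typically a value very close to zero
```

In the paper's setting, each "whale" would encode an SVR hyper-parameter vector (e.g., C, epsilon, kernel width) and the objective would be a validation error.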
Procedia PDF Downloads 69
22008 Environmental Modeling of Storm Water Channels
Authors: L. Grinis
Abstract:
Turbulent flow in complex geometries receives considerable attention due to its importance in many engineering applications and has been the subject of interest for many researchers. One such application is the design of storm water channels, which requires testing through physical models. The main practical limitation of physical models is the so-called "scale effect": in many cases only primary physical mechanisms can be correctly represented, while secondary mechanisms are often distorted. These observations form the basis of our study, which centered on problems associated with the design of storm water channels near the Dead Sea, in Israel. To help reach a final design decision, we used different physical models. Our results showed good agreement with laboratory tests and theoretical calculations, and allowed us to study different effects of fluid flow in an open channel. We determined that problems of this nature cannot be solved by theoretical calculation and computer simulation alone. This study demonstrates the use of physical models to help resolve very complicated problems of fluid flow through baffles and similar structures. The study applies these models and observations to different constructions and multiphase water flows, among them flows carrying sand and stone particles, in a significant attempt to bring laboratory testing into closer association with reality. Keywords: open channel, physical modeling, baffles, turbulent flow
Procedia PDF Downloads 285
22007 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models
Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo
Abstract:
There are many situations in which human activities have significant effects on the environment; damage to the ozone layer is one of them. The objective of this work is to use the least squares method, considering linear, exponential, logarithmic, power, and second-degree polynomial models, to determine through the coefficient of determination (R²) which model best fits the behavior of chlorodifluoromethane (HCFC-142b) concentrations, in parts per trillion, between 1992 and 2018, and to estimate future concentrations 5 and 10 periods ahead, i.e., in 2023 and 2028, for each of the adjustments. A total of 809 observations of the HCFC-142b concentration, from one of the stations monitoring ozone-depleting precursor gases during the period studied, were selected, and scatter plots for each adjustment model were produced from these data using the statistical features of Excel. The logarithmic fit was the model that best fit the data set: besides having a significant R², its adjusted curve was compatible with the natural trend curve of the phenomenon. Keywords: chlorodifluoromethane (HCFC-142b), ozone, least squares method, regression models
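The logarithmic adjustment singled out above reduces to linear least squares after transforming the time axis. The sketch below illustrates this, with synthetic stand-in concentrations (the real series has 809 observations); the `fit_log` and `r_squared` helpers are names chosen here.

```python
import math

# Fit y = a + b*ln(t) by ordinary least squares on the transformed axis,
# then score the fit with the coefficient of determination R².

def fit_log(ts, ys):
    """Least-squares fit of y = a + b*ln(t)."""
    xs = [math.log(t) for t in ts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def r_squared(ys, preds):
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

periods = list(range(1, 28))                        # periods 1..27 (1992..2018)
conc = [10.0 + 4.0 * math.log(t) for t in periods]  # synthetic log-shaped trend

a, b = fit_log(periods, conc)
preds = [a + b * math.log(t) for t in periods]
print(round(r_squared(conc, preds), 3))             # → 1.0 (exact on synthetic data)
```

Extrapolating 5 or 10 periods ahead is then just evaluating `a + b*ln(t)` at the future period index, as the abstract does for 2023 and 2028.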
Procedia PDF Downloads 125
22006 Production Optimization under Geological Uncertainty Using Distance-Based Clustering
Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe
Abstract:
It is important to characterize reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channelized reservoirs. One solution is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data; we calculate the Hausdorff distance to classify the models by similarity, since it is well suited to shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to project the models onto a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, the one whose production rates are most similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas preferentially flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models group according to their channel patterns. These channel distributions affect the operating controls of each production well, so the model selection scheme improves the management optimization process. For production optimization we use particle swarm optimization (PSO), a useful global search algorithm. PSO is good at finding the global optimum of an objective function, but it takes much time because it uses many particles and iterations.
In addition, using multiple reservoir models makes the simulation time for PSO soar. With the proposed method, we can select good, reliable models that already match production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel way to select good cases among the various equi-probable realizations, and the model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulations. Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization
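The Hausdorff distance at the heart of the clustering step can be sketched directly. In this illustration each "model" is reduced to a hypothetical set of 2-D channel-cell coordinates; the real workflow compares full reservoir realizations.

```python
import math

# Symmetric Hausdorff distance between two finite point sets: the largest
# distance from a point in one set to its nearest neighbor in the other.

def hausdorff(A, B):
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

model_a = [(0, 0), (1, 0), (2, 0)]       # a straight channel
model_b = [(0, 0), (1, 1), (2, 0)]       # a slightly kinked channel
model_c = [(0, 3), (1, 3), (2, 3)]       # a channel far away

print(hausdorff(model_a, model_b))       # → 1.0
print(hausdorff(model_a, model_c))       # → 3.0
```

A pairwise matrix of such distances over all realizations is what MDS embeds in two dimensions before K-means picks the cluster representatives.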
Procedia PDF Downloads 144