Search results for: energy demand model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24559

16009 Neural Network Based Fluctuation Frequency Control in PV-Diesel Hybrid Power System

Authors: Heri Suryoatmojo, Adi Kurniawan, Feby A. Pamuji, Nursalim, Syaffaruddin, Herbert Innah

Abstract:

Photovoltaic (PV) systems hybridized with diesel generators are widely used for electrification in remote areas. PV output power fluctuates with the uncertain temperature and solar irradiance conditions, and when PV penetration is large, the reliability of the power utility is disturbed and the system frequency becomes unstable. Designing a robust frequency controller for a PV-diesel hybrid power system is therefore very important. This paper proposes a new frequency control method for hybrid PV-diesel systems based on an artificial neural network (ANN). The method minimizes frequency deviation without smoothing the PV output power, which is regulated by a maximum power point tracking (MPPT) method. The neural network controller takes as inputs the average irradiance, the change in irradiance, and the frequency deviation. To show the effectiveness of the proposed algorithm, the addition of a battery energy storage system is also presented. To validate the proposed method, its results are compared with those of a similar system using MPPT only. The simulation results show that the proposed method suppresses frequency deviation more effectively than the MPPT-only system.
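As a rough structural illustration of such a controller (the network size, weights, and input/output scaling below are illustrative assumptions, not values from the paper), a minimal feed-forward sketch in Python:

```python
import numpy as np

def ann_frequency_controller(avg_irradiance, d_irradiance, freq_dev,
                             W1, b1, W2, b2):
    """Map the three controller inputs (average irradiance, change in
    irradiance, frequency deviation) to a corrective control signal
    through a single hidden layer; in practice the weights would be
    trained against simulated frequency responses."""
    x = np.array([avg_irradiance, d_irradiance, freq_dev])
    h = np.tanh(W1 @ x + b1)      # hidden layer
    return float(W2 @ h + b2)     # e.g. diesel/battery power offset

# Untrained toy weights, shown only for the shapes involved
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0
print(ann_frequency_controller(700.0, -50.0, 0.2, W1, b1, W2, b2))
```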

Keywords: energy storage system, frequency deviation, hybrid power generation, neural network algorithm

Procedia PDF Downloads 485
16008 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits

Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.

Abstract:

With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up, yet even the most powerful classical supercomputers find these tasks difficult. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as present work on Quantum Machine Learning (QML) promises lower memory consumption and fewer model parameters. But it is strenuous to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work explores Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a representation of VQC. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and the target network.
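A minimal state-vector sketch of a VQC of the kind described, with two qubits, angle encoding of a two-dimensional RL state, one entangling gate, and one trainable rotation layer (the layout and sizes are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def vqc_q_values(state_features, params):
    """Angle-encode the state, entangle, apply a trainable RY layer,
    and read out <Z> on each qubit as the two action Q-values."""
    psi = np.zeros(4); psi[0] = 1.0                      # start in |00>
    psi = np.kron(ry(state_features[0]), ry(state_features[1])) @ psi
    psi = CNOT @ psi                                     # entangling gate
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi    # trainable layer
    probs = np.abs(psi) ** 2
    z0 = probs @ np.array([1, 1, -1, -1])                # <Z> on qubit 0
    z1 = probs @ np.array([1, -1, 1, -1])                # <Z> on qubit 1
    return np.array([z0, z1])

print(vqc_q_values([0.3, 1.1], [0.5, -0.2]))
```

In a full agent, the params vector would be trained against a target network exactly as in classical deep Q-learning.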

Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme

Procedia PDF Downloads 106
16007 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers

Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha

Abstract:

Elastomers are polymeric materials with backbone architectures ranging from linear to dendrimeric structures and a wide variety of monomeric repeat units. Uncrosslinked, these elastomers are strongly viscous and only weakly elastic; once crosslinked, depending on the extent of crosslinking, their properties can range from highly flexible to highly stiff. Lightly crosslinked systems are well studied and reported. Understanding the nature of highly crosslinked rubber in terms of chemical structure and architecture is critical for a variety of applications. One of the critical parameters is crosslink density. In the current work, we have studied the highly crosslinked state of linear, lightly branched, and star-shaped branched elastomers and determined the crosslink density using different models. Change in hardness, shift in Tg, change in modulus, and swelling behavior were measured experimentally as a function of the extent of curing, and these properties were analyzed using various models to determine crosslink density. We used hardness measurements to examine cure time, and the relationship between hardness and the extent of curing was determined. It is well known that micromechanical transitions like Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different crosslinked states was determined by DMA, and based on the plateau modulus the crosslink density was estimated using Nielsen's model. Usually, for lightly crosslinked systems, the crosslink density is estimated from the equilibrium swelling ratio in solvent using the Flory–Rehner model. For highly crosslinked systems, the Flory–Rehner model is not valid because of the smaller chain length, so models that treat the polymer as a non-Gaussian chain, such as 1) the Helmis–Heinrich–Straube (HHS) model, 2) the Gloria M. Gusler and Yoram Cohen model, and 3) the Barbara D. Barr-Howell and Nikolaos A. Peppas model, are used for estimating crosslink density. In this work, correction factors to the existing models are determined, and based upon them the structure-property relationship of highly crosslinked elastomers is studied.
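For the lightly crosslinked regime mentioned above, the Flory–Rehner estimate of crosslink density from equilibrium swelling can be written down directly; a minimal sketch (the interaction parameter and solvent molar volume below are placeholder values, not the paper's measurements):

```python
import numpy as np

def flory_rehner_crosslink_density(v2, chi, V1):
    """Crosslink density nu_e (mol/cm^3) from the Flory-Rehner equation,
    valid for lightly crosslinked (Gaussian-chain) networks.

    v2  : polymer volume fraction at equilibrium swelling
    chi : polymer-solvent interaction parameter
    V1  : solvent molar volume (cm^3/mol)
    """
    numerator = -(np.log(1.0 - v2) + v2 + chi * v2 ** 2)
    denominator = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator

# Placeholder inputs: toluene-like solvent, moderate swelling
print(flory_rehner_crosslink_density(v2=0.25, chi=0.40, V1=106.3))
```

The paper's point is that for highly crosslinked networks this Gaussian-chain assumption breaks down, which is why the non-Gaussian models and correction factors are introduced.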

Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer

Procedia PDF Downloads 152
16006 A Comprehensive Review of Artificial Intelligence Applications in Sustainable Building

Authors: Yazan Al-Kofahi, Jamal Alqawasmi

Abstract:

In this study, a systematic literature review (SLR) was conducted with the main goal of assessing the existing literature on how artificial intelligence (AI), machine learning (ML), and deep learning (DL) models are used in sustainable architecture applications and issues including thermal comfort satisfaction, energy efficiency, cost prediction, and many others. The search strategy used several databases, including Scopus, Springer, and Google Scholar. The inclusion criteria were based on two search strings related to DL, ML, and sustainable architecture. The timeframe for inclusion was open, although most of the included papers were published in the previous four years. As a filtering strategy, conferences and books were excluded from the database search results. Using these inclusion and exclusion criteria, the search was conducted and a sample of 59 papers was selected for the final analysis. In the data extraction phase, the needed data were extracted from these papers, then analyzed and correlated. The results of this SLR showed that there are many applications of ML and DL in sustainable buildings and that the topic is currently trendy. Most of the papers focused on addressing environmental sustainability issues and factors using machine-learning predictive models, with a particular emphasis on decision tree algorithms. Moreover, the Random Forest regressor was found to demonstrate strong performance across all feature-selection groups for building cost prediction as a machine-learning predictive model.
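As an illustration of the kind of cost-prediction model the review highlights, a short scikit-learn sketch of a Random Forest regressor (the building features and cost data below are synthetic placeholders, not drawn from any reviewed study):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder features: floor area (m^2), glazing ratio, insulation R-value
X = rng.uniform([50, 0.1, 1.0], [500, 0.6, 6.0], size=(200, 3))
y = 800 * X[:, 0] - 1e4 * X[:, 2] + rng.normal(0, 5e3, 200)  # synthetic cost

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```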

Keywords: machine learning, deep learning, artificial intelligence, sustainable building

Procedia PDF Downloads 46
16005 Biaxial Fatigue Specimen Design and Testing Rig Development

Authors: Ahmed H. Elkholy

Abstract:

An elastic analysis is developed to obtain the distribution of stresses, strains, bending moments, and deformations for a thin, hollow, variable-thickness cylindrical specimen subjected to different biaxial loadings. The specimen was subjected to a combination of internal pressure, axial tensile loading, and external pressure. Several axial-to-circumferential stress ratios were investigated in detail. The analytical model was then validated against experimental results obtained from a test rig under several biaxial loadings. Based on the preliminary results, the specimen was modified geometrically to ensure uniform strain distribution through its wall thickness and along its gauge length. The new specimen design has a higher buckling strength and a maximum value of equivalent stress according to the maximum distortion energy theory. A cyclic function generator of the standard servo-controlled, electro-hydraulic testing machine is used to generate a specific signal shape (sine, square,…) at a certain frequency. The two independent controllers of the electronic circuit cause an independent movement of each servo-valve piston, and the movement of each piston pressurizes the upper and lower sides of the actuators alternately. The specimen is thus subjected to axial and diametral loads independently of each other. The hydraulic system has two different pressures: one pressure is responsible for the axial stress produced in the specimen and the other for the tangential stress. Changing the ratio of the two pressures changes the stress ratio accordingly. The only restrictions on the maximum stress obtained are the capacity of the testing system and specimen instability due to buckling.
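For orientation, the thin-wall approximations that relate these applied loads to the biaxial stress state can be sketched as follows (assuming r/t >> 1 and uniform thickness; the numbers are placeholders, not the specimen's actual geometry):

```python
import math

def thin_wall_biaxial_stresses(p_i, p_o, f_axial, r, t):
    """Approximate hoop and axial stresses (MPa) in a thin-walled cylinder
    under internal pressure p_i, external pressure p_o (both MPa) and
    axial force f_axial (N), with mean radius r and thickness t (mm)."""
    dp = p_i - p_o
    sigma_hoop = dp * r / t
    sigma_axial = dp * r / (2 * t) + f_axial / (2 * math.pi * r * t)
    return sigma_hoop, sigma_axial

hoop, axial = thin_wall_biaxial_stresses(p_i=10.0, p_o=2.0,
                                         f_axial=5.0e4, r=30.0, t=2.0)
print(f"hoop = {hoop:.1f} MPa, axial = {axial:.1f} MPa, "
      f"ratio = {axial / hoop:.2f}")
```

Varying p_i, p_o, and f_axial independently, as the rig described here does, sweeps the axial-to-circumferential stress ratio.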

Keywords: biaxial, fatigue, stress, testing

Procedia PDF Downloads 113
16004 Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4

Authors: Farmesk Abubaker, Francesco Tortorici, Marco Capogni, Concetta Sutera, Vincenzo Bellini

Abstract:

This project concerns the detection efficiency of the portable triple-to-double coincidence ratio (TDCR) system at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure beta-emitting or pure electron-capture radionuclides. The dependence of the simulated TDCR detection efficiency, computed with the Geant4 Monte Carlo code, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitting radionuclides such as 3H and 14C, for which this dependence is relevant. The results of this analysis can be used to select the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with available values measured by the ENEA portable TDCR detector for some pure beta-emitting radionuclides. This analysis improved knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits for many applications in nuclear medicine and the nuclear energy industry.
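The Birks factor enters through Birks' law, which quenches the scintillation light yield at high stopping power; a small numerical sketch of the integration (the stopping-power profile below is a made-up placeholder, not 3H or 14C data):

```python
import numpy as np

def birks_light_yield(dE_dx, dx, S=1.0, kB=0.0095):
    """Integrate Birks' law, dL = S * dE / (1 + kB * dE/dx), over a
    particle track discretized into segments.

    dE_dx : stopping power per segment (MeV/cm)
    dx    : segment lengths (cm)
    kB    : Birks constant (cm/MeV)
    """
    dE = dE_dx * dx
    return float(np.sum(S * dE / (1.0 + kB * dE_dx)))

# Placeholder track with stopping power rising toward the track end
dE_dx = np.linspace(5.0, 80.0, 100)
dx = np.full(100, 1e-4)
print("quenched light yield:", birks_light_yield(dE_dx, dx))
```

Because low-energy beta emitters deposit energy at high dE/dx, the computed light yield, and hence the simulated TDCR efficiency, is sensitive to the chosen kB, which is the dependence the study examines.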

Keywords: Birks constant, defocusing parameter, GEANT4 code, TDCR parameter

Procedia PDF Downloads 134
16003 An Information Matrix Goodness-of-Fit Test of the Conditional Logistic Model for Matched Case-Control Studies

Authors: Li-Ching Chen

Abstract:

The case-control design has been widely applied in clinical and epidemiological studies to investigate the association between risk factors and a given disease. The retrospective design is easy to implement and more economical than prospective studies. To adjust for confounding factors, methods such as stratification at the design stage may be adopted. When some major confounding factors are difficult to quantify, a matching design gives researchers an opportunity to control the confounding effects. The matching effects can be parameterized by the intercepts of logistic models, and conditional logistic regression analysis is then adopted. This study demonstrates an information-matrix-based goodness-of-fit statistic for testing the validity of the logistic regression model for matched case-control data. The asymptotic null distribution of the proposed test statistic is derived; the test requires neither simulation to evaluate its critical values nor partitioning of the covariate space. The asymptotic power of the test statistic is also derived. The performance of the proposed method is assessed through simulation studies, and a real data example illustrates its implementation.
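For intuition about the model under test: with 1:1 matching, the conditional likelihood reduces to a logistic model on within-pair covariate differences, with the matching intercepts conditioned away. A small sketch (the data are synthetic):

```python
import numpy as np

def conditional_logit_loglik(beta, x_case, x_control):
    """Conditional log-likelihood for 1:1 matched case-control pairs:
    P(case | pair) = 1 / (1 + exp(-(x_case - x_control) @ beta)),
    i.e. logistic regression on within-pair differences, no intercept."""
    d = (x_case - x_control) @ beta
    return float(np.sum(d - np.log1p(np.exp(d))))

rng = np.random.default_rng(1)
x_case = rng.normal(0.5, 1.0, size=(100, 2))     # synthetic cases
x_control = rng.normal(0.0, 1.0, size=(100, 2))  # matched controls
print(conditional_logit_loglik(np.array([0.4, 0.1]), x_case, x_control))
```

An information-matrix test of this kind typically compares the Hessian and outer-product-of-scores estimates of the information matrix under such a likelihood; the paper develops a statistic of that type for the matched design.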

Keywords: conditional logistic model, goodness-of-fit, information matrix, matched case-control studies

Procedia PDF Downloads 277
16002 Numerical Study on Response of Polymer Electrolyte Fuel Cell (PEFCs) with Defects under Different Load Conditions

Authors: Muhammad Faizan Chinannai, Jaeseung Lee, Mohamed Hassan Gundu, Hyunchul Ju

Abstract:

The fuel cell is known as an effective renewable energy technology that is currently being commercialized. It is important to understand how performance changes even when the system has defects. This study analyzes the performance of polymer electrolyte fuel cells (PEFCs) under different operating conditions, such as current density, relative humidity, and Pt loading, considering defects together with load changes. The purpose is to analyze the response of the fuel cell system to defects in the balance of plant (BOP) and to catalyst layer (CL) degradation while the coolant flow rate is maintained so as to keep the cell temperature at the required level. A multi-scale simulation of a 3D two-phase PEFC model with coolant was carried out under different load conditions. For detailed analysis and performance comparison, extensive contours of temperature, current density, water content, and relative humidity are provided, and the simulation results of the different cases are compared with reference data. The response of the fuel cell stack to BOP defects and CL degradation can thus be analyzed through the temperature difference between the coolant outlet and the membrane electrode assembly. The results showed that failure of the humidifier increases the high-frequency resistance (HFR), while air flow defects and CL degradation result in non-uniform current density distribution and high cathode activation overpotential, respectively.

Keywords: PEM fuel cell, fuel cell modeling, performance analysis, BOP components, current density distribution, degradation

Procedia PDF Downloads 200
16001 Multi Agent System Architecture Oriented Prometheus Methodology Design for Reverse Logistics

Authors: F. Lhafiane, A. Elbyed, M. Bouchoum

Abstract:

The design of reverse logistics networks has attracted growing attention under stringent pressure from both environmental awareness and business sustainability. Reverse logistics activities, which include the return, remanufacturing, disassembly, and disposal of products, can be quite complex to manage. In addition, demand can be difficult to predict, and decision-making is one of the most challenging tasks. This complexity has amplified the need to develop an integrated architecture for product returns as an enterprise system. The main purpose of this paper is to design a multi-agent system (MAS) architecture using the Prometheus methodology to efficiently manage reverse logistics processes. The proposed MAS architecture includes five types of agents: a Gatekeeping Agent, a Collection Agent, a Sorting Agent, a Processing Agent, and a Disposal Agent, which act respectively during the five steps of the reverse logistics network.
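A minimal structural sketch of that five-agent pipeline (the agent names follow the paper; the simple message passing and the condition categories are assumed simplifications):

```python
from dataclasses import dataclass

@dataclass
class ReturnedProduct:
    product_id: str
    condition: str  # assumed categories: "reusable", "repairable", "scrap"

class GatekeepingAgent:
    def accept(self, p):
        return p.condition in {"reusable", "repairable", "scrap"}

class CollectionAgent:
    def collect(self, products):
        return list(products)                   # gather accepted returns

class SortingAgent:
    def sort(self, products):
        return {c: [p for p in products if p.condition == c]
                for c in ("reusable", "repairable", "scrap")}

class ProcessingAgent:
    def process(self, groups):
        return groups["repairable"]             # route to remanufacture/disassembly

class DisposalAgent:
    def dispose(self, groups):
        return groups["scrap"]                  # route to safe disposal

# The agents act in sequence over the reverse-logistics steps
items = [ReturnedProduct("A1", "repairable"), ReturnedProduct("B2", "scrap")]
accepted = [p for p in items if GatekeepingAgent().accept(p)]
groups = SortingAgent().sort(CollectionAgent().collect(accepted))
print(ProcessingAgent().process(groups), DisposalAgent().dispose(groups))
```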

Keywords: reverse logistics, multi-agent system, Prometheus methodology

Procedia PDF Downloads 451
16000 Grammar as a Logic of Labeling: A Computer Model

Authors: Jacques Lamarche, Juhani Dickinson

Abstract:

This paper introduces a computational model of Grammar as a Logic of Labeling (GLL), where the lexical primitives of morphosyntax are phonological matrixes, the forms of words, understood as labels that apply to realities (or targets) assumed to be outside of grammar altogether. The hypothesis is that even though a lexical label relates to its target arbitrarily, this label within a complex (constituent) label is part of a labeling pattern which, depending on its value (i.e., N, V, Adj, etc.), imposes language-specific restrictions on what it targets outside of grammar (in the world/semantics or in cognitive knowledge). Lexical forms categorized as nouns, verbs, adjectives, etc., are effectively targets of labeling patterns in use. The paper illustrates GLL through a computer model of basic patterns in English NPs. A constituent label is a binary object that encodes: i) alignment of input forms, so that labels occurring at different points in time are understood as applying at once; ii) endocentric structuring: every grammatical constituent has a head label that determines the target of the constituent and a limiter label (the non-head) that restricts this target. The N and A values are restricted to the limiter label, the two differing in their alignment with a head. Consider the head-initial DP ‘the dog’: the label ‘dog’ gets an N value because it is a limiter evenly aligned with the head ‘the’, restricting the application of the DP. Adapting a traditional analysis of ‘the’ to GLL (apply the label to something familiar), the DP targets and identifies one reality familiar to participants by applying to it the label ‘dog’ (singular). Consider next the DP ‘the large dog’: ‘large dog’ is nominal by even alignment with ‘the’, as before, and since ‘dog’ is the head of the (head-final) ‘large dog’, it is also nominal. The label ‘large’, however, is adjectival by narrow alignment with the head ‘dog’: it does not target the head but a property of what ‘dog’ applies to (a property or attribute value). In other words, the internal composition of constituents determines whether a form targets a property or a reality: ‘large’ and ‘dog’ happen to be valid targets to realize this constituent. In the presentation, the computer model of the analysis derives the 8 possible sequences of grammatical values with three labels after the determiner (the x y z): 1) D [ N [ N N ]]; 2) D [ A [ N N ]]; 3) D [ N [ A N ]]; 4) D [ A [ A N ]]; 5) D [[ N N ] N ]; 6) D [[ A N ] N ]; 7) D [[ N A ] N ]; 8) D [[ Adv A ] N ]. This approach suggests that a computer model of these grammatical patterns could be used to construct ontologies/knowledge from speakers’ judgments about the validity of lexical meanings in grammatical patterns.

Keywords: syntactic theory, computational linguistics, logic and grammar, semantics, knowledge and grammar

Procedia PDF Downloads 15
15999 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method

Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson

Abstract:

Today, many applications use computer vision models, such as face recognition, image classification, and object detection, and the accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack convolutional neural networks (CNNs) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find a perturbation that can fool the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks the Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN and resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to each attacked region to obtain a new region image that looks similar to the original to human eyes. Finally, we place the regions back into the original image and test the R-CNN on the attacked images. Our model could drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
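The FGSM step at the core of this pipeline is x' = x + ε·sign(∇ₓJ(x, y)); a minimal PyTorch sketch (the ε value and cross-entropy loss are common defaults, not the authors' settings, and in the paper's pipeline this would be applied to each resized region proposal before pasting it back):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.03):
    """Fast Gradient Sign Method: perturb a batched input image in the
    direction of the sign of the loss gradient,
    x_adv = x + eps * sign(grad_x J)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()   # keep pixels in a valid range
```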

Keywords: adversarial examples, attack, computer vision, image processing

Procedia PDF Downloads 178
15998 Proposal of a Model Supporting Decision-Making on Information Security Risk Treatment

Authors: Ritsuko Kawasaki, Takeshi Hiromatsu

Abstract:

Management is required to understand all information security risks within an organization and to decide which risks should be treated, at what level, and at what cost. However, such decision-making is not usually easy, because various risk-treatment measures must be selected at suitable application levels, and some measures may have objectives that conflict with each other, which makes the selection more difficult. Therefore, this paper provides a model that supports the selection of measures by applying multi-objective analysis to find an optimal solution. Additionally, a list of measures is provided to make the selection easier and more effective without overlooking any measures.

Keywords: information security risk treatment, selection of risk measures, risk acceptance, multi-objective optimization

Procedia PDF Downloads 364
15997 Municipal Action Against Urbanisation-Induced Warming: Case Studies from Jordan, Zambia, and Germany

Authors: Muna Shalan

Abstract:

Climate change is a systemic challenge for cities: its impacts do not happen in isolation but are intertwined, increasing hazards and the vulnerability of the exposed population. The increase in the frequency and intensity of heat waves, for example, has multiple repercussions for the quality of life of city inhabitants, including health discomfort, a rise in mortality and morbidity, increasing energy demand for cooling, and the shrinking of green areas due to drought. To address the multi-faceted impact of urbanisation-induced warming, municipalities and local governments must devise strategies and implement effective response measures. Municipalities are recognising the importance of guiding urban concepts to drive climate action in the urban environment. An example is climate proofing, which refers to mainstreaming climate change into development strategies and programs, i.e., viewing urban planning through a climate change lens. A multitude of interconnected aspects are critical to paving the path toward climate proofing of urban areas and avoiding poor planning of layouts and spatial arrangements. Navigating these aspects through an analysis of the overarching practices governing municipal planning processes, which is the focus of this research, highlights entry points for improving procedures, methods, and data availability to optimise planning processes and municipal action. Employing a case study approach, the research investigates how municipalities in different contexts, namely the city of Sahab in Jordan, Chililabombwe in Zambia, and the city of Dortmund in Germany, are integrating guiding urban concepts to shrink the adaptation and mitigation deficit and achieve climate-proofing goals in their respective local contexts. The analysis revealed municipal strategies and measures undertaken to optimise existing building and urban design regulations by introducing key performance indicators and improving in-house capacity. It further revealed that establishing or optimising interdepartmental communication frameworks or platforms is key to strengthening the steering structures governing local climate action. The most common challenge faced by municipalities relates to their dual role as regulator and implementer, particularly in budget analysis and in instruments for cost recovery of climate action measures. By leading organisational changes that improve procedures and methods, municipalities can mitigate the various challenges that emanate from uncoordinated planning and thus promote action against urbanisation-induced warming.

Keywords: urbanisation-induced warming, response measures, municipal planning processes, key performance indicators, interdepartmental communication frameworks, cost recovery

Procedia PDF Downloads 57
15996 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and investigates the impact of inclusive finance on inclusive growth in an emerging economy. In the light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while co-integration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant determinant of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by government and the banking sector to promote financial inclusion in developing countries.
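A minimal sketch of the co-integration and error-correction setup in its Engle-Granger two-step form (the series here are synthetic placeholders, not the study's data, and the study's own estimation details may differ):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 120
fi = np.cumsum(rng.normal(size=T))                  # financial inclusion index, I(1)
gdp = 0.8 * fi + np.cumsum(rng.normal(0, 0.5, T))   # synthetic growth series

# Step 1: long-run (co-integrating) regression; residuals = error-correction term
long_run = sm.OLS(gdp, sm.add_constant(fi)).fit()
ect = long_run.resid

# Step 2: short-run dynamics; the lagged ECT coefficient is the speed of adjustment
d_gdp, d_fi = np.diff(gdp), np.diff(fi)
X = sm.add_constant(np.column_stack([d_fi, ect[:-1]]))
ecm = sm.OLS(d_gdp, X).fit()
print("speed of adjustment:", ecm.params[2])        # expected negative and significant
```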

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 636
15995 Intelligent Diagnostic System of the Onboard Measuring Devices

Authors: Kyaw Zin Htut

Abstract:

This article describes the synthesis of an efficient intelligent diagnostic system for onboard aircraft measuring devices. The development of the diagnostic system is based on error models of the gyro instruments used to measure the aircraft's parameters. The synthesis of the intelligent diagnostic system is illustrated by the problem of assessing and forecasting the errors of the gyroscope devices onboard the aircraft. The system detects faults in the aircraft measuring devices and analyzes the measuring equipment in order to improve the efficiency of its operation.

Keywords: diagnostic, dynamic system, errors of gyro instruments, model errors, assessment, prognosis

Procedia PDF Downloads 385
15994 Damage-Based Seismic Design and Evaluation of Reinforced Concrete Bridges

Authors: Ping-Hsiung Wang, Kuo-Chun Chang

Abstract:

There has been a common trend worldwide in the seismic design and evaluation of bridges towards performance-based methods, in which the lateral displacement or the displacement ductility of the bridge column is regarded as an important indicator for performance assessment. However, the seismic response of a bridge to an earthquake is a combined result of cyclic displacements and accumulated energy dissipation, both of which damage the bridge, and hence the lateral displacement (ductility) alone is insufficient to describe its actual seismic performance. This study proposes a damage-based seismic design and evaluation method for reinforced concrete bridges on the basis of newly developed capacity-based inelastic displacement spectra. The capacity-based inelastic displacement spectra, comprising an inelastic displacement ratio spectrum and a corresponding damage state spectrum, were constructed using a series of nonlinear time history analyses and a versatile, smooth hysteresis model. The smooth model takes into account the effects of various design parameters of RC bridge columns and correlates the column's strength deterioration with the Park and Ang damage index. It was shown that the damage index not only accurately predicts the onset of strength deterioration but also serves as a good indicator of the actual visible damage condition of the column regardless of its loading history (i.e., similar damage indexes correspond to similar actual damage conditions for identically designed columns subjected to very different cyclic loading protocols as well as earthquake loading), providing better insight into the seismic performance of bridges. Besides, the computed spectra show that the inelastic displacement ratio for far-field ground motions approximately conforms to the equal-displacement rule when the structural period is larger than around 0.8 s, but that for near-fault ground motions it departs from the rule over the whole considered spectral region. Furthermore, near-fault ground motions lead to significantly greater inelastic displacement ratios and damage indexes than far-field ground motions, and most practical design scenarios cannot survive the considered near-fault ground motion when the strength reduction factor of the bridge is not less than 5.0. Finally, a spectrum formula is presented as a function of structural period, strength reduction factor, and various column design parameters for far-field and near-fault ground motions by means of regression analysis of the computed spectra. Based on the developed spectrum formula, a design example of a bridge is presented to illustrate the proposed damage-based seismic design and evaluation method, in which the damage state of the bridge is used as the performance objective.
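For reference, the Park and Ang index used above combines peak displacement demand with hysteretic energy; a small sketch (the column capacities and β value below are placeholder numbers, not from the study):

```python
def park_ang_damage_index(delta_max, energy_hyst, delta_u, f_y, beta=0.1):
    """Park-Ang damage index:
        D = delta_max / delta_u + beta * E_h / (F_y * delta_u)
    with D near 0 for no damage and D >= 1.0 conventionally read as collapse."""
    return delta_max / delta_u + beta * energy_hyst / (f_y * delta_u)

# Placeholder column: ultimate displacement 0.12 m, yield force 850 kN,
# peak demand 0.06 m, dissipated hysteretic energy 180 kN*m
print(park_ang_damage_index(delta_max=0.06, energy_hyst=180.0,
                            delta_u=0.12, f_y=850.0))
```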

Keywords: damage index, far-field, near-fault, reinforced concrete bridge, seismic design and evaluation

Procedia PDF Downloads 115
15993 A Study of Social Media Users’ Switching Behavior

Authors: Chiao-Chen Chang, Yang-Chieh Chin

Abstract:

Social media has changed the way the network community is clustered, especially its location, from the original virtual space to an intertwined network, and communication between people has shifted from face-to-face interaction to a social-media-based communication model. However, social media users who have had a fixed engagement may intend to switch to another service provider because of the emergence of new forms of social media. For example, some Facebook or Twitter users switched to Instagram in 2014 because of message or image overload, and users may seek simpler and more instant social media as their main social networking tool. This study explores the effects of system feature overload, information overload, social monitoring concerns, problematic use, and privacy concerns as antecedents of social media fatigue, dissatisfaction, and alternative attractiveness, and how these in turn influence social media switching. The study uses an online questionnaire survey to collect sample data and then conducts factor analysis, path analysis, model-fit analysis, and mediation analysis with a structural equation model (SEM). The findings demonstrate significant effects along multiple paths. Based on these findings, the study puts forward implications for theory and practice.

Keywords: social media, switching, social media fatigue, alternative attractiveness

Procedia PDF Downloads 127
15992 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach

Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh

Abstract:

This study presents a hybrid pre-processing approach along with a conceptual model to enhance the accuracy of river discharge prediction. To achieve this goal, the Ensemble Empirical Mode Decomposition algorithm (EEMD), the Discrete Wavelet Transform (DWT), and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled with a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately; the strategy is capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI for use in the multi-station model. In the proposed feature selection method, uninformative sub-series were omitted to achieve better performance. The results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
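A condensed sketch of the decomposition-plus-regression idea (PyWavelets for the DWT and scikit-learn's epsilon-SVR standing in for LSSVM; the EEMD stage and the mutual-information selection are omitted, and the discharge series is synthetic):

```python
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(3)
t = np.arange(300)
q = 50 + 10 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, 300)  # synthetic discharge

# DWT pre-processing: keep the approximation sub-series as a smoothed signal
cA, cD = pywt.dwt(q, "db4")
q_smooth = pywt.idwt(cA, None, "db4")[:len(q)]

# Supervised framing: predict q[t] from the last three smoothed values
lags = 3
X = np.column_stack([q_smooth[i:len(q) - lags + i] for i in range(lags)])
y = q[lags:]

model = SVR(C=10.0, epsilon=0.1).fit(X[:250], y[:250])
rmse = np.sqrt(np.mean((model.predict(X[250:]) - y[250:]) ** 2))
print("held-out RMSE:", rmse)
```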

Keywords: river stage-discharge process, LSSVM, discrete wavelet transform, ensemble empirical mode decomposition, multi-station modeling

Procedia PDF Downloads 164
15991 Prompt Photons Production in Compton Scattering of Quark-Gluon and Annihilation of Quark-Antiquark Pair Processes

Authors: Mohsun Rasim Alizada, Azar Inshalla Ahmdov

Abstract:

Prompt photons are perhaps the most versatile tools for studying the dynamics of relativistic heavy-ion collisions. The study of photon radiation is of interest because, in most hadron interactions, photons emerge as a background to the other signals under study. The production of prompt photons in nucleon-nucleon collisions was previously studied in experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Due to the large energy of the colliding nucleons, many different elementary particles are produced in addition to prompt photons, and this additional particle production makes it difficult to determine the prompt photon production cross-section accurately. From this point of view, the experiments planned at the Nuclotron-based Ion Collider Facility (NICA) complex will have a great advantage, since the energies of the colliding heavy ions will reduce the number of additionally produced elementary particles. The study of prompt photon production is of particular importance for determining the gluon content of hadrons, since the photon carries information about the hard subprocess. In the present paper, prompt photon production in the quark-gluon Compton scattering and quark-antiquark annihilation processes is investigated. The matrix elements of the Compton scattering and quark-antiquark annihilation processes have been written down, their squares have been calculated in FeynCalc, the phase volume of the subprocesses has been determined, and an expression for the differential cross-section of the subprocesses has been obtained. From the resulting expressions for the squared matrix elements in the differential cross-section, we see that the cross-section depends not only on the energy of the colliding protons but also on the quark masses, etc. The differential cross-sections of the subprocesses are estimated, and it is shown that they decrease with increasing energy of the colliding protons. The asymmetry coefficient with respect to the polarization of the colliding protons is determined. The calculation showed that the squares of the matrix element of the Compton scattering process with and without the polarization of the colliding protons are identical; the asymmetry coefficient of this subprocess is zero, which is consistent with the literature. It is known that in any singly polarized process involving a photon, the squares of the matrix elements with and without the polarization of the initial particle must coincide; that is, the terms in the squared matrix element proportional to the degree of polarization vanish. The coincidence of the squares of the matrix elements indicates that the parity of the system is preserved. The asymmetry coefficient of the quark-antiquark annihilation process decreases linearly from +1 to -1 as the product of the polarization degrees of the colliding protons increases. Thus, the differential cross-sections of the subprocesses decrease with increasing energy of the colliding protons, and the asymmetry coefficient is maximal when the polarizations of the colliding protons are opposite and minimal when they are aligned. Taking into account the polarization of only the initial quarks and gluons in Compton scattering does not contribute to the differential cross-section of the subprocess.

Keywords: annihilation of a quark-antiquark pair, coefficient of asymmetry, Compton scattering, effective cross-section

Procedia PDF Downloads 136
15990 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are often time-to-event, so-called survival, data. The importance of robust models in this context is to compare the effects of randomly controlled experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association between response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for causal-effect estimation in modeling left-truncated and right-censored survival data. Despite its wide application and popularity, maximum likelihood estimation is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, modified estimating equations are proposed. Following intuitive estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant real-data example. To sum up, the bias of covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated into the model as a covariate to relax the assumption that failure time and truncation time are independent. Moreover, an expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments after adjusting for the bias introduced into the model by the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 111
15989 Emulsified Oil Removal in Produced Water by Graphite-Based Adsorbents Using Adsorption Coupled with Electrochemical Regeneration

Authors: Zohreh Fallah, Edward P. L. Roberts

Abstract:

One of the big challenges in produced water treatment is removing oil that is present in the form of emulsified droplets, which are not easily separated. An attractive approach is adsorption, as it is a simple and effective process; however, adsorbents must be regenerated to make the process cost-effective. Several sorbents have been tested for treating oily wastewater, but issues such as the high energy consumption of thermal regeneration of activated carbon have been reported. Due to their significant electrical conductivity, graphite intercalation compounds (GICs) were found to be suitable for electrochemical regeneration. They are non-porous materials with low surface area and fast adsorption kinetics, useful for the removal of low concentrations of organics. An innovative adsorption/regeneration process has been developed at the University of Manchester in which organics are adsorbed by a patented GIC adsorbent and the adsorbent is subsequently regenerated electrochemically. The oxidation of the adsorbed organics enables 100% regeneration, so the adsorbent can be reused over multiple adsorption cycles. GIC adsorbents are capable of removing a wide range of organics and pollutants; however, no comparable report is available on the removal of emulsified oil from produced water using this process. In this study, the performance of the technology for removing emulsified oil from wastewater was evaluated. Batch experiments were carried out to determine the adsorption kinetics and equilibrium isotherms for both real produced water and model emulsions. The amount of oil in the wastewater was measured by toluene extraction/fluorescence analysis before and after the adsorption and electrochemical regeneration cycles. It was found that the oil-in-water emulsion could be successfully treated by the process, with more than 70% of the oil removed.

Keywords: adsorption, electrochemical regeneration, emulsified oil, produced water

Procedia PDF Downloads 571
15988 Water Self Sufficient: Creating a Sustainable Water System Based on Urban Harvest Approach in La Serena, Chile

Authors: Zulfikar Dinar Wahidayat Putra

Abstract:

Water scarcity is a major challenge in arid areas. One such area is the city of La Serena in northern Chile, which is the case study of this paper. The paper seeks to identify a sustainable water system, using the urban harvest approach as a method to achieve water self-sufficiency for a neighborhood in La Serena. Using this method, it is possible to create a sustainable water system in the neighborhood that reduces water demand by up to 38% and wastewater production by 94%, even though full water self-sufficiency cannot be achieved because of the neighborhood's dependency on drinking water supplied by La Serena's water treatment plant.

Keywords: arid area, sustainable water system, urban harvest approach, self-sufficiency

Procedia PDF Downloads 252
15987 A Summary-Based Text Classification Model for Graph Attention Networks

Authors: Shuo Liu

Abstract:

In Chinese text classification tasks, redundant words and phrases can interfere with the extraction and analysis of text information, decreasing the accuracy of the classification model. To reduce irrelevant elements, exploit text content more efficiently, and improve the accuracy of text classification models, this paper first summarizes each text in the corpus using the TextRank algorithm, uses the words of the summary as nodes to construct a text graph, and then applies a graph attention network (GAT) to classify the text. In tests on a Chinese dataset collected from the web, classification accuracy was improved over directly generating graph structures from the full text.
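A compact sketch of the TextRank extraction step, which is simply PageRank run on a sentence-similarity graph (word-overlap similarity and toy English sentences stand in for the paper's Chinese corpus and its exact similarity measure):

```python
import itertools
import networkx as nx

sentences = [
    "the model classifies documents with a graph attention network",
    "redundant words interfere with text classification accuracy",
    "textrank extracts the most central sentences as a summary",
    "graph nodes are built from words in the extracted summary",
]

def overlap(a, b):
    """Simple word-overlap similarity between two sentences."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / (len(wa) + len(wb))

g = nx.Graph()
for (i, s1), (j, s2) in itertools.combinations(enumerate(sentences), 2):
    w = overlap(s1, s2)
    if w > 0:
        g.add_edge(i, j, weight=w)

scores = nx.pagerank(g, weight="weight")   # TextRank = PageRank on the graph
best = max(scores, key=scores.get)
print("top summary sentence:", sentences[best])
```

The words of the extracted summary would then become the nodes of the classification graph fed to the GAT.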

Keywords: Chinese natural language processing, text classification, abstract extraction, graph attention network

Procedia PDF Downloads 80
15986 Supplier Relationship Management Model for SME's E-Commerce Transaction Broker Case Study: Hotel Rooms Provider

Authors: Veronica S. Moertini, Niko Ibrahim, Verliyantina

Abstract:

As market intermediaries, e-commerce transaction broker firms need to collaborate closely with suppliers in order to develop the brands sought by customers. Developing a suitable electronic supplier relationship management (e-SRM) system is the solution to this need. In this paper, we propose a concept of e-SRM for transaction brokers owned by small and medium enterprises (SMEs), which includes an integrated e-SRM and e-CRM architecture and the e-SRM applications with their functions. We then discuss the customization and implementation of the proposed e-SRM model in a specific transaction broker selling hotel rooms, KlikHotel.com, which is owned by an SME. The implementation of the e-SRM at KlikHotel.com has successfully boosted both the number of suppliers (hotel members) and hotel room sales.

Keywords: e-CRM, e-SRM, SME, transaction broker

Procedia PDF Downloads 480
15985 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms

Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic

Abstract:

Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large share of the gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modeling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production through detailed modeling of gas desorption, diffusion, and non-linear flow mechanisms, combined with a statistical representation of these processes. The model represents the porous medium as a cube in which free gas is present, with a sphere inside it (SiC: Sphere in Cube model) where gas is adsorbed onto the kerogen or organic matter. The sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. As pressure declines, gas desorbs first from the outermost layer of the sphere, decreasing its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally has diffused out of the kerogen, adsorbed onto the available surface area, and then desorbed into the nanopores and micro-fractures in the cube. Each SiC cell idealizes a gas pathway and is characterized by the sphere diameter and the cube length: the diameter models gas storage, diffusion, and desorption, while the cube length accounts for the flow pathway through nanopores and micro-fractures. Many of these representative but general cells are assembled and linked to a well or hydraulic fracture. The paper quantitatively describes these processes and clarifies the geological conditions under which successful shale gas production can be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for shale gas production, with the spheres treated as a source term in each grid block. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of the model input properties on gas production.
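The adsorbed-gas storage that feeds the desorption step above is commonly described by a Langmuir isotherm; a small sketch of the gas released as pressure declines (the Langmuir volume and pressure are placeholder values, not the paper's inputs):

```python
def langmuir_adsorbed_volume(p, v_l=150.0, p_l=5.0):
    """Langmuir isotherm: adsorbed gas volume (scf/ton) at pressure p (MPa),
    where v_l is the Langmuir volume (maximum adsorbed gas) and p_l is the
    Langmuir pressure (pressure at half of v_l)."""
    return v_l * p / (p_l + p)

p_initial, p_current = 20.0, 8.0   # reservoir pressure decline (MPa)
released = (langmuir_adsorbed_volume(p_initial)
            - langmuir_adsorbed_volume(p_current))
print(f"gas desorbed per ton of shale: {released:.1f} scf")
```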

Keywords: adsorption, diffusion, non-linear flow, shale gas production

Procedia PDF Downloads 152
15984 Numerical Modeling of Hybrid Photovoltaic-Thermoelectric Solar Unit by Applying Various Cross-Sections of Cooling Ducts

Authors: Ziba Khalili, Mohsen Sheikholeslami, Ladan Momayez

Abstract:

Combining photovoltaic/thermal (PVT) systems with a thermoelectric (TE) module can raise energy yields, since the TE module boosts the system's energy conversion efficiency. In the current study, a PVT system integrated with a TE module was designed and simulated in ANSYS Fluent 19.2. A copper heat transfer tube (HTT) was employed for cooling the photovoltaic (PV) cells. Four shapes of HTT cross-section, i.e., circular, square, elliptical, and triangular, with equal cross-sectional areas were investigated. The influences of a Cu-Al2O3/water hybrid nanofluid (0.024% volume concentration), the fluid inlet velocity (uᵢ), and the amount of solar radiation (G) on the PV temperature (Tₚᵥ) and system performance were also investigated. The ambient temperature (Tₐ), wind speed (u𝓌), and fluid inlet temperature (Tᵢ) were taken as 25°C, 1 m/s, and 27°C, respectively. According to the obtained data, the triangular case had the greatest impact on reducing Tₚᵥ compared to the other cases. In the triangular case, using the hybrid nanofluid at 800 W/m² reduced Tₚᵥ by 0.6% compared to water at 0.19 m/s, while the thermal efficiency and the overall electrical efficiency (nₜ) of the system improved by 0.93% and 0.22%, respectively, at the same velocity. In the triangular case with G of 800 W/m² and uᵢ of 0.19 m/s, the highest thermal efficiency, thermal power (Eₜ), and overall electrical efficiency were obtained as 72.76%, 130.84 W, and 12.03%, respectively.

Keywords: electrical performance, photovoltaic/thermal, thermoelectric, hybrid nanofluid, thermal efficiency

Procedia PDF Downloads 65
15983 Application of Supervised Deep Learning-based Machine Learning to Manage Smart Homes

Authors: Ahmed Al-Adaileh

Abstract:

Renewable energy sources, domestic storage systems, controllable loads, and machine learning technologies will be key components of future smart home management systems. An energy management scheme is presented that uses a deep learning (DL) approach to support smart home management systems consisting of a standalone photovoltaic system, a storage unit, a heating, ventilation and air-conditioning system, and a set of conventional and smart appliances. The objective of the proposed scheme is to apply DL-based machine learning to predict various running parameters within a smart home's environment, so as to achieve maximum comfort levels for occupants, reduced electricity bills, and less dependency on the public grid. The problem is formulated using reinforcement learning, where decisions are taken by applying a continuous-time Markov decision process. The main contribution of this research is the proposed framework, which applies DL to enrich the system's supervised dataset and thereby broadly support smart home systems. A case study involving a set of conventional and smart appliances with dedicated processing units in an inhabited building demonstrates the validity of the proposed framework, with a visualization graph showing "before" and "after" results.
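A toy sketch of the reinforcement-learning loop described, using tabular Q-learning over a discretized hour-of-day state as a stand-in for the paper's continuous-time MDP (the tariff, comfort penalty, and discretization are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n_states, n_actions = 24, 2              # hour of day x {defer, run appliance}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

hours = np.arange(24)
price = 0.10 + 0.15 * ((hours >= 17) & (hours <= 21))   # peak tariff 17:00-21:00

def reward(hour, action):
    # Running costs the hourly tariff; deferring costs occupant comfort
    return -price[hour] if action == 1 else -0.15

for episode in range(2000):
    for hour in range(24):
        a = rng.integers(2) if rng.random() < eps else int(Q[hour].argmax())
        nxt = (hour + 1) % 24
        Q[hour, a] += alpha * (reward(hour, a) + gamma * Q[nxt].max() - Q[hour, a])

print("run appliance at hour 3?", bool(Q[3].argmax()))    # off-peak: likely yes
print("run appliance at hour 19?", bool(Q[19].argmax()))  # peak: likely no
```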

Keywords: smart home systems, machine learning, deep learning, Markov decision process

Procedia PDF Downloads 181
15982 Application of Neuro-Fuzzy Technique for Optimizing the PVC Membrane Sensor

Authors: Majid Rezayi, Sh. Shahaboddin, HNM E. Mahmud, A. Yadollah, A. Saeid, A. Yatimah

Abstract:

In this study, the adaptive neuro-fuzzy inference system (ANFIS) was applied to obtain a membrane composition model for the potential response of our previously reported polymeric PVC sensor for determining titanium (III) ions. The performance statistics of the artificial neural network (ANN) and linear regression models for predicting the potential slope from the membrane composition of the titanium (III) ion-selective electrode were compared with the ANFIS technique. The results show that the ANFIS model can be used as a practical tool for obtaining the Nernstian slope of the proposed sensor.

Keywords: adaptive neuro-fuzzy inference, PVC sensor, titanium (III) ions, Nernstian slope

Procedia PDF Downloads 265
15981 Production Factor Coefficients Transition through the Lens of State Space Model

Authors: Kanokwan Chancharoenchai

Abstract:

Economic growth is an important element of a country's development process. For developing countries like Thailand, to ensure continuous economic growth, the government usually implements various policies to stimulate the economy; they may take the form of fiscal, monetary, trade, and other policies. Given these different aspects, understanding the factors related to economic growth can allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest not only of policymakers but also of academics. This study investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters; the findings contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function with a non-linear Cobb-Douglas specification, with the rate of growth indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects such as human capital, international activity, and technological transfer from developed countries. In addition, the investigation takes internal and external instabilities into account, proxied by an unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively; the unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, REER uncertainty of the Thai baht, one-period-lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model. The autoregressive model assumes constant coefficients, which may introduce some bias. This is not the case for the recursive-coefficient model within the state space framework, which allows coefficients to transition over time. The powerful state space model provides the productivity, or effect, of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|), with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that these factors have been stable since the world financial crisis and the domestic political situation in Thailand, two events that could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the rather low technology embodied in capital goods imported from abroad; the Thai government should apply proactive policies, via taxation and specific credit policy, to improve technological advancement, for instance. Another interesting piece of evidence is trade openness, which shows a negative transition effect over the sample period. This could be explained by a loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should handle regulations and investment incentive policy carefully, focusing on strengthening small and medium enterprises.
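A minimal sketch of the recursive-coefficient idea behind the state space model: a regression whose coefficient follows a random walk, estimated with a scalar Kalman filter (the data are synthetic and the specification has one regressor, not the study's seven-variable AR(|2|) equation):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 200
x = rng.normal(size=T)                                 # one regressor
beta_true = 1.0 + np.cumsum(rng.normal(0, 0.05, T))    # random-walk coefficient
y = beta_true * x + rng.normal(0, 0.3, T)

# Kalman filter for y_t = beta_t * x_t + e_t,  beta_t = beta_{t-1} + w_t
beta, P = 0.0, 1.0          # state estimate and its variance
q, r = 0.05 ** 2, 0.3 ** 2  # state and observation noise variances
path = []
for t in range(T):
    P += q                                   # predict step
    k = P * x[t] / (x[t] ** 2 * P + r)       # Kalman gain
    beta += k * (y[t] - x[t] * beta)         # update with innovation
    P *= 1 - k * x[t]
    path.append(beta)

print("final estimate vs. truth:", path[-1], beta_true[-1])
```

Plotting `path` against `beta_true` shows how the filter tracks the coefficient's transition over time, which is the kind of evidence the study reads from its state coefficients.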

Keywords: autoregressive model, economic growth, state space model, Thailand

Procedia PDF Downloads 135
15980 Effect of Structural Change on Productivity Convergence: A Panel Unit Root Analysis

Authors: Amjad Naveed

Abstract:

This study analyses the role of structural change in the process of labour productivity convergence at the country and regional levels. Many forms of structural change have occurred within the European Union (EU) countries, i.e., variation in sectoral employment shares, changes in demand for products, variations in trade patterns, and advancement in technology, all of which may influence the process of convergence. Earlier studies of convergence have neglected the role of structural change, which may have led to different conclusions about the nature of convergence. The contribution of this study is to examine the role of structural change in testing labour productivity convergence at various levels. For the empirical analysis, data on 19 EU countries, 259 regions, and 6 industries are used for the period 1991-2009. The results indicate that convergence varies across the regional and country levels for different industries once the role of structural change is considered.

Keywords: labour productivity, convergence, structural change, panel unit root

Procedia PDF Downloads 264