Search results for: hysteresis envelope model
12948 Optimization Study and Optical Characterization of Bilayer Structures from Silicon Nitride
Authors: Beddiaf Abdelaziz
Abstract:
The optical characteristics of thin films of silicon oxynitride (SiOₓNy) prepared by the Low-Pressure Chemical Vapor Deposition (LPCVD) technique have been studied. The films are deposited from SiH₂Cl₂, N₂O and NH₃ gaseous mixtures, with flows of SiH₂Cl₂ and (N₂O+NH₃) of 200 sccm and 160 sccm, respectively. The deposited films have been characterized by ellipsometry. To model our silicon oxynitride SiOₓNy films, we have suggested two theoretical models (Maxwell Garnett and the Bruggeman effective medium approximation (BEMA)). These models have been applied to silicon oxynitride, treating the material as a heterogeneous medium formed by silicon oxide and silicon nitride. The models were validated by comparing the theoretical spectra with those measured by ellipsometry. This result permits us to obtain the optical refractive index of these films and their thickness. Ellipsometry analysis of the optical properties of the SiOₓNy films shows that the SiO₂ fraction decreases when the gaseous ratio NH₃/N₂O increases, whereas the increase of this ratio leads to an increase of the silicon nitride Si₃N₄ fraction. The study also shows that an increasing gaseous ratio leads to a strong incorporation of nitrogen atoms in the films. Also, the increase of the SiOₓNy refractive index toward the SiO₂ value shows that this insulating material has good dielectric quality. Keywords: ellipsometry, silicon oxynitride, model, refractive coefficient, effective medium
Procedia PDF Downloads 191
12947 Theoretical Prediction on the Lifetime of Sessile Evaporating Droplet in Blade Cooling
Authors: Yang Shen, Yongpan Cheng, Jinliang Xu
Abstract:
Effective blade cooling is of great significance for improving turbine performance. Mist cooling has emerged as a promising alternative to traditional single-phase cooling. In mist cooling, the injected droplet evaporates rapidly and cools the blade surface through the absorbed latent heat; hence, the lifetime of the evaporating droplet becomes critical for the design of the blade's cooling passages. There have been extensive studies on droplet evaporation, but most apply an isothermal model. In fact, the surface cooling effect can influence droplet evaporation greatly and prolong the droplet's evaporation lifetime significantly. In our study, a new theoretical model for sessile droplet evaporation with the surface cooling effect is built in toroidal coordinates. Three evaporation modes are analyzed over the evaporation lifetime: the "constant contact radius" (CCR) mode, the "constant contact angle" (CCA) mode, and the "stick-slip" (SS) mode. The dimensionless number E0 is introduced to indicate the strength of the evaporative cooling; it is defined based on the thermal properties of the liquid and the atmosphere. Our model accurately predicts the evaporation lifetime, as validated against available experimental data. The temporal variations of droplet volume, contact angle, and contact radius are then presented under the CCR, CCA, and SS modes, and the following conclusions are obtained.
1) The larger the dimensionless number E0, the longer the lifetime in all three evaporation modes; 2) The droplet volume over time still follows the "2/3 power law" in the CCA mode, as in the isothermal model without the cooling effect; 3) In the SS mode, a large transition contact angle reduces the evaporation time in the CCR stage and increases it in the CCA stage, so the overall lifetime is increased; 4) A correction factor for the instantaneous droplet volume is derived to predict the droplet lifetime accurately. These findings may be of great significance for exploring the dynamics and heat transfer of sessile droplet evaporation. Keywords: blade cooling, droplet evaporation, lifetime, theoretical analysis
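As a quick numerical illustration of the "2/3 power law" cited in conclusion 2, the sketch below (with arbitrary, assumed values for the initial volume and lifetime, not taken from the study) shows that a CCA-mode volume history of the form V(t) = V0(1 − t/tf)^(3/2) makes V^(2/3) decay exactly linearly in time:

```python
import numpy as np

# Hypothetical illustration of the "2/3 power law" for CCA-mode evaporation:
# if V(t) = V0 * (1 - t/tf)^(3/2), then V^(2/3) decreases linearly in time.
V0, tf = 2.0e-9, 100.0          # assumed initial volume (m^3) and lifetime (s)
t = np.linspace(0.0, tf, 101)
V = V0 * (1.0 - t / tf) ** 1.5  # assumed CCA volume history

v23 = V ** (2.0 / 3.0)
# A linear fit of V^(2/3) vs t should be essentially exact.
slope, intercept = np.polyfit(t, v23, 1)
residual = v23 - (slope * t + intercept)
print(slope < 0, float(np.max(np.abs(residual))))
```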
Procedia PDF Downloads 142
12946 Probing Mechanical Mechanism of Three-Hinge Formation on a Growing Brain: A Numerical and Experimental Study
Authors: Mir Jalil Razavi, Tianming Liu, Xianqiao Wang
Abstract:
Cortical folding, characterized by convex gyri and concave sulci, has an intrinsic relationship to the brain’s functional organization. Understanding the mechanism behind the brain’s convoluted patterns can provide useful clues to normal and pathological brain function. During development, the cerebral cortex experiences a noticeable expansion in volume and surface area accompanied by tremendous tissue folding, which may be attributed to many possible factors. Despite decades of endeavors, the fundamental mechanism and key regulators of this crucial process remain incompletely understood. Therefore, as a step toward unraveling the mystery of brain folding, we present a mechanical model of the formation of 3-hinges in a growing brain, a problem that has not been addressed before. A 3-hinge is defined as a gyral region where three gyral crests (hinge-lines) join. How and why the brain preferentially develops 3-hinges has not been well explained. We therefore offer a theoretical and computational explanation of the mechanism of 3-hinge formation in a growing brain and validate it against experimental observations. In the theoretical approach, the dynamic behavior of brain tissue is examined and described with the aid of a large-strain, nonlinear constitutive model. The derived constitutive model is used in the computational model to define the material behavior. Since the theoretical approach cannot predict the evolution of complex cortical convolutions after instability, nonlinear finite element models are employed to study 3-hinge formation and the secondary morphological folds of the developing brain. Three-dimensional (3D) finite element analyses of a multi-layer soft tissue model that mimics a small piece of the brain are performed to investigate the fundamental mechanism of consistent hinge formation in cortical folding.
Results show that after a certain amount of cortical growth, the mechanical model becomes unstable and, through the formation of creases, enters a new configuration with lower strain energy. With further growth, the shallow creases develop into convoluted patterns and then into 3-hinge patterns. Simulation results for the 3-hinges in these models show good agreement with experimental observations from macaque, chimpanzee, and human brain images. These results have great potential to reveal fundamental principles of brain architecture and to produce a unified theoretical framework that convincingly explains the intrinsic relationship between cortical folding and 3-hinge formation. Such a fundamental understanding of this relationship would potentially shed new light on the diagnosis of many brain disorders such as schizophrenia, autism, lissencephaly, and polymicrogyria. Keywords: brain, cortical folding, finite element, three hinge
Procedia PDF Downloads 236
12945 The Role of Personality Characteristics and Psychological Harassment Behaviors Which Employees Are Exposed on Work Alienation
Authors: Hasan Serdar Öge, Esra Çiftçi, Kazım Karaboğa
Abstract:
The main purpose of this research is to address the role of the psychological harassment behaviors (mobbing) to which employees are exposed, and of personality characteristics, in work alienation. The research population was composed of employees of the Provincial Special Administration. A survey with four sections was created to measure the variables and achieve the basic goals of the research. Correlation and step-wise regression analyses were performed to investigate the separate and overall effects of the sub-dimensions of psychological harassment behaviors and personality characteristics on employees' work alienation. Correlation analysis revealed significant but weak relationships between work alienation and both psychological harassment and personality characteristics. Step-wise regression analysis also revealed significant relationships between work alienation and assault on personality and direct negative behaviors (sub-dimensions of mobbing), as well as openness (a sub-dimension of personality characteristics). Each variable was introduced into the model step by step to investigate the effects of the significant variables in explaining the variation in work alienation. While the explanation ratio of the first model was 13%, the last model, including three variables, had an explanation ratio of 24%. Keywords: alienation, five-factor personality characteristics, mobbing, psychological harassment, work alienation
Procedia PDF Downloads 405
12944 Risk, Capital Buffers, and Bank Lending: The Adjustment of Euro Area Banks
Authors: Laurent Maurin, Mervi Toivanen
Abstract:
This paper estimates euro area banks’ internal target capital ratios and investigates whether banks’ adjustment to these targets had an impact on credit supply and securities holdings during the financial crisis period of 2005-2011. Using data on listed banks and country-specific macro-variables, a partial adjustment model is estimated in a panel context. The results indicate, firstly, that an increase in the riskiness of banks’ balance sheets positively influences the target capital ratios. Secondly, the adjustment towards higher equilibrium capital ratios has a significant impact on banks’ assets. The impact is found to be more sizeable on securities holdings than on loans, thereby suggesting a pecking order. Keywords: Euro area, capital ratios, credit supply, partial adjustment model
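The partial adjustment mechanism described above can be sketched as follows: each period, the capital ratio closes a fixed fraction λ of the gap to its (unobserved) target, so an OLS regression of the ratio on its own lag recovers the adjustment speed. The process, the riskiness proxy, and all parameter values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a partial-adjustment process for a capital ratio:
# k_t = (1 - lam) * k_{t-1} + lam * k_star_t + noise, with the target
# k_star driven by a riskiness proxy x. lam and beta are assumed values.
lam, beta = 0.3, 2.0
T = 2000
x = rng.normal(size=T)          # stand-in for balance-sheet riskiness
k = np.zeros(T)
for t in range(1, T):
    k_star = beta * x[t]
    k[t] = (1 - lam) * k[t - 1] + lam * k_star + 0.05 * rng.normal()

# OLS of k_t on k_{t-1} and x_t recovers (1 - lam) and lam * beta.
X = np.column_stack([k[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, k[1:], rcond=None)
lam_hat = 1.0 - coef[0]         # implied adjustment speed
print(round(lam_hat, 2))
```

In the paper's panel setting the same logic is applied across banks, with the target modelled as a function of risk and macro controls.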
Procedia PDF Downloads 448
12943 Using Machine Learning as an Alternative for Predicting Exchange Rates
Authors: Pedro Paulo Galindo Francisco, Eli Dhadad Junior
Abstract:
This study addresses the Meese-Rogoff puzzle by introducing the latest machine learning techniques as alternatives for predicting exchange rates. Using RMSE as a comparison metric, Meese and Rogoff discovered that economic models are unable to outperform the random walk model as short-term exchange rate predictors. In the decades since, no statistical prediction technique has proven effective in overcoming this obstacle; although there have been positive results, they did not apply to all currencies and sample periods. Recent advancements in artificial intelligence have paved the way for a new approach to exchange rate prediction. Leveraging this technology, we applied five machine learning techniques in an attempt to overcome the Meese-Rogoff puzzle. We considered daily data for the real, yen, British pound, euro, and Chinese yuan against the US dollar over the period from 2010 to 2023. Our results showed that none of the presented techniques was able to produce an RMSE lower than the random walk model. However, some models, particularly LSTM and N-BEATS, outperformed the ARIMA model. The results also suggest that machine learning models have untapped potential and could represent an effective long-term avenue for overcoming the Meese-Rogoff puzzle. Keywords: exchange rate, prediction, machine learning, deep learning
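A minimal sketch of the Meese-Rogoff style comparison: when the (synthetic) exchange rate truly follows a random walk, the no-change forecast achieves a lower one-step RMSE than a competing model. The series and the competitor rule below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy daily "exchange rate" in log levels: a driftless random walk.
n = 1000
y = np.cumsum(0.005 * rng.normal(size=n))

# One-step-ahead forecasts over the last 250 observations:
# the random walk predicts no change; a mean-reverting rule stands in
# for a (mis)specified structural or ML model.
test_idx = range(750, n)
rw_pred = np.array([y[t - 1] for t in test_idx])
mr_pred = np.array([0.9 * y[t - 1] for t in test_idx])  # hypothetical competitor
actual = y[list(test_idx)]

def rmse(pred):
    return float(np.sqrt(np.mean((actual - pred) ** 2)))

rmse_rw, rmse_mr = rmse(rw_pred), rmse(mr_pred)
print(rmse_rw < rmse_mr)  # the puzzle: the random walk is hard to beat
```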
Procedia PDF Downloads 32
12942 Modeling the Moment of Resistance Generated by an Ore-Grinding Mill
Authors: Marinka Baghdasaryan, Tigran Mnoyan
Abstract:
The pertinence of modeling the moment of resistance generated by an ore-grinding mill is substantiated. Based on the ranking of technological indices obtained from a survey among specialists at several beneficiation plants, the factors determining the level of the moment of resistance generated by the mill are revealed. An a priori diagram of the ranks is obtained, in which the factors are arranged in descending order of their degree of impact on the level of the moment. The obtained model of the moment of resistance reflects the technological character of the mill's operating modes and can be used to improve the operating modes of the motor-mill system and to prevent abnormal modes of the synchronous drive motor. Keywords: model, abnormal mode, mill, correlation, moment of resistance, rotational speed
Procedia PDF Downloads 451
12941 Effectiveness of Adopting Software Quality Frameworks in Software Organizations: A Qualitative Review
Authors: Sarah K. Amer, Nagwa Badr, Osman Ibrahim, Ahmed Hamad
Abstract:
This paper surveys the effectiveness of software process quality assurance frameworks, with some focus on Capability Maturity Model Integration (CMMI) - a framework that has become widely adopted in software organizations. The importance of quality improvement in software development, and the differences in the outcomes of quality framework implementation between Middle Eastern and North African (MENA-region) countries and non-MENA-region countries are discussed. The greatest challenges met in the MENA region are identified, with particular focus on Egypt and its rising software development industry. Keywords: software quality, software process improvement, software development methodologies, capability maturity model integration
Procedia PDF Downloads 354
12940 Air Pollution and Respiratory-Related Restricted Activity Days in Tunisia
Authors: Mokhtar Kouki Inès Rekik
Abstract:
This paper focuses on the assessment of the relationship between air pollution and morbidity in Tunisia. Air pollution is measured by the ambient ozone concentration, and morbidity is measured by the number of respiratory-related restricted activity days during the two-week period prior to the interview. Socioeconomic data are also collected in order to adjust for any confounding covariates. Our sample is composed of 407 Tunisian respondents: 44.7% are women, the average age is 35.2, nearly 69% live in a house built after 1980, and 27.8% reported at least one day of respiratory-related restricted activity. The model consists of regressing the number of respiratory-related restricted activity days on the air quality measure and the socioeconomic covariates. In order to account for zero-inflation and heterogeneity, we estimate several models (Poisson, negative binomial, zero-inflated Poisson, Poisson hurdle, negative binomial hurdle, and finite mixture Poisson). Bootstrapping and post-stratification techniques are used in order to correct for any sample bias. According to the Akaike information criterion, the hurdle negative binomial model has the best goodness of fit. The main result indicates that, after adjusting for socioeconomic data, the ozone concentration increases the probability of a positive number of restricted activity days. Keywords: bootstrapping, hurdle negbin model, overdispersion, ozone concentration, respiratory-related restricted activity days
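The hurdle structure favored by the AIC here separates the zero/positive decision from the size of positive counts. The toy sketch below (simulated data, not the survey sample; the shifted Poisson is only a stand-in for a zero-truncated count distribution) illustrates the resulting mean decomposition E[Y] = P(Y>0) · E[Y | Y>0]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate restricted-activity-day counts with a hurdle structure:
# a binary "any days at all" gate, then a positive count for those who pass.
n = 5000
gate = rng.random(n) < 0.28                        # ~27.8% report >= 1 day
days = np.where(gate, rng.poisson(3.0, n) + 1, 0)  # positives shifted off zero

# The hurdle model factorises the mean: E[Y] = P(Y>0) * E[Y | Y>0].
p_pos = np.mean(days > 0)
mean_pos = days[days > 0].mean()
print(abs(p_pos * mean_pos - days.mean()))         # identity holds exactly
```

In the fitted model each part gets its own regression (a binary model for the hurdle, a zero-truncated negative binomial for the positives), which is what lets the zero mass and the count distribution respond differently to ozone.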
Procedia PDF Downloads 257
12939 Study on the Relationship between the Emission Property of Barium-Tungsten Cathode and Micro-Area Activity
Authors: Zhen Qin, Yufei Peng, Jianbei Li, Jidong Long
Abstract:
In order to study the activity of coated aluminate barium-tungsten cathodes during activation, aging, poisoning, and long-term use, we tested the micro-area emission performance of the cathode under different conditions using a hot-cathode micro-area emission uniformity study device. The change in the activity of the cathode micro-areas was obtained. The influence of micro-area activity on cathode performance is explained by the ageing model of the barium-tungsten cathode. This helps improve the design and processing of the cathode and points the way to identifying the factors that affect lifetime in cathode operation. Keywords: barium-tungsten cathode, ageing model, micro-area emission, emission uniformity
Procedia PDF Downloads 409
12938 Effects of Cash Transfers Mitigation Impacts in the Face of Socioeconomic External Shocks: Evidence from Egypt
Authors: Basma Yassa
Abstract:
Evidence on the effectiveness of cash transfers in mitigating the impacts of macro and idiosyncratic shocks has been mixed and is mostly concentrated in Latin America, Sub-Saharan Africa, and South Asia, with very limited evidence from the MENA region. Yet conditional cash transfer schemes have been used continually, especially in Egypt, as the main social protection tool in response to the recent socioeconomic crises and macro shocks. We use two panel datasets and one cross-sectional dataset to estimate the effectiveness of cash transfers as a shock-mitigation mechanism in the Egyptian context. In this paper, the results from the different models (a panel fixed effects model and a regression discontinuity design (RDD) model) confirm that micro and macro shocks lead to a significant decline in several household-level welfare outcomes, and that Takaful cash transfers have a significant positive impact in mitigating the negative shock impacts, especially on households’ debt incidence, debt levels, and asset ownership, but not necessarily on food and non-food expenditure levels. The results indicate that the transfers decreased the household incidence of debt by up to 12.4 percent and lowered debt size by approximately 18 percent among Takaful beneficiaries compared to non-beneficiaries. Similar evidence is found for asset ownership levels: the RDD model shows significant positive effects on total asset ownership and productive asset ownership, but the model failed to detect positive impacts on per capita food and non-food expenditures. Further extensions are in progress to compare these results with difference-in-differences (DID) estimates using the nationally representative ELMPS panel data rounds (2018/2024).
Finally, our initial analysis suggests that conditional cash transfers are effective in buffering the negative shock impacts on certain welfare indicators even after the successive macroeconomic shocks of 2022 and 2023 in the Egyptian context. Keywords: cash transfers, fixed effects, household welfare, household debt, micro shocks, regression discontinuity design
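A minimal sketch of the sharp RDD logic used for estimates of this kind: eligibility flips at a cutoff of a running variable (here a hypothetical poverty proxy score), and a local linear regression on both sides estimates the jump in the outcome at the cutoff. All data and parameter values are simulated assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sharp RDD sketch: a (hypothetical) Takaful-style transfer is assigned
# when a poverty proxy score falls below a cutoff at zero.
n = 4000
score = rng.uniform(-1, 1, n)        # running variable
treated = (score < 0).astype(float)
true_effect = 2.0                    # assumed jump in the welfare outcome
outcome = 1.0 + 0.8 * score + true_effect * treated + rng.normal(0, 1, n)

# Local linear regression on both sides of the cutoff (bandwidth 0.5),
# allowing different slopes via the interaction term.
h = 0.5
m = np.abs(score) < h
X = np.column_stack([np.ones(m.sum()), treated[m], score[m],
                     treated[m] * score[m]])
coef, *_ = np.linalg.lstsq(X, outcome[m], rcond=None)
effect_hat = coef[1]                 # estimated discontinuity at the cutoff
print(round(effect_hat, 2))
```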
Procedia PDF Downloads 46
12937 Bonding Characteristics Between FRP and Concrete Substrates
Authors: Houssam A. Toutanji, Meng Han
Abstract:
This study focuses on the development of a fracture-mechanics-based model that predicts the debonding behavior of FRP-strengthened RC beams. A database of 351 concrete prisms bonded with FRP plates and tested in single and double shear was prepared, and the existing fracture-mechanics-based models were applied to it. Unfortunately, the properties of the adhesive layer, especially a soft adhesive layer, used on the specimens in the existing studies could not always be found. Thus, the proposed new model is based on fifteen newly conducted pullout tests and twenty-four data points selected from two independent existing studies, chosen for their use of soft adhesive layers and the availability of adhesive properties. Keywords: carbon fiber composite materials, interface response, fracture characteristics, maximum shear stress, ultimate transferable load
Procedia PDF Downloads 269
12936 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects at the physical, psychological, and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they neither integrate the different strategies in a coherent way nor take into account recent research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (Modification of the Situation and Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called General Strategies (Wellbeing and Avoidance). This study presents the process of developing and validating an instrument to measure coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used to validate the questionnaire. Regarding reliability, the indices for inter-rater agreement (Krippendorff’s alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using MPlus supports the existence of a model with six factors.
The results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies for dealing with stress and thus prevent mental health issues. Keywords: acceptance, coping strategies, stress, validation process
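For the internal-consistency step, Cronbach's alpha can be computed directly from the item-score matrix. A minimal sketch, with simulated one-factor data standing in for the 18-item questionnaire (all values are illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
# Simulated 18-item scale: a common factor plus item noise yields high alpha.
n = 300
factor = rng.normal(size=(n, 1))
scores = factor + 0.8 * rng.normal(size=(n, 18))
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```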
Procedia PDF Downloads 339
12935 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis
Authors: Renata Konadu
Abstract:
In Ghana, electricity has become the main form of energy on which all sectors of the economy rely for their businesses. Therefore, as the economy grows, the demand for and consumption of electricity grow alongside it due to this heavy dependence. However, since the supply of electricity has not increased to match demand, there have been frequent power outages and load shedding affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed, considering the three classes of consumers: residential, industrial, and non-residential. The main argument, however, is that exports of electricity to neighbouring countries should be included in the electricity consumption model and considered one of the significant factors that can decrease or increase consumption. The author made use of multivariate time series data from 1980-2010 and econometric models such as Ordinary Least Squares (OLS) and a Vector Error Correction Model. Findings show that GDP growth, urban population growth, electricity exports, and industry value added to GDP were cointegrated. The results also show that there is unidirectional causality from electricity exports, GDP growth, and industry value added to GDP to electricity consumption in the long run. In the short run, however, causality was found between all the variables and electricity consumption. The results have useful implications for energy policy makers, especially with regard to electricity consumption, demand, and supply. Keywords: electricity consumption, energy policy, GDP growth, vector error correction model
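The error-correction logic behind the model can be sketched with the second step of an Engle-Granger style procedure: if consumption and a driver share a stochastic trend, the lagged equilibrium error should enter the short-run equation with a negative (error-correcting) coefficient. Everything below is simulated for illustration, with the cointegrating coefficient assumed to be 1:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two cointegrated series: a "GDP" random walk and electricity
# consumption that tracks it with stationary deviations.
T = 2000
gdp = np.cumsum(rng.normal(size=T))
cons = gdp + rng.normal(scale=0.5, size=T)

# Second step: regress the change in consumption on the lagged
# equilibrium error; error correction implies a negative coefficient.
resid = cons - gdp                      # equilibrium error (beta assumed 1)
d_cons = np.diff(cons)
X = np.column_stack([np.ones(T - 1), resid[:-1]])
coef, *_ = np.linalg.lstsq(X, d_cons, rcond=None)
print(coef[1] < 0)                      # pull back toward equilibrium
```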
Procedia PDF Downloads 437
12934 Modelling Mode Choice Behaviour Using Cloud Theory
Authors: Leah Wright, Trevor Townsend
Abstract:
Mode choice models are crucial instruments in the analysis of travel behaviour. These models show the relationship between an individual’s choice of transportation mode for a given O-D pair and the individual’s socioeconomic characteristics, such as household size, income level, age and/or gender, and the features of the transportation system. The most popular functional forms of these models are based on utility-based choice theory, which addresses the uncertainty in the decision-making process with the use of an error term. However, with the development of artificial intelligence, many researchers have started to take a different approach to travel demand modelling. In recent times, researchers have looked at using neural networks, fuzzy logic, and rough set theory to develop improved mode choice formulations. The concept of cloud theory has recently been introduced to model decision-making under uncertainty. Unlike the previously mentioned theories, cloud theory recognises a relationship between randomness and fuzziness, two of the most common types of uncertainty. This research aims to investigate the use of cloud theory in mode choice models. This paper highlights the conceptual framework of a mode choice model using cloud theory. Merging decision-making under uncertainty with mode choice models is state of the art. The cloud theory model is expected to address the issues and concerns with the nested logit model and to improve the design of mode choice models and their use in travel demand analysis. Keywords: cloud theory, decision-making, mode choice models, travel behaviour, uncertainty
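The forward normal cloud generator commonly used in cloud theory characterizes a concept by three numbers: expectation Ex, entropy En, and hyper-entropy He. Each "drop" is drawn with a randomized dispersion, which is how the theory links fuzziness and randomness. A minimal sketch (the travel-cost concept and its parameter values are invented for illustration and are not from this paper):

```python
import numpy as np

rng = np.random.default_rng(9)

def normal_cloud(Ex: float, En: float, He: float, n: int):
    """Forward normal cloud generator: drops and their certainty degrees."""
    En_prime = rng.normal(En, He, n)       # second-order randomness
    x = rng.normal(Ex, np.abs(En_prime))   # cloud drops
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))  # certainty degree
    return x, mu

# Hypothetical "perceived travel cost" concept for one mode.
drops, certainty = normal_cloud(Ex=30.0, En=5.0, He=0.5, n=1000)
print(certainty.min() > 0.0, certainty.max() <= 1.0)
```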
Procedia PDF Downloads 388
12933 1-g Shake Table Tests to Study the Impact of PGA on Foundation Settlement in Liquefiable Soil
Authors: Md. Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed
Abstract:
Liquefaction-induced ground settlement has caused severe damage to structures in past decades. The amount of building settlement caused by liquefaction is directly proportional to the intensity of the ground shaking. To reduce this soil liquefaction effect, it is essential to examine the influence of peak ground acceleration (PGA); unfortunately, limited studies have been carried out on this issue. In this study, a series of moderate-scale 1-g shake table experiments was conducted at the University of Nevada, Reno to evaluate the influence of PGA, at the same shaking duration, on liquefiable soil layers. The model is based on a large-scale shake table test conducted at the University of California, San Diego, with a scaling factor of N = 5. The model ground has three soil layers with relative densities of 50% for the crust, 30% for the liquefiable layer, and 90% for the dense layer, respectively. In addition, a shallow foundation is seated over the unsaturated crust layer. After preparing the model, input motions with various peak ground accelerations (0.16g, 0.25g, and 0.37g) and the same duration (10 s) were applied. Based on the experimental results, when the PGA increased from 0.16g to 0.37g, the foundation settlement increased from 20 mm to 100 mm. In addition, the settlement expected from the scaling factor was 25 mm, while the actual settlement for a PGA of 0.25g over 10 seconds was 50 mm. Keywords: foundation settlement, liquefaction, peak ground acceleration, shake table test
Procedia PDF Downloads 77
12932 Numerical Analysis of a Pilot Solar Chimney Power Plant
Authors: Ehsan Gholamalizadeh, Jae Dong Chung
Abstract:
The solar chimney power plant is a feasible solar thermal system that produces electricity from the Sun. The objective of this study is to investigate buoyancy-driven flow and heat transfer through a built pilot solar chimney system called the 'Kerman Project'. The system has a chimney with a height of 60 m and a diameter of 3 m, the average radius of its solar collector is about 20 m, and its average collector height is about 2 m. A three-dimensional simulation was conducted to analyze the system using computational fluid dynamics (CFD). In this model, the radiative transfer equation was solved using the discrete ordinates (DO) radiation model, taking into account non-gray radiation behavior. In order to model solar irradiation from the sun’s rays, the solar ray tracing algorithm was coupled to the computation via a source term in the energy equation. The model was validated by comparison with the experimental data of the Manzanares prototype and with the performance of the built pilot system. Then, based on the numerical simulations, velocity and temperature distributions through the system, the temperature profile of the ground surface, and the system performance were presented. The analysis accurately captures the flow and heat transfer characteristics through the pilot system and predicts its performance. Keywords: buoyancy-driven flow, computational fluid dynamics, heat transfer, renewable energy, solar chimney power plant
Procedia PDF Downloads 262
12931 Drift-Wave Turbulence in a Tokamak Edge Plasma
Authors: S. Belgherras Bekkouche, T. Benouaz, S. M. A. Bekkouche
Abstract:
Tokamak plasma is far from having a stable background. The study of turbulent transport is an important part of current research, and advanced scenarios have been devised to minimize it. To this end, we used a three-wave interaction model, which allows us to investigate the occurrence of drift-wave turbulence driven by pressure gradients in the edge plasma of a tokamak. In order to simulate the energy redistribution among different modes, growth/decay rates for the three waves were added. After numerical simulation, we can determine certain aspects of the temporal dynamics exhibited by the model. Indeed, for a wide range of the wave decay rate, an intermittent transition from periodic behavior to chaos is observed. A chaos control strategy was then introduced with the aim of reducing or eliminating the weak turbulence. Keywords: wave interaction, plasma drift waves, wave turbulence, tokamak, edge plasma, chaos
Procedia PDF Downloads 552
12930 Vehicular Emission Estimation of Islamabad by Using Copert-5 Model
Authors: Muhammad Jahanzaib, Muhammad Z. A. Khan, Junaid Khayyam
Abstract:
Islamabad is the capital of Pakistan, with a population of 1.365 million people and a vehicular fleet size of 0.75 million, growing annually at a rate of 11%. Vehicular emissions are a major source of black carbon (BC). In developing countries like Pakistan, most vehicles consume conventional fuels like petrol, diesel, and CNG. These fuels are major emitters of pollutants such as CO, CO₂, NOx, CH₄, VOCs, and particulate matter (PM10). Carbon dioxide and methane are leading contributors to global warming, with global shares of 9-26% and 4-9%, respectively. NOx is the precursor of nitrates, which ultimately form aerosols that are noxious to human health. In this study, COPERT (Computer Programme to Calculate Emissions from Road Transport) was used for vehicular emission estimation in Islamabad. COPERT is a Windows-based program developed for the calculation of emissions from the road transport sector. The emissions were calculated for the year 2016, covering pollutants such as CO, NOx, VOC, and PM, as well as energy consumption. Different variables were input to the model for emission estimation, including meteorological parameters, average vehicular trip length and respective time duration, fleet configuration, activity data, degradation factor, and fuel effect. The estimated emissions of CO, CH₄, CO₂, NOx, and PM10 were found to be 9814.2, 44.9, 279196.7, 3744.2, and 304.5 tons, respectively. Keywords: COPERT model, emission estimation, PM10, vehicular emission
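At its core, the bottom-up logic behind a COPERT-style inventory reduces to emissions = vehicle count × mileage × emission factor, summed over fleet categories (COPERT additionally makes the factors depend on speed, temperature, and technology class). The sketch below uses made-up fleet splits and emission factors, not actual COPERT values or the Islamabad inputs:

```python
# Skeleton of a COPERT-style bottom-up calculation: E = N * M * EF per
# category. All numbers below are illustrative placeholders.
fleet = {
    # category: (vehicle count, mean annual mileage in km, CO in g/km)
    "petrol_car": (400_000, 12_000, 2.5),
    "diesel_car": (150_000, 15_000, 0.6),
    "cng_car":    (200_000, 10_000, 1.0),
}

def co_tonnes(fleet: dict) -> float:
    """Total CO in tonnes: sum N * M * EF over categories, grams -> tonnes."""
    grams = sum(n * km * ef for n, km, ef in fleet.values())
    return grams / 1e6

total = co_tonnes(fleet)
print(total)  # 15350.0 tonnes with the placeholder inputs above
```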
Procedia PDF Downloads 262
12929 Experimental Assessment of Artificial Flavors Production
Authors: M. Unis, S. Turky, A. Elalem, A. Meshrghi
Abstract:
The esterification kinetics of acetic acid with isopropanol in the presence of sulfuric acid as a homogeneous catalyst was studied in isothermal batch experiments at 60, 70, and 80 °C and at different molar ratios of isopropanol to acetic acid. Investigation of the reaction kinetics indicated that a low molar ratio favors the esterification reaction, because the reaction is acid-catalyzed. The maximum conversion, approximately 60.6%, was obtained at 80 °C for a molar ratio of 1:3 acid:alcohol. It was found that increasing the reaction temperature increases the rate constant and the conversion at a given mole ratio, which is attributed to the exothermic nature of the esterification. The homogeneous reaction has been described with a simple power-law model. The chemical equilibrium constant calculated from the kinetic model is in agreement with the measured chemical equilibrium. Keywords: artificial flavors, esterification, chemical equilibria, isothermal
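The power-law kinetics described here can be sketched by integrating an assumed second-order rate law r = k·Ca·Cb with an Arrhenius rate constant. The pre-exponential factor, activation energy, and batch time below are invented placeholders, not the fitted values, and the sketch ignores the reverse (hydrolysis) reaction that caps the real equilibrium conversion near 60%:

```python
import numpy as np

def conversion(T_K: float, t_end: float = 200.0, n: int = 20000) -> float:
    """Acetic acid conversion for an assumed 2nd-order power-law rate
    r = k*Ca*Cb with an Arrhenius k; all parameters are illustrative."""
    A, Ea, R = 5.0e4, 5.0e4, 8.314   # L/(mol s), J/mol, J/(mol K) - assumed
    k = A * np.exp(-Ea / (R * T_K))
    Ca, Cb = 1.0, 3.0                # 1:3 acid:alcohol, mol/L - assumed
    Ca0 = Ca
    dt = t_end / n
    for _ in range(n):               # explicit Euler integration
        r = k * Ca * Cb
        Ca -= r * dt
        Cb -= r * dt
    return 1.0 - Ca / Ca0

x60 = conversion(333.15)   # 60 deg C
x80 = conversion(353.15)   # 80 deg C
print(x80 > x60)           # higher temperature -> faster, higher conversion
```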
Procedia PDF Downloads 335
12928 Modelling of Groundwater Resources for Al-Najaf City, Iraq
Authors: Hayder H. Kareem, Shunqi Pan
Abstract:
Groundwater is a vital water resource in many areas of the world, particularly in the Middle East, where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources has become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict flow patterns and assess water resource security, as well as the groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred on Al-Najaf City, which is located in mid-west Iraq adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model is also implemented with the distribution of 69 wells in the area, with steady pre-defined hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity measured at the well locations is interpolated for model use. The model is calibrated against the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement: the standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE and correlation coefficient are 0.297 m, 2.087 m, 6.899% and 0.971 respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year.
Hydraulic conductivity is found to be another parameter which can affect the results significantly; therefore it requires high-quality field data. The results show a general flow pattern from west to east across the study area, which agrees well with the observations and the gradient of the ground surface. It is found that with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping quantity shows that the Euphrates River supplies approximately 11759 m3/day of water into the groundwater, instead of gaining 11178 m3/day from the groundwater if there were no pumping from the wells. It is expected that the results obtained from this study can provide important information for the sustainable and effective planning and management of the regional groundwater resources of Al-Najaf City.
Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW
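The calibration statistics quoted above (RMSE, normalized RMSE as a percentage of the observed head range, and the correlation coefficient) can be computed directly from paired observed and simulated heads. The sketch below shows one common set of definitions; the head values are made up for illustration, and the study's own definitions (e.g. of SEE and the normalization) may differ in detail.

```python
# Sketch: goodness-of-fit statistics for model calibration against observed
# hydraulic heads. RMSE in metres; normalized RMSE as a percentage of the
# observed head range; r is the Pearson correlation coefficient.
import math

def calibration_stats(observed, simulated):
    n = len(observed)
    resid = [s - o for o, s in zip(observed, simulated)]
    rmse = math.sqrt(sum(e * e for e in resid) / n)
    nrmse = 100.0 * rmse / (max(observed) - min(observed))
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    r = cov / math.sqrt(sum((o - mo) ** 2 for o in observed) *
                        sum((s - ms) ** 2 for s in simulated))
    return rmse, nrmse, r

# Hypothetical heads (m) at a handful of wells:
obs = [21.0, 22.5, 24.0, 25.5, 27.0]
sim = [21.2, 22.3, 24.4, 25.1, 27.3]
rmse, nrmse, r = calibration_stats(obs, sim)
print(f"RMSE={rmse:.3f} m, NRMSE={nrmse:.2f}%, r={r:.3f}")
```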
Procedia PDF Downloads 213
12927 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake
Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama
Abstract:
The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage across wide areas of eastern Japan, and a large number of earthquake observation records were obtained at various locations. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of buildings during earthquakes. This paper presents an earthquake response simulation analysis (hereafter, seismic response analysis) conducted using data recorded during the main earthquake (hereafter, the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and of the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis: the analysis model and conditions are shown, and the analysis results are compared with the observational records. Using the analysis results, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the first natural period and the first damping ratio) with an Auto-Regressive eXogenous (ARX) model, we compare the analysis results with the observational records to evaluate the accuracy of the response analysis. In this study, a lumped-mass system SR model was used to conduct the seismic response analysis, with the observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and a level 2 earthquake.
The ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction was low. 2) During the 3/11 main shock, the observed waves showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. The prolonged eigenperiod was due to the nonlinearity of the superstructure, while the effect of the nonlinearity of the ground appears to have been small. 3) For the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values, which means that the effect of the nonlinearity caused by the main shock was retained by the building; it is important to consider this when conducting the response evaluation. 4) The first natural period and the damping ratio during vibration were evaluated by an ARX model. Our results show that the response analysis model in this study is generally good at estimating changes in the response of the building during vibration.
Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake
Procedia PDF Downloads 164
12926 Input Data Balancing in a Neural Network PM-10 Forecasting System
Authors: Suk-Hyun Yu, Heeyong Kwon
Abstract:
Recently, PM-10 has become a social and global issue. It is one of the major air pollutants affecting human health, and it therefore needs to be forecast rapidly and precisely. However, PM-10 comes from various emission sources, and its concentration level depends largely on the meteorological and geographical factors of the local and global region, so forecasting PM-10 concentration is very difficult. A neural network model can be used in this case, but cases of high-concentration PM-10 are rare, which makes the learning of the neural network model difficult. In this paper, we suggest a simple input balancing method for when the data distribution is uneven, based on the probability of appearance of the data. Experimental results show that input balancing makes the neural networks' learning easier and improves the forecasting rates.
Keywords: artificial intelligence, air quality prediction, neural networks, pattern recognition, PM-10
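The paper describes its balancing method only as being based on the probability of appearance of the data. One common way to realize that idea is to resample each class in inverse proportion to its frequency, so rare high-concentration samples appear as often as common ones during training. The sketch below is such an oversampler; the binning into "high"/"normal" and the readings are our assumptions, not the paper's scheme.

```python
# Sketch: oversample rare classes so every class reaches the majority-class
# count in the training set. The labels and readings are hypothetical.
import random

def balance_by_inverse_frequency(samples, label_of, seed=0):
    """Duplicate samples of rare labels until all labels reach the majority count."""
    rng = random.Random(seed)
    by_label = {}
    for s in samples:
        by_label.setdefault(label_of(s), []).append(s)
    target = max(len(v) for v in by_label.values())
    balanced = []
    for label, group in by_label.items():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

# Hypothetical PM-10 readings (ug/m3); 'high' episodes are rare.
data = [12, 25, 30, 18, 22, 160, 15, 28, 20, 24]
label = lambda x: "high" if x >= 100 else "normal"
balanced = balance_by_inverse_frequency(data, label)
counts = {lab: sum(1 for s in balanced if label(s) == lab) for lab in ("high", "normal")}
print(counts)
```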
Procedia PDF Downloads 232
12925 Enhancing the Resilience of Combat System-Of-Systems Under Certainty and Uncertainty: Two-Phase Resilience Optimization Model and Deep Reinforcement Learning-Based Recovery Optimization Method
Authors: Xueming Xu, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
A combat system-of-systems (CSoS) comprises various types of functional combat entities that interact to meet corresponding task requirements in the present and future. Enhancing the resilience of CSoS holds significant military value in optimizing the operational planning process, improving military survivability, and ensuring the successful completion of operational tasks. Accordingly, this research proposes an integrated framework called CSoS resilience enhancement (CSoSRE) to enhance the resilience of CSoS from a recovery perspective. Specifically, this research presents a two-phase resilience optimization model to define a resilience optimization objective for CSoS. This model considers not only task baseline, recovery cost, and recovery time limit but also the characteristics of emergency recovery and comprehensive recovery. Moreover, the research extends it from the deterministic case to the stochastic case to describe the uncertainty in the recovery process. Based on this, a resilience-oriented recovery optimization method based on deep reinforcement learning (RRODRL) is proposed to determine a set of entities requiring restoration and their recovery sequence, thereby enhancing the resilience of CSoS. This method improves the deep Q-learning algorithm by designing a discount factor that adapts to changes in CSoS state at different phases, simultaneously considering the network's structural and functional characteristics within CSoS. Finally, extensive experiments are conducted to test the feasibility, effectiveness and superiority of the proposed framework. The obtained results offer useful insights for guiding operational recovery activity and designing a more resilient CSoS.
Keywords: combat system-of-systems, resilience optimization model, recovery optimization method, deep reinforcement learning, certainty and uncertainty
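The method's key twist on deep Q-learning is a discount factor that adapts to the recovery phase. The toy tabular sketch below shows only that adaptive-discount idea in the standard Q-update; the phase rule, gamma values, states, and actions are our assumptions for illustration, not the RRODRL algorithm.

```python
# Toy sketch of a Q-learning update whose discount factor depends on the
# recovery phase (emergency vs. comprehensive). All values are illustrative.

def phase_gamma(phase):
    # Assumption: emergency recovery weighs immediate reward more heavily,
    # comprehensive recovery is far-sighted.
    return 0.5 if phase == "emergency" else 0.95

def q_update(q, state, action, reward, next_state, phase, alpha=0.1):
    """One Q-learning step: Q <- Q + alpha * (r + gamma * max_a' Q(s',a') - Q)."""
    gamma = phase_gamma(phase)
    best_next = max(q.get((next_state, a), 0.0) for a in ("repair", "wait"))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q

q = {("degraded", "repair"): 0.0, ("recovered", "wait"): 10.0, ("recovered", "repair"): 0.0}
q_update(q, "degraded", "repair", reward=1.0, next_state="recovered", phase="emergency")
print(q[("degraded", "repair")])
```

In the paper this update acts through a deep network over the CSoS graph rather than a table, with the adaptive gamma serving the same role.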
Procedia PDF Downloads 16
12924 Modeling The Deterioration Of Road Bridges At The Provincial Level In Laos
Authors: Hatthaphone Silimanotham, Michael Henry
Abstract:
The effective maintenance of road bridge infrastructure is becoming a widely researched topic in the civil engineering field. Deterioration is one of the main issues in bridge performance, and it is necessary to understand how bridges deteriorate in order to optimally plan budget allocation for bridge maintenance. In Laos, many bridges are in a deteriorated state, which may affect their performance. Due to bridge deterioration, the Ministry of Public Works and Transport is interested in deterioration models for allocating the budget efficiently and supporting bridge maintenance planning. A deterioration model can be used to predict a bridge's future condition based on its observed behavior in the past. This paper analyzes the available inspection data of road bridges on the classified road network to build deterioration prediction models for the main bridge types found at the provincial level (concrete slab, concrete girder, and steel truss), using probabilistic deterioration modeling by the linear regression method. The analysis targets these three bridge types in the 18 provinces of Laos and estimates the bridge deterioration rating to evaluate each bridge's remaining life. This research thus considers the relationship between the service period and the bridge condition to represent the probability of the bridge's condition in the future. The results of the study can be used for a variety of bridge management tasks, including maintenance planning, budgeting, and evaluating bridge assets.
Keywords: deterioration model, bridge condition, bridge management, probabilistic modeling
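The deterministic core of such a regression-based deterioration model is a straight-line fit of condition rating against service period, extrapolated to the age at which the rating reaches a threshold. The sketch below shows that idea with ordinary least squares; the ratings, rating scale, and threshold are hypothetical, not Lao inspection data, and the probabilistic layer around the fit is omitted.

```python
# Sketch: fit condition rating vs. bridge age by ordinary least squares,
# then extrapolate to the age at which the rating reaches a threshold.
# Inspection data and the threshold are hypothetical.

def fit_line(ages, ratings):
    n = len(ages)
    mx = sum(ages) / n
    my = sum(ratings) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(ages, ratings)) / \
            sum((x - mx) ** 2 for x in ages)
    intercept = my - slope * mx
    return slope, intercept

def remaining_life(ages, ratings, current_age, threshold):
    slope, intercept = fit_line(ages, ratings)
    age_at_threshold = (threshold - intercept) / slope
    return age_at_threshold - current_age

# Hypothetical ratings on a 5 (new) to 1 (critical) scale:
ages = [0, 5, 10, 15, 20]
ratings = [5.0, 4.6, 4.1, 3.4, 3.0]
rl = remaining_life(ages, ratings, current_age=20, threshold=2.0)
print(f"estimated remaining life: {rl:.1f} years")
```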
Procedia PDF Downloads 159
12923 Analysis of a IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which further complicates visual diagnosis and can greatly delay disease detection. In this context, deep learning methods have arisen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features, to test the model's ability to generalize its outcomes. The performance of the model in detecting R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise, and when presented with data it has not been trained on.
It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
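For contrast with the deep learning approach, R-peaks are classically found by simple peak-picking: local maxima above an amplitude threshold, separated by a physiological refractory period. The sketch below implements that baseline on a synthetic spike train; it is our illustration of the classical method, not the paper's IncResU-Net model, and the threshold and refractory values are assumptions.

```python
# Baseline sketch: threshold + refractory-period peak picking on an
# ECG-like signal. The synthetic waveform stands in for real ECG data.

def detect_r_peaks(signal, threshold, refractory):
    """Return indices of local maxima above `threshold`, at least
    `refractory` samples apart (keeping the earlier peak)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            if not peaks or i - peaks[-1] >= refractory:
                peaks.append(i)
    return peaks

# Synthetic "ECG": flat baseline with sharp spikes every 50 samples.
sig = [0.0] * 200
for k in (40, 90, 140, 190):
    sig[k] = 1.0
peaks = detect_r_peaks(sig, threshold=0.5, refractory=30)
print(peaks)
```

Such fixed thresholds break down under noise and morphology changes, which is precisely the gap the learned model targets.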
Procedia PDF Downloads 186
12922 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder
Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen
Abstract:
Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; this task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative encoder-decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information during explanation generation. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called logic supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
Keywords: natural language inference, explanation generation, variational auto-encoder, generative model
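Posterior collapse happens when the KL term of the variational objective is driven to zero, so the latent variable carries no information and the decoder ignores it. As a minimal illustration of the quantity involved (our sketch, not the VariationalEG code), the KL divergence between a diagonal-Gaussian posterior and a standard-normal prior has the closed form below.

```python
# Sketch: KL divergence KL(q(z|x) || N(0, I)) for a diagonal Gaussian
# posterior with mean mu and log-variance logvar. When this term collapses
# to ~0 across the data, the latent variable encodes nothing.
import math

def gaussian_kl(mu, logvar):
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

# A posterior equal to the prior gives zero KL (the collapsed case):
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))
# A posterior that actually encodes information has positive KL:
print(gaussian_kl([1.0, -0.5], [0.0, 0.0]))
```

Logic supervision, as described above, adds a training signal on the latent variable so that this term cannot collapse without sacrificing the supervised objective.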
Procedia PDF Downloads 151
12921 Bridging the Gap through New Media Technology Acceptance: Exploring Chinese Family Business Culture
Authors: Farzana Sharmin, Mohammad Tipu Sultan
Abstract:
Emerging new media technologies such as social media and social networking sites have changed family business dynamics in East Asia. Family business trends in China have developed at an exponential rate towards technology, and in the last two decades many of these family businesses have succeeded in becoming major players in the Chinese and world economies. However, very little literature is available in the Chinese context regarding social media acceptance in terms of the family business. Therefore, this study has tried to bridge the gap between culture and new media technology to understand the attitude of young Chinese entrepreneurs towards the family business. This paper focuses on two cultural dimensions (collectivism and long-term orientation) adopted from Geert Hofstede, together with perceived usefulness and perceived ease of use adopted from the Technology Acceptance Model (TAM), to explore the actual behavior of technology acceptance for the family business. A quantitative survey method (n=275) was used to collect data from Chinese family business owners in Shanghai. Inferential statistical analysis was applied to extract trait factors and to verify the model. The research results found that using social media for family business promotion is highly influenced by cultural values (collectivism and long-term orientation). The theoretical contribution of this research may also assist policymakers and practitioners in other developing countries in advertising and promoting the family business through social media.
Keywords: China, cultural dimensions, family business, technology acceptance model, TAM
Procedia PDF Downloads 147
12920 Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models
Authors: Keyi Wang
Abstract:
Similar to the touchscreen, hand gesture based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method for an image-based hand gesture image and video clip recognition system using a CNN (Convolutional Neural Network). A dataset containing 6 hand gesture images is used to train a 2D CNN model, achieving ~98% accuracy. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips, resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with the pre-trained models is able to recognize static and dynamic hand gestures.
Keywords: deep learning, hand gesture recognition, computer vision, image processing
Procedia PDF Downloads 139
12919 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms
Authors: Rikson Gultom
Abstract:
Hate speech and abusive language on social media are difficult to detect; usually they are detected only after going viral in cyberspace, which is of course too late for prevention. An early detection system with fairly good accuracy is needed to reduce the conflicts in society caused by social media posts that attack individuals, groups, and governments in Indonesia. The purpose of this study is to find an early detection model for the social medium Twitter, using the machine learning method with the highest accuracy among those studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the support vector machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented the accuracy of the six algorithms as a graph based on the Indonesian-language Twitter dataset, and concluded which model achieved the highest accuracy.
Keywords: abusive language, hate speech, machine learning, optimization, social media
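Wrapping a classifier in a genetic algorithm, as in SVM-GA, amounts to evolving candidate solutions (e.g. hyperparameter or feature-subset chromosomes) by selection, crossover, and mutation against a fitness such as cross-validated accuracy. The toy sketch below shows the shape of that loop on a bit-string with a known optimum; the bit-counting fitness stands in for a real classifier evaluation, and all parameters are illustrative assumptions.

```python
# Toy genetic algorithm sketch: tournament selection, one-point crossover,
# bit-flip mutation. Fitness = number of 1-bits, a stand-in for the
# cross-validated accuracy of a configured classifier.
import random

def evolve(n_bits=20, pop_size=30, generations=60, seed=42):
    rng = random.Random(seed)
    fitness = lambda c: sum(c)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # One-point crossover.
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with small per-bit probability.
            child = [b ^ (rng.random() < 0.02) for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(sum(best))
```

In the study's setting, decoding a chromosome would configure (or select features for) the SVM, NB, or RFDT classifier, and the fitness evaluation would train and validate it.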
Procedia PDF Downloads 128