Search results for: reduced order macro models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22291

21691 An Optimization Algorithm for Reducing the Liquid Oscillation in the Moving Containers

Authors: Reza Babajanivalashedi, Stefania Lo Feudo, Jean-Luc Dion

Abstract:

Liquid sloshing is a crucial problem for the dynamics of moving containers in the packaging industries. Sloshing has so far been modeled mainly within the framework of fluid dynamics or by using equivalent mechanical models for different kinds of movements and shapes of containers. Nevertheless, these approaches do not allow the shape of the free surface of the liquid to be determined for irregularly shaped moving containers, so experimental measurements may be required. If there is too much slosh in the moving tank, the liquid can be splashed out onto the packages, so the free-surface oscillation must be controlled or reduced to eliminate splashing. The purpose of this research is to propose an optimization algorithm for finding an optimum command law that reduces the surface elevation. In the first step, the free surface of the liquid is simulated based on separation of variables and weak formulation models. Then, genetic and gradient algorithms are developed to find the optimum command law. The optimum command law is compared with existing command laws, and the results show a significant difference in surface oscillation between the optimum and existing command laws. The algorithm is applicable to a wide variety of bottles when a camera is used to detect the liquid elevation, and it can produce new command laws for different kinds of tanks to reduce the surface oscillation and remove the splashing phenomenon.
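
As a rough illustration of the idea, the sketch below tunes a single command-law parameter (the ramp time of a trapezoidal velocity profile) to minimize the peak response of a mass-spring-damper slosh analogue. The oscillator parameters, the profile parameterization, and the use of SciPy's differential evolution as a genetic-type optimizer are all illustrative assumptions, not the authors' implementation.

# Minimal sketch, not the authors' code: optimize the ramp time of a
# trapezoidal velocity profile so that a mass-spring-damper slosh analogue
# shows minimal peak free-surface displacement. All parameters are assumed.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

wn, zeta = 6.0, 0.02   # assumed slosh natural frequency (rad/s) and damping
T, D = 2.0, 0.5        # move duration (s) and travel distance (m)

def accel(t, t_ramp):
    """Container acceleration of a trapezoidal velocity profile."""
    a = D / (t_ramp * (T - t_ramp))   # constant ramp acceleration
    if t < t_ramp:
        return a
    if T - t_ramp < t <= T:
        return -a
    return 0.0

def peak_slosh(params):
    """Peak oscillator displacement over the move plus the residual phase."""
    (t_ramp,) = params
    rhs = lambda t, y: [y[1], -2 * zeta * wn * y[1] - wn**2 * y[0] - accel(t, t_ramp)]
    sol = solve_ivp(rhs, (0.0, 2 * T), [0.0, 0.0], max_step=5e-3)
    return float(np.max(np.abs(sol.y[0])))

# differential evolution stands in here for the paper's genetic algorithm
result = differential_evolution(peak_slosh, bounds=[(0.05, 0.95)], seed=0)
print(f"optimal ramp time: {result.x[0]:.3f} s, peak slosh: {result.fun:.4f} m")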

Keywords: sloshing phenomenon, separation variables, weak formulation, optimization algorithm, command law

Procedia PDF Downloads 127
21690 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches that are available to model data collected with reference to location in space, from classical spatial econometrics to recent developments for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature with those from hierarchical modelling and analysis of spatial data, in order to identify possible new directions for the processing of count data in a spatial hierarchical Bayesian econometric context.
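
As a concrete reference point for the class of models under review, the sketch below simulates count data from a Poisson model whose latent log-intensity carries a spatial lag. The weights construction and parameter values are illustrative assumptions rather than any specific model from the literature.

import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical row-standardized spatial weights from nearby locations
coords = rng.uniform(size=(n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
W = ((dist > 0) & (dist < 0.15)).astype(float)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1.0)

rho = 0.5                          # spatial lag autocorrelation (assumed)
beta = np.array([1.0, 0.8])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# latent log-intensity: eta = rho * W @ eta + X @ beta + eps
eta = np.linalg.solve(np.eye(n) - rho * W, X @ beta + rng.normal(scale=0.3, size=n))
y = rng.poisson(np.exp(eta))       # spatially autocorrelated counts
print(y[:10])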

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 572
21689 Electrochemical Deposition of Pb and PbO2 on Polymer Composites Electrodes

Authors: A. Merzouki, N. Haddaoui

Abstract:

Polymers have a strong reputation as electrical insulators. These materials are characterized by low weight, low cost, and a wide range of physical and chemical properties, and they have conquered application domains that until recently were the exclusive territory of metals. In this work, we used composite materials (polymers/conductive fillers) as electrodes and attempted to cover them with metallic lead layers in order to use them as current collector grids in lead-acid battery plates.

Keywords: electrodeposition, polymer composites, carbon black, acetylene black

Procedia PDF Downloads 437
21688 Factors Affecting Special Core Analysis Resistivity Parameters

Authors: Hassan Sbiga

Abstract:

Laboratory measurements were undertaken on core samples selected from three different fields (A, B, and C) of the Nubian Sandstone Formation in the central graben reservoirs in Libya. These measurements were conducted in order to determine the factors that affect resistivity parameters and to investigate the effect of rock heterogeneity and wettability on these parameters. This included determining the saturation exponent (n) in the laboratory at two stages: before and after wettability measurements were conducted on the samples, in order to find any effect on the saturation exponent. Another objective of this work was to quantify experimentally the pore and porosity types (macro- and micro-porosity) that have an effect on the electrical properties, by integrating capillary pressure curves with other routine and special core analyses. These experiments were performed for the first time to obtain a relation between pore size distribution and the saturation exponent n. Changes were observed in the formation resistivity factor and cementation exponent due to ambient conditions and changes of overburden pressure. The cementation exponent also decreased from GHE-5 to GHE-8. Changes were also observed in the saturation exponent (n) and water saturation (Sw) before and after the wettability measurement. Samples with an oil-wet tendency have higher irreducible brine saturation and higher Archie saturation exponent values than samples with a uniform water-wet surface. The experimental results indicate that there is a good relation between resistivity and pore type, depending on the pore size. When oil begins to penetrate micro-pore systems in measurements of resistivity index versus brine saturation (after the wettability measurement), a significant change occurs in the slope of the resistivity index relationship.
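
For orientation, the Archie relations behind these parameters can be written as F = a/phi^m for the formation resistivity factor and I = Sw^(-n) for the resistivity index. The short sketch below evaluates the resistivity index for two assumed saturation exponents (illustrative values, not the paper's measured data) to show how a higher n, typical of oil-wet samples, raises resistivity at a given saturation.

# Illustrative Archie calculation (standard form assumed, not the paper's data):
# formation factor F = a * phi**(-m); resistivity index I = Sw**(-n)
import numpy as np

phi, a, m = 0.18, 1.0, 2.0           # assumed porosity and Archie constants
print(f"formation factor F = {a * phi**(-m):.1f}")

Sw = np.array([1.0, 0.8, 0.5, 0.3])  # brine saturation
for n in (1.8, 2.4):                 # water-wet vs. oil-wet tendency (assumed)
    print(f"n = {n}: I = {np.round(Sw**(-n), 2)}")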

Keywords: part of thesis, cementation, wettability, resistivity

Procedia PDF Downloads 230
21687 Creation of GaxCo1-xZnSe0.4 (x = 0.1, 0.3, 0.5) Nanoparticles Using Pulse Laser Ablation Method

Authors: Yong Pan, Li Wang, Xue Qiong Su, Dong Wen Gao

Abstract:

Nanomaterials have received extensive attention in recent years because of their wide range of applications. Various nanomaterials such as nanoparticles, nanowires, nanorings, nanostars and other nanostructures have begun to be systematically studied. The preparation of these materials by chemical methods is not only costly but also involves long cycles and high toxicity. At the same time, the preparation of nanoparticles of multi-doped composites has been limited by the special structure of the materials. In order to prepare multi-doped composites with the same structure as the macro-materials and to simplify the preparation method, GaxCo1-xZnSe0.4 (x = 0.1, 0.3, 0.5) nanoparticles were prepared by the Pulse Laser Ablation (PLA) method. The particle composition and structure are systematically investigated by X-ray diffraction (XRD) and Raman spectroscopy, which confirm the success of the preparation and the same concentrations in the nanoparticles (NPs) as in the target. The morphology of the NPs, characterized by Transmission Electron Microscopy (TEM), indicates that circular-shaped particles were obtained. Fluorescence properties are reflected by the PL spectra, which demonstrate the best performance at the composition Ga0.3Co0.7ZnSe0.4. Therefore, all the results suggest that PLA is a promising route to prepare multi-doped NPs, since it can modulate the performance of the NPs.

Keywords: PLA, physics, nanoparticles, multi-doped

Procedia PDF Downloads 152
21686 Downside Risk Analysis of the Nigerian Stock Market: A Value at Risk Approach

Authors: Godwin Chigozie Okpara

Abstract:

This paper estimates Value at Risk (VaR) using standard GARCH, EGARCH, and TARCH models on a day-of-the-week return series (of 246 days) from the Nigerian stock market. The asymmetric return distribution and fat-tail phenomenon in financial time series were accommodated by estimating the models with normal, Student's t, and generalized error distributions. The analysis, based on the Akaike Information Criterion, suggests that the EGARCH model with Student's t innovation distribution furnishes the more accurate estimate of VaR. In light of this, we apply the Kupiec likelihood ratio test of proportional failure rates to the VaR derived from the EGARCH model in order to assess its short- and long-position performance. The result shows that as alpha ranges from 0.05 to 0.005 for short positions, the failure rate significantly exceeds the prescribed quantiles, whereas for long positions there is no significant difference between the failure rate and the prescribed quantiles. This suggests that investors and portfolio managers in the Nigerian stock market can hold long trading positions, i.e., buy assets, with less concern about when asset prices will fall. Precisely, the VaR estimates for the long position range from -4.7% at the 95 percent confidence level to -10.3% at the 99.5 percent confidence level.
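
A minimal sketch of the Kupiec proportion-of-failures test used here, assuming the standard LR statistic with a chi-squared(1) reference distribution; the counts passed in at the bottom are placeholders, not the paper's results.

import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations: int, n_obs: int, alpha: float) -> float:
    """p-value of Kupiec's proportion-of-failures LR test.

    H0: the observed VaR violation rate equals the nominal rate alpha.
    """
    x, n = violations, n_obs
    pi = x / n
    if x == 0:                      # guard the log(0) corner case
        lr = -2 * n * np.log(1 - alpha)
    else:
        ll0 = (n - x) * np.log(1 - alpha) + x * np.log(alpha)
        ll1 = (n - x) * np.log(1 - pi) + x * np.log(pi)
        lr = -2 * (ll0 - ll1)
    return float(chi2.sf(lr, df=1))

print(kupiec_pof(violations=21, n_obs=246, alpha=0.05))  # placeholder counts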

Keywords: downside risk, value-at-risk, failure rate, Kupiec LR tests, GARCH models

Procedia PDF Downloads 426
21685 The Effects of Different Agroforestry Practices on Glomalin Related Soil Protein, Soil Aggregate Stability and Organic Carbon-Association with Soil Aggregates in Southern Ethiopia

Authors: Nebiyou Masebo

Abstract:

The severity of land degradation in southern Ethiopia has been increasing due to high population density and the replacement of an age-old agroforestry (AF) based agricultural system with monocropping. The consequences of these activities, combined with climate change, have impaired soil biota, soil organic carbon (SOC), soil glomalin, soil aggregation, and aggregate stability. AF systems could curb these problems because they are ecologically and economically sustainable. This study aimed to determine the effect of agroforestry practices (AFPs) on soil glomalin, soil aggregate stability (SAS), and aggregate association with SOC. Soil samples (from two depth levels: 0-30 and 30-60 cm) and woody species data were collected from homegarden based agroforestry practice (HAFP), cropland based agroforestry practice (ClAFP), woodlot based agroforestry practice (WlAFP), and trees on soil and water conservation based agroforestry practice (TSWAFP) using systematic sampling. In this study, both easily extractable glomalin related soil protein (EEGRSP) and total glomalin related soil protein (TGRSP) were significantly (p<0.05) higher in HAFP than in the other practices, in the decreasing order HAFP>WlAFP>TSWAFP>ClAFP at the upper surface, but in the subsurface in the decreasing order WlAFP>HAFP>TSWAFP>ClAFP. The macroaggregate fraction of the AFPs ranged from 22.64-36.51%, with the lowest in ClAFP and the highest in HAFP; the order for the subsurface was the same, but SAS decreased with increasing soil depth. The micro-aggregate fraction ranged from 15.9-24.56%, with the lowest in HAFP and the highest in ClAFP. Besides, the association of OC with both macro- and micro-aggregates was greatest in HAFP, followed by WlAFP. The findings also showed that both glomalin and SAS increased significantly with woody species diversity and richness. Thus, AFPs with good management can play a role in the maintenance of biodiversity, glomalin content, and other soil quality parameters, with future implications for a stable ecosystem.

Keywords: agroforestry, soil aggregate stability, glomalin, aggregate-associated carbon, HAFP, ClAFP, WlAFP, TSWAFP

Procedia PDF Downloads 77
21684 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects

Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour

Abstract:

One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for the provision of engineering geological data in a predefined format. This is particularly evident in highway and railroad tunnel projects, in which there are a number of tunnels and different professional teams involved. In this regard, comprehensive software needs to be designed using accepted methods in order to help engineering geologists prepare standard reports that contain sufficient input data for the design stage. To address this need, applied software has been designed using macro capabilities and the Visual Basic for Applications (VBA) programming language within Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, and penetration rate, can be calculated and reported in a standard format.
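
To illustrate the kind of calculation such a tool automates, the sketch below sums pre-assigned ratings for the five basic parameters of Bieniawski's RMR rock mass classification. It is a generic illustration in Python, not the authors' VBA implementation, and the ratings shown are hypothetical.

# Generic rock-mass-rating style calculation (illustrative, not the authors' tool)
def rmr_basic(ratings: dict) -> int:
    """Sum of the five basic RMR parameter ratings (Bieniawski 1989 scheme)."""
    required = {"strength", "rqd", "spacing", "condition", "groundwater"}
    missing = required - ratings.keys()
    if missing:
        raise ValueError(f"missing ratings: {missing}")
    return sum(ratings[k] for k in required)

# hypothetical borehole interval
print(rmr_basic({"strength": 7, "rqd": 13, "spacing": 10,
                 "condition": 20, "groundwater": 10}))  # -> 60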

Keywords: engineering geology, rock mass classification, rock mechanics, tunnel

Procedia PDF Downloads 60
21683 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources for solving this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge in detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor: densely packed buildings with similar construction materials are difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest, and the training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source for quality checking the detected buildings, which has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
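
A minimal sketch of the fine-tuning setup described: the mask-RCNN keyword suggests an implementation along the lines of torchvision's Mask R-CNN, but the class count, learning rate, and training schedule below are assumptions, and the dataset loading is omitted. The detection and mask heads of a pre-trained model are replaced for a background-plus-building label set, and training is simply run past the usual early-stopping point.

# Hedged sketch: fine-tune a pre-trained Mask R-CNN for building footprints,
# deliberately training long enough to overfit a small regional sample.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + building (assumed label set)
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)
model.train()
# for epoch in range(60):                  # run far past the validation optimum
#     for images, targets in data_loader:  # labels from open-source vectors
#         losses = model(images, targets)
#         loss = sum(losses.values())
#         optimizer.zero_grad(); loss.backward(); optimizer.step()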

Keywords: building detection, disaster relief, Mask R-CNN, satellite mapping

Procedia PDF Downloads 157
21682 Phase Behavior Modelling of Libyan Near-Critical Gas-Condensate Field

Authors: M. Khazam, M. Altawil, A. Eljabri

Abstract:

Fluid properties in states near a vapor-liquid critical region are the most difficult to measure and to predict with EoS models. The principal difficulty is that near-critical property variations do not follow the same mathematics as conditions far away from the critical region. The Libyan NC98 field in the Sirte basin is a typical example of a near-critical fluid, characterized by a high initial condensate-gas ratio (CGR) greater than 160 bbl/MMscf and a maximum liquid drop-out of 25%. The objective of this paper is to model the NC98 phase behavior with a proper selection of EoS parameters, and also to model reservoir depletion versus the gas cycling option using measured PVT data and EoS models. The outcomes of our study revealed that, for an accurate gas and condensate recovery forecast during depletion, the most important PVT data to match are the gas-phase Z-factor and the C7+ fraction as functions of pressure. A reasonable match, within -3% error, was achieved for ultimate condensate recovery at an abandonment pressure of 1500 psia. The smooth transition from gas condensate to volatile oil was fairly well simulated by the tuned PR EoS, with a predicted GOC at approximately 14,380 ftss. The optimum gas cycling scheme, in order to maximize condensate recovery, should not be performed at pressures below 5700 psia. The contribution of condensate vaporization for this field is marginal, within 8% to 14%, compared to gas-gas miscible displacement. Therefore, if a gas recycling scheme is to be considered for this field, it is recommended to start it at an early stage of field development.
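
As a reminder of the quantity being matched, the sketch below solves the Peng-Robinson cubic for the compressibility factor Z of a pure component. The methane critical properties used are textbook values; a real near-critical condensate such as NC98 would require a multicomponent characterization with tuned C7+ properties.

# Peng-Robinson Z-factor for a pure component (illustration only)
import numpy as np

R = 8.314  # J/(mol K)

def pr_z(T, P, Tc, Pc, omega):
    """Real roots of Z^3 - (1-B)Z^2 + (A-3B^2-2B)Z - (AB-B^2-B^3) = 0."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    roots = np.roots([1.0, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# methane at reservoir-like conditions (textbook Tc, Pc, omega)
print(pr_z(T=350.0, P=20e6, Tc=190.6, Pc=4.599e6, omega=0.011))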

Keywords: EoS models, gas-condensate, gas cycling, near critical fluid

Procedia PDF Downloads 305
21681 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies. The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, an ideal cell size is sought so that the analysis in the FDTD environment agrees more closely with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The validation of the model is done through a comparison between the results obtained by the FDTD method, via the electric field values and the currents in the components, and analytical results using circuit parameters.
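
For reference, the 3-D Yee-grid Courant limit takes the form dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)). The snippet below evaluates it for a few assumed cell sizes, simply to show how refining the cells that discretize a lumped element forces the time step down.

import numpy as np

c0 = 299_792_458.0  # speed of light in vacuum (m/s)

def courant_dt(dx, dy, dz):
    """Maximum stable time step for the 3-D Yee leapfrog scheme."""
    return 1.0 / (c0 * np.sqrt(dx**-2 + dy**-2 + dz**-2))

for h in (1e-3, 0.5e-3, 0.25e-3):   # assumed cubic cell sizes (m)
    print(f"cell {h*1e3:.2f} mm -> dt_max = {courant_dt(h, h, h):.3e} s")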

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 135
21680 Emancipation through the Inclusion of Civil Society in Contemporary Peacebuilding: A Case Study of Peacebuilding Efforts in Colombia

Authors: D. Romero Espitia

Abstract:

Research on peacebuilding has taken a critical turn, examining the neoliberal and hegemonic conception of peace operations. Alternative peacebuilding models have been analyzed, but the scholarly discussion fails to bring them together or to form connections between them. The objective of this paper is to rethink peacebuilding by extracting the positive aspects of the various peacebuilding models, connecting them with the local context, and thereby promoting emancipation in contemporary peacebuilding efforts. Moreover, local ownership has been widely labelled as one of, if not the, core principle necessary for a successful peacebuilding project; yet definitions of what constitutes the 'local' remain debated. Through a qualitative review of the literature, this paper unpacks the contemporary conception of peacebuilding in nexus with 'local ownership' as manifested through civil society. Using Colombia as a case study, it argues that a new peacebuilding framework, one that reconsiders the terms of engagement between international and national actors, is needed in order to foster effective peacebuilding efforts in contested transitional states.

Keywords: civil society, Colombia, emancipation, peacebuilding

Procedia PDF Downloads 119
21679 Oil Exploitation, Environmental Injustice and Decolonial Nonrecognition: Exploring the Historical Accounts of Host Communities in South-Eastern Nigeria

Authors: Ejikeme Johnson Kanu

Abstract:

This research explores the environmental justice of host communities in south-eastern Nigeria whose sources of livelihood have been destroyed by oil exploitation. Environmental justice scholarship on the area often adopts a Western liberal ideology built on macro-level synthesis (the Niger Delta). This study therefore explored the sufficiency, or otherwise, of adopting a Western liberal framing of environmental justice (EJ) in the area, a framing that neglects the impact of colonialism and cultural domination. Mixed archival research supplemented by secondary analysis guided this study. Drawing from the data analysis, the paper first argues that micro-level studies are required to either validate or invalidate studies done at the macro level (the Niger Delta), which have often been used to generalize about environmental injustice within host communities, even though the south-eastern communities differ significantly from the south-south in terms of language, culture, and socio-political and economic formation, indicating that the drivers of EJ may differ among them. Secondly, the paper argues that the EJ framing from the Western worldview adopted in the study area is insufficient for understanding the environmental injustice suffered there, and that an environmental justice framing is needed that considers the impact of colonialism and the nonrecognition of the cultural identities of the host communities, which breed environmental injustice. The study therefore concludes by drawing from decolonial theory to consider how the framing of EJ could move beyond Western liberal EJ to Indigenous environmental justice.

Keywords: environmental justice, culture, decolonial, nonrecognition, indigenous environmental justice

Procedia PDF Downloads 121
21678 Designing a Crowbar for Women: An Ergonomic Approach

Authors: Prakash Chandra Dhara, Rupa Maity, Mousumi Chatterjee

Abstract:

Crowbars are used for gardening. The same tools are used by both male and female gardeners, yet the existing crowbars are not suitable for female gardeners. The present study aimed to design a crowbar suitable for use by women for gardening, from the viewpoint of ergonomics. The study was carried out on 50 women in different villages of Howrah district in West Bengal state. Different models of existing crowbars commonly used by the women were collected and evaluated by examining their shape and size. The problems of using the existing crowbars were assessed by direct observation during operation. Musculoskeletal disorders of the subjects associated with crowbar use were evaluated by the modified Nordic questionnaire method. The anthropometric dimensions, especially hand dimensions, of the subjects were taken in standardized static conditions. Considering the problems of using the existing crowbars, design concepts were developed, and accordingly three prototype models (P1, P2, P3) were prepared for the design of a modified crowbar for women. Psychophysical analysis of these prototypes was made by paired comparison tests, in which subjective preferences for different characteristics of the crowbar, e.g., length, weight, length and breadth of the blade, handle diameter, and position of the handle, were determined. From the results of the paired comparison tests and the percentile values of the hand dimensions, a modified design of the crowbar was suggested. Prototype model P1 possessed more of the preferred characteristics of the tool than the other prototype models. In the final design, the weight of the tool and the length of the blade were reduced relative to the existing crowbar, and other dimensions were also changed. Two handles were suggested in the redesigned tool for better gripping and operation. The modified crowbar was evaluated by studying the body joint angles, viz., wrist, shoulder, and elbow, to assess the suitability of the design. It was concluded that the redesigned crowbar was suitable for women's use.

Keywords: body dimension, crowbar, ergo-design, women, hand anthropometry

Procedia PDF Downloads 232
21677 Mathematical Modeling of Carotenoids and Polyphenols Content of Faba Beans (Vicia faba L.) during Microwave Treatments

Authors: Ridha Fethi Mechlouch, Ahlem Ayadi, Ammar Ben Brahim

Abstract:

Given the importance of preserving polyphenols and carotenoids during thermal processing, we attempted in this study to investigate the variation of these two parameters in faba beans during microwave treatment at different power densities (1, 2, and 3 W/g), and then to perform mathematical modeling using non-linear regression analysis to evaluate the model constants. The models were tested against the measured variation of the carotenoid and polyphenol ratios of the faba beans to validate the experimental results. Exponential models were found to be suitable to describe the variation of the carotenoid ratio (R² = 0.945, 0.927, and 0.946) and of the polyphenol ratio (R² = 0.931, 0.989, and 0.982) for power densities of 1, 2, and 3 W/g, respectively. The effect of the microwave power density Pd (W/g) on the coefficient k of the models was also investigated. The coefficient is highly correlated (R² = 1) and can be expressed as a polynomial function.
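
A minimal sketch of the fitting step, assuming a first-order exponential ratio model C/C0 = exp(-k t) and hypothetical data points; the paper's measured values are not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, k):
    """First-order ratio model C/C0 = exp(-k t)."""
    return np.exp(-k * t)

t = np.array([0, 5, 10, 15, 20], float)          # treatment time (min), hypothetical
ratio = np.array([1.0, 0.82, 0.66, 0.55, 0.44])  # hypothetical carotenoid ratio
(k,), _ = curve_fit(exp_model, t, ratio)
ss_res = np.sum((ratio - exp_model(t, k))**2)
ss_tot = np.sum((ratio - ratio.mean())**2)
print(f"k = {k:.4f} 1/min, R^2 = {1 - ss_res / ss_tot:.3f}")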

Keywords: microwave treatment, power density, carotenoid, polyphenol, modeling

Procedia PDF Downloads 243
21676 Thorium Extraction with Cyanex272 Coated Magnetic Nanoparticles

Authors: Afshin Shahbazi, Hadi Shadi Naghadeh, Ahmad Khodadadi Darban

Abstract:

In the Magnetically Assisted Chemical Separation (MACS) process, tiny ferromagnetic particles coated with a solvent extractant are used to selectively separate radionuclides and hazardous metals from aqueous waste streams; the contaminant-loaded particles are then recovered from the waste solutions using a magnetic field. In the present study, magnetic particles coated with Cyanex272 or C272 (bis(2,4,4-trimethylpentyl) phosphinic acid) are evaluated for possible application in the extraction of Thorium (IV) from nuclear waste streams. The uptake behaviour of Th(IV) from nitric acid solutions was investigated by batch studies: adsorption isotherm and kinetic studies of Thorium (IV) onto the Cyanex272-coated nanoparticles were carried out in a batch system. The factors influencing Thorium (IV) adsorption were investigated and described in detail as functions of parameters such as the initial pH value, contact time, adsorbent mass, and initial Thorium (IV) concentration. The MACS-process adsorbent showed the best results for fast adsorption of Th(IV) from aqueous solution at an aqueous-phase acidity of 0.5 molar. In addition, more than 80% of the Th(IV) was removed within the first 2 hours, and the time required to achieve adsorption equilibrium was only 140 minutes. The Langmuir and Freundlich adsorption models were used for the mathematical description of the adsorption equilibrium. The equilibrium data agreed very well with the Langmuir model, with a maximum adsorption capacity of 48 mg.g-1. The adsorption kinetics data were tested using pseudo-first-order, pseudo-second-order, and intra-particle diffusion models. The kinetic studies showed that the adsorption followed a pseudo-second-order kinetic model, indicating that chemical adsorption was the rate-limiting step.
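
A short sketch of fitting the pseudo-second-order model q(t) = k2*qe^2*t / (1 + k2*qe*t) by non-linear regression; the time/uptake points below are hypothetical stand-ins shaped to the reported equilibrium (about 140 min and 48 mg/g), not the measured data.

import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k2):
    """Pseudo-second-order uptake curve."""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

t = np.array([10, 30, 60, 90, 120, 140], float)   # contact time (min), hypothetical
q = np.array([18, 31, 40, 44, 46, 47], float)     # uptake (mg/g), hypothetical
(qe, k2), _ = curve_fit(pso, t, q, p0=(48, 1e-3))
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.2e} g/(mg min)")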

Keywords: Thorium (IV) adsorption, MACS process, magnetic nanoparticles, Cyanex272

Procedia PDF Downloads 314
21675 Exchange Rate Forecasting by Econometric Models

Authors: Zahid Ahmad, Nosheen Imran, Nauman Ali, Farah Amir

Abstract:

The objective of the study is to forecast the US Dollar/Pak Rupee exchange rate using time series models. For this purpose, daily US Dollar-Pakistani Rupee exchange rates for the period January 1, 2007 - June 2, 2017, are employed. The data set is divided into in-sample and out-of-sample portions: the in-sample data are used to estimate the models, whereas the out-of-sample data are used to evaluate the exchange rate forecasts. The ADF test and PP test are used to make the time series stationary. To forecast the exchange rate, ARIMA and GARCH models are applied. Among the different Autoregressive Integrated Moving Average (ARIMA) models, the best model is selected on the basis of selection criteria; due to volatility clustering and the ARCH effect, a GARCH(1,1) model is also applied. The results of the analysis showed that ARIMA(0,1,1) and GARCH(1,1) are the most suitable models to forecast the future exchange rate. Furthermore, the GARCH(1,1) model captured the volatility, with non-constant conditional variance in the exchange rate, and showed good forecasting performance. This study is useful for researchers, policymakers, and businesses in making decisions through accurate and timely forecasting of the exchange rate, and helps them in devising their policies.
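
A compact sketch of this model pair using the statsmodels and arch packages; the series below is a simulated random walk standing in for the PKR/USD rate, so the fitted numbers mean nothing beyond illustrating the workflow.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(0)
# simulated stand-in for roughly ten years of daily PKR/USD rates
log_rate = np.log(60.0) + np.cumsum(rng.normal(0, 0.004, 2600))
rate = pd.Series(np.exp(log_rate))

arima = ARIMA(np.log(rate), order=(0, 1, 1)).fit()      # ARIMA(0,1,1) on log rate
print(arima.forecast(steps=5))

returns = 100 * np.log(rate).diff().dropna()
garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
print(garch.forecast(horizon=5).variance.iloc[-1])       # conditional variance path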

Keywords: exchange rate, ARIMA, GARCH, PAK/USD

Procedia PDF Downloads 541
21674 Characterization of Cement Concrete Pavement

Authors: T. B. Anil Kumar, Mallikarjun Hiremath, V. Ramachandra

Abstract:

The present experimental investigation deals with the quality and performance analysis of cement concrete with 0, 15, and 25% fly ash and 0, 0.2, 0.4, and 0.6% polypropylene (PP) fibers by weight of cement. Test parameters such as workability, unit weight, compressive strength, flexural strength, split tensile strength, and abrasion resistance are detailed in the analysis. The compressive strength of the M40 grade concrete attains a higher value with the replacement of cement by 15% fly ash at 0.4% PP after 28 and 56 days of curing. Higher flexural strength was observed with the replacement of cement by 15% fly ash with 0.2% PP after 28 and 56 days of curing. Similarly, the split tensile strength also increases and attains a higher value with the replacement of cement by 15% fly ash with 0.4% PP after 28 and 56 days of curing. The percentage of wear is reduced by 30 to 33% by the addition of fibers at 0.2%, 0.4%, and 0.6% in cement concrete with 15 and 25% fly ash replacement. Hence, it is found that the pavement thickness can be reduced by up to 20% compared with a plain concrete slab for the 15% fly ash mix treated with 0.2% PP fibers, which also reduces the surface course cost by up to 27%.

Keywords: cement, fly ash, polypropylene fiber, pavement design, cost analysis

Procedia PDF Downloads 380
21673 Study on Flexible Diaphragm In-Plane Model of Irregular Multi-Storey Industrial Plant

Authors: Cheng-Hao Jiang, Mu-Xuan Tao

Abstract:

The rigid diaphragm model may cause errors in the calculation of internal forces because it neglects the in-plane deformation of the diaphragm. This paper thus studies the effects of different diaphragm in-plane models (an in-plane rigid model and an in-plane flexible model) on the seismic performance of structures. Taking an actual industrial plant as an example, the seismic performance of the structure is predicted using the different floor diaphragm models, and the analysis errors caused by the different in-plane models, including the deformation error and the internal force error, are calculated. Furthermore, the influence of the aspect ratio on the analysis errors is investigated. Finally, the rationality of the code is evaluated by assessing the analysis errors of structural models whose floors were classified as rigid according to the code's criterion. It is found that different floor models may cause great differences in the distribution of structural internal forces, and that the current code may underestimate the influence of the floor in-plane effect.

Keywords: industrial plant, diaphragm, calculating error, code rationality

Procedia PDF Downloads 126
21672 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall records, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in rainfall (RF) intensity at upstream stations can lead to the formation of floods; the initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed in early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. According to the results of the applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
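
A toy sketch of this kind of early-warning pipeline: lagged upstream rainfall drives an MLP forecast of hourly flow, which is then mapped to severity bands. The synthetic data, lag choice, and thresholds are all assumptions for illustration, not the Selangor model.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 2000                                   # hours of synthetic record
rain = rng.gamma(0.3, 4.0, size=(T, 2))    # two upstream rainfall stations
lag = 6                                    # assumed catchment lag time (h)
q = 20 + 0.8 * rain[:-lag, 0] + 0.6 * rain[:-lag, 1] + rng.normal(0, 1, T - lag)

X = rain[:-lag]                            # rainfall now -> flow lag hours later
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], q[:1500])
q_hat = model.predict(X[1500:])

def severity(flow, alert=28.0, warning=33.0, danger=38.0):
    """Map a forecast flow to a warning level (thresholds are illustrative)."""
    return ("danger" if flow >= danger else
            "warning" if flow >= warning else
            "alert" if flow >= alert else "normal")

print([severity(v) for v in q_hat[:10]])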

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 228
21671 Three Dimensional Computational Fluid Dynamics Simulation of Wall Condensation inside Inclined Tubes

Authors: Amirhosein Moonesi Shabestary, Eckhard Krepper, Dirk Lucas

Abstract:

The current PhD project comprises CFD modeling and simulation of condensation and heat transfer inside horizontal pipes. Condensation plays an important role in the emergency cooling systems of reactors, which consist of inclined horizontal pipes immersed in a tank of subcooled water. In the case of an accident, the water level in the core decreases, steam enters the emergency pipes, and, due to the subcooled water around the pipes, this steam starts to condense. These horizontal pipes act as a strong heat sink, responsible for a quick depressurization of the reactor core when an accident happens. This project was defined in order to model all of the processes occurring in such emergency cooling systems. The main focus of the project is on the detection of different morphologies, such as annular flow, stratified flow, slug flow, and plug flow. This ongoing project started one year ago in the Fluid Dynamics department of Helmholtz-Zentrum Dresden-Rossendorf (HZDR). At HZDR, mostly in cooperation with ANSYS, different models have been developed for modeling multiphase flows. The inhomogeneous MUSIG model considers the bubble size distribution and is used for modeling the small-scale dispersed gas phase. The AIAD (Algebraic Interfacial Area Density) model was developed for the detection of the local morphology and the corresponding switching between morphologies. The recent GENTOP model combines both concepts: GENTOP is able to simulate co-existing large-scale (continuous) and small-scale (polydispersed) structures. All of these models have been validated for adiabatic cases without any phase change. Therefore, the starting point of the current PhD project is to use the available models and to integrate phase transition and wall-condensation models into them. In order to simplify the treatment of condensation inside horizontal tubes, three steps have been defined. The first step is the investigation of condensation inside a horizontal tube considering only direct contact condensation (DCC) and neglecting wall condensation; the inlet of the pipe is therefore considered to be annular flow, and the AIAD model is used to detect the interface. The second step is the extension of the model to consider wall condensation as well, which is closer to reality; here the inlet is pure steam, and due to wall condensation a liquid film forms near the wall, leading to annular flow. The last step will be the modeling of the different morphologies occurring inside the tube during condensation using the GENTOP model, with which the dispersed phase can be considered and simulated. Finally, the simulation results will be validated against experimental data, which will also become available at HZDR.

Keywords: wall condensation, direct contact condensation, AIAD model, morphology detection

Procedia PDF Downloads 278
21670 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase in credit card usage, the volume of credit card misuse has also significantly increased, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified, and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static physical characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they do), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable, and scalable models of credit card fraud detection.

Keywords: credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey

Procedia PDF Downloads 94
21669 Probing Language Models for Multiple Linguistic Information

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, large-scale pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. The word vectors produced by these language models can be viewed as dense encoded representations of natural language in text form; however, it is unknown how much linguistic information is encoded, and how. In this paper, we construct several corresponding probing tasks for multiple kinds of linguistic information to clarify the encoding capabilities of different language models, and we present the results visually. We first obtain word representations in vector form from different language models, including BERT, ELMo, RoBERTa, and GPT. Classifiers with a small number of parameters, together with unsupervised tasks, are then applied to these word vectors to discriminate their capability to encode the corresponding linguistic information. The constructed probing tasks cover both semantic and syntactic aspects: the semantic aspect includes the ability of the model to understand semantic entities such as numbers, time, and characters, and the syntactic aspect includes the ability of the language model to understand grammatical structures such as dependency relationships and reference relationships. We also compare the encoding capabilities of different layers of the same language model to infer how linguistic information is encoded in the model.
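
A minimal sketch of one probing step, assuming Hugging Face's transformers library for the encoder and a small logistic-regression classifier as the probe; the toy task (does the token denote a number?) and the tiny label set are placeholders for the paper's actual probing tasks.

import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

words = ["seven", "dog", "1984", "blue", "forty", "table"]
labels = [1, 0, 1, 0, 1, 0]         # toy "is a number" probing labels

feats = []
layer = 8                           # probe one intermediate layer
with torch.no_grad():
    for w in words:
        out = enc(**tok(w, return_tensors="pt"))
        h = out.hidden_states[layer][0, 1]   # first word-piece embedding
        feats.append(h.numpy())

probe = LogisticRegression(max_iter=1000).fit(np.array(feats), labels)
print(probe.score(np.array(feats), labels))  # probe has a small parameter count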

Keywords: language models, probing task, text presentation, linguistic information

Procedia PDF Downloads 84
21668 Radiation Emission from Ultra-Relativistic Plasma Electrons in Short-Pulse Laser Light Interactions

Authors: R. Ondarza-Rovira, T. J. M. Boyd

Abstract:

Intense femtosecond laser light incident on over-critical density plasmas has been shown to emit a prolific number of high-order harmonics of the driver frequency, with spectra characterized by power-law decays Pm ~ m^-p, where m denotes the harmonic order and p the spectral decay index. When the laser pulse is p-polarized, plasma effects modify the harmonic spectrum, weakening the so-called universal decay with p = 8/3 to p = 5/3 or below. In this work, appeal is made to a single-particle radiation model in support of the predictions of particle-in-cell (PIC) simulations. Using this numerical technique, we further show that the emission radiated by electrons that are relativistically accelerated by the laser field inside the plasma after being expelled into vacuum, the so-called Brunel electrons, is characterized not only by the plasma line but also by ultraviolet harmonic orders described by the 5/3 decay index. Results obtained from these simulations suggest that for ultra-relativistic light intensities the spectral decay index is further reduced, with p now in the range 2/3 ≤ p ≤ 4/3. This reduction is indicative of a transition from a regime in which Brunel-induced plasma radiation influences the spectrum to one dominated by bremsstrahlung emission from the Brunel electrons.
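
For readers estimating such decay indices from simulated spectra, a straight-line fit in log-log coordinates is the usual quick check. The sketch below recovers p from a synthetic power-law spectrum with noise; the exponent 5/3 is taken from the text, and everything else is illustrative.

import numpy as np

rng = np.random.default_rng(0)
m = np.arange(5, 60)                                   # harmonic orders
p_true = 5.0 / 3.0
P = m**(-p_true) * np.exp(rng.normal(0, 0.1, m.size))  # noisy P_m ~ m^-p

slope, intercept = np.polyfit(np.log(m), np.log(P), 1)
print(f"estimated decay index p = {-slope:.3f}")       # close to 1.667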

Keywords: ultra-relativistic, laser-plasma interactions, high-order harmonic emission, radiation, spectrum

Procedia PDF Downloads 452
21667 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are at present employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data regarding the time leading up to an event when censored cases exist, whereas the logistic regression model is mostly applicable when the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). The arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data whose source is the SPSS exercise data set on breast cancer, with a sample size of 1121 women, where the main objective is to show the difference in application between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some of the analysis (e.g., on lymph node status) was done manually, and the SPSS software was used to analyse the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used to analyse data that also include the follow-up time, whereas the logistic regression model analyses data without follow-up time. They also have different measures of association: the hazard ratio for the Cox model and the odds ratio for the logistic regression model. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. The two models can be applied in many other research settings since they are suitable methods for analysing data, but the more recommended one is the Cox regression model.
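
The contrast can be made concrete in a few lines: with follow-up time and censoring, a Cox model yields hazard ratios, while collapsing the outcome to died/survived yields odds ratios from logistic regression. The sketch below uses the lifelines and scikit-learn packages on simulated data, not the SPSS breast cancer set.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
nodes = rng.poisson(3, n)                       # simulated lymph node counts
hazard = 0.05 * np.exp(0.15 * nodes)            # assumed true effect per node
time = rng.exponential(1 / hazard)
observed = time < 24                            # administrative censoring at 24 months
df = pd.DataFrame({"nodes": nodes,
                   "time": np.minimum(time, 24),
                   "event": observed.astype(int)})

cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("hazard ratio per node:", float(np.exp(cox.params_["nodes"])))

logit = LogisticRegression().fit(df[["nodes"]], df["event"])  # ignores follow-up time
print("odds ratio per node:", float(np.exp(logit.coef_[0, 0])))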

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 435
21666 Data-Driven Analysis of Velocity Gradient Dynamics Using Neural Network

Authors: Nishant Parashar, Sawan S. Sinha, Balaji Srinivasan

Abstract:

We investigate the unclosed terms in the evolution equation of the velocity gradient tensor (VGT) in compressible decaying turbulent flow. Velocity gradients in a compressible turbulent flow field influence several important nonlinear turbulent processes, such as cascading and intermittency. In an attempt to understand the dynamics of velocity gradients, various researchers have tried to model the unclosed terms in the evolution equation of the VGT, but the existing models for these unclosed terms have limited applicability. This is mainly attributable to the complex structure of the higher-order gradient terms appearing in the evolution equation of the VGT. We investigate these higher-order gradients using data from direct numerical simulation (DNS) of compressible decaying isotropic turbulent flow. The gas kinetic method, aided by weighted essentially non-oscillatory (WENO) based flow reconstruction, is employed to generate the DNS data. By applying a neural network to the DNS data, we map the structure of the unclosed higher-order gradient terms in the evolution equation of the VGT to the VGT itself. We validate our findings by performing an alignment-based study of the unclosed higher-order gradient terms obtained from the neural network against the strain-rate eigenvectors.

Keywords: compressible turbulence, neural network, velocity gradient tensor, direct numerical simulation

Procedia PDF Downloads 151
21665 Removal of Heavy Metal from Wastewater using Bio-Adsorbent

Authors: Rakesh Namdeti

Abstract:

The liquid waste - wastewater - is essentially the water supply of a community after it has been used in a variety of applications. In recent years, heavy metal concentrations, besides other pollutants, have increased to levels dangerous to the living environment in many regions. Among the heavy metals, lead has the most damaging effects on human health; it can enter the human body through the uptake of food (65%), water (20%), and air (15%). Against this background, a low-cost and easily available biosorbent was used and is reported in this study. The scope of the present study is to remove lead from aqueous solution using Olea europaea resin as the biosorbent. The results showed that the Olea europaea resin biosorbent has a high biosorption capacity for lead removal. The Langmuir, Freundlich, Tempkin, and Dubinin-Radushkevich (D-R) models were used to describe the biosorption equilibrium of lead on the Olea europaea resin biosorbent, and the biosorption followed the Langmuir isotherm. The kinetic models showed that the pseudo-second-order rate expression represented the biosorption data well.
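
A brief sketch of how such isotherm comparisons are typically run, fitting the Langmuir model qe = qm*K*Ce/(1 + K*Ce) and the Freundlich model qe = Kf*Ce^(1/nf) to equilibrium points and comparing R²; the data below are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([2, 5, 10, 20, 40, 80], float)    # equilibrium conc. (mg/L), invented
qe = np.array([8, 15, 22, 28, 33, 36], float)   # uptake (mg/g), invented

langmuir = lambda C, qm, K: qm * K * C / (1 + K * C)
freundlich = lambda C, Kf, nf: Kf * C**(1 / nf)

def r2(model, popt):
    resid = qe - model(Ce, *popt)
    return 1 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)

for name, model, p0 in [("Langmuir", langmuir, (40, 0.1)),
                        ("Freundlich", freundlich, (5, 2))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0)
    print(f"{name}: params={np.round(popt, 3)}, R^2={r2(model, popt):.3f}")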

Keywords: novel biosorbent, central composite design, lead, isotherms, kinetics

Procedia PDF Downloads 50
21664 Analysis of Moving Loads on Bridges Using Surrogate Models

Authors: Susmita Panda, Arnab Banerjee, Ajinkya Baxy, Bappaditya Manna

Abstract:

The design of short-to-medium-span high-speed bridges in critical locations is an essential aspect of vehicle-bridge interaction. Due to the dynamic interaction between the moving load and the bridge, mathematical models or finite element computations become time-consuming. Thus, to reduce the computational effort, a universal approximator based on an artificial neural network (ANN) has been used to evaluate the dynamic response of the bridge. Data set generation and the training of the surrogate models were conducted on the results obtained from mathematical modeling. Further, the robustness of the surrogate model was investigated, showing an error of less than 10% relative to conventional methods. Additionally, the dependency of the dynamic response of the bridge on various load and bridge parameters is highlighted through a parametric study.
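
A toy version of the surrogate idea: a small MLP is trained to reproduce a cheap closed-form stand-in for the bridge response (here the static midspan deflection of a simply supported beam under a point load at position a), then queried instead of the model. All values are illustrative; a real surrogate would be trained on mode-superposition results.

import numpy as np
from sklearn.neural_network import MLPRegressor

E_I, L, P = 2.0e9, 30.0, 1.0e5     # flexural rigidity (N m^2), span (m), load (N)

def midspan_deflection_mm(a):
    """Static midspan deflection (mm), point load at distance a from a support."""
    a = np.minimum(a, L - a)       # symmetry for loads past midspan
    return 1e3 * P * a * (3 * L**2 - 4 * a**2) / (48 * E_I)

rng = np.random.default_rng(0)
a_train = rng.uniform(0, L, 500)[:, None]
y_train = midspan_deflection_mm(a_train[:, 0])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(a_train, y_train)

a_test = np.linspace(1, L - 1, 10)[:, None]
err = np.abs(surrogate.predict(a_test) - midspan_deflection_mm(a_test[:, 0]))
print("max abs error (mm):", err.max())   # surrogate replaces the costly model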

Keywords: artificial neural network, mode superposition method, moving load analysis, surrogate models

Procedia PDF Downloads 84
21663 Structural Assessment of Low-Rise Reinforced Concrete Frames under Tsunami Loads

Authors: Hussain Jiffry, Kypros Pilakoutas, Reyes Garcia Lopez

Abstract:

This study examines the effect of tsunami loads on reinforced concrete (RC) frame buildings analytically. The impact of tsunami wave loads and waterborne objects is analyzed using a typical substandard full-scale two-story RC frame building tested as part of the EU-funded Ecoleader project. The building was subjected to shake table tests in the bare condition, subsequently strengthened using Carbon Fiber Reinforced Polymer (CFRP) composites, and retested. Numerical models of the building in both the bare and CFRP-strengthened conditions were calibrated in the DRAIN-3DX software to match the test results. To investigate the response to wave loads and impact forces, the numerical models are subjected to nonlinear dynamic analyses using force-time history input records. The analytical results are compared in terms of the displacements at the floors and at the 'impact point' of a boat. The results show that the roof displacement of the CFRP-strengthened building was reduced by 63% compared to the bare building. The results also indicate that strengthening only the mid-height of the impact column using CFRP is more efficient at reducing damage than strengthening other parts of the column. Alternative solutions to mitigate damage due to tsunami loads are also suggested.

Keywords: tsunami loads, hydrodynamic load, impact load, waterborne objects, RC buildings

Procedia PDF Downloads 443
21662 Using Confirmatory Factor Analysis to Test the Dimensional Structure of Tourism Service Quality

Authors: Ibrahim A. Elshaer, Alaa M. Shaker

Abstract:

Several previous empirical studies have operationalized service quality as either a multidimensional or a unidimensional construct. While a few earlier studies investigated some aspects of the assumed dimensional structure of service quality, no study has been found that tested the construct's dimensionality using confirmatory factor analysis (CFA). To gain better insight into the dimensional structure of the service quality construct, this paper tests its dimensionality using three CFA models (a higher-order factor model, an oblique factor model, and a one-factor model) on a set of data collected from 390 British tourists who visited Egypt. The results of the three model tests indicate that the service quality construct is multidimensional. This result helps resolve the problems that might arise from a lack of clarity concerning the dimensional structure of service quality, since without testing the dimensional structure of a measure, researchers cannot assume that a significant correlation is the result of factors measuring the same construct.

Keywords: service quality, dimensionality, confirmatory factor analysis, Egypt

Procedia PDF Downloads 571