Search results for: seismic robustness
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1419

129 Physical Planning Trajectories for Disaster Mitigation and Preparedness in Coastal and Seismic Regions: Capital Region of Andhra Pradesh, Vijayawada in India

Authors: Timma Reddy, Srikonda Ramesh

Abstract:

India has traditionally been vulnerable to natural disasters such as floods, droughts, cyclones, earthquakes, and landslides, and these have been a recurrent phenomenon over the last five decades. Surveys indicate that about 60% of the landmass is prone to earthquakes of various intensities, over 40 million hectares are prone to floods, about 8% of the total area is prone to cyclones, and 68% of the area is susceptible to drought. Climate change is likely to be perceived through the experience of extreme weather events. There is growing societal concern about climate change, given the potential impacts of associated natural hazards such as cyclones, flooding, earthquakes, and landslides; hence it is essential to strengthen our settlements to respond to such calamities. The focus of this research paper is therefore to analyze effective planning strategies/mechanisms for integrating disaster mitigation measures in coastal regions in general and in the Capital Region of Andhra Pradesh in particular. The basic hypothesis is that appropriate spatial planning considerations would facilitate an organized way of protecting life and property from natural disasters, and further that integrating infrastructure planning with conscious direction would provide effective mitigation measures. Vijayawada city has been planned and analyzed with conscious land use planning with reference to a space syntax trajectory, in accordance with the required social infrastructure such as health facilities, institutional areas, and recreational and other open spaces. It has been identified that, with geographically ideal locations selected with reference to population densities using GIS tools, preparedness strategies can be effectively integrated to protect life and property by reducing the damage/impact of natural disasters in general and of earthquakes, cyclones, and floods in particular.

Keywords: modular, trajectories, social infrastructure, evidence based syntax, drills and equipment, GIS, geographical micro zoning, high resolution satellite image

Procedia PDF Downloads 192
128 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never be a prognostic model that predicts battery state-of-health and anticipates safety risks before they occur. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles, where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here. The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
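
For context, the RC equivalent-circuit baseline mentioned above can be stated very compactly. The following is a minimal sketch of a first-order RC (Thevenin) cell model under constant-current discharge; the capacity, resistances, and OCV curve are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

# Minimal first-order RC (Thevenin) cell model: V = OCV(SOC) - I*R0 - V_rc.
# All parameter values below are illustrative only.
Q = 5.0 * 3600.0                    # cell capacity [As] (5 Ah)
R0, R1, C1 = 0.01, 0.015, 2000.0    # ohmic resistance, RC pair [ohm, ohm, F]
dt = 1.0                            # time step [s]

def ocv(soc):
    """Placeholder open-circuit-voltage curve, linear in SOC."""
    return 3.0 + 1.2 * soc

soc, v_rc = 1.0, 0.0
current = np.full(3600, 5.0)        # 1C constant-current discharge for 1 hour
for i_k in current:
    soc -= i_k * dt / Q                         # Coulomb counting
    v_rc += dt * (i_k / C1 - v_rc / (R1 * C1))  # RC branch ODE (forward Euler)
    v_term = ocv(soc) - i_k * R0 - v_rc         # terminal voltage
print(f"SOC = {soc:.3f}, terminal voltage = {v_term:.3f} V")
```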

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 82
127 Dynamic Response around Inclusions in Infinitely Inhomogeneous Media

Authors: Jinlai Bian, Zailin Yang, Guanxixi Jiang, Xinzhu Li

Abstract:

The problem of elastic wave propagation in inhomogeneous media is a classic one. Earthquakes occur frequently and have caused substantial economic losses and casualties; to help prevent earthquake damage and reduce losses, this paper studies the dynamic response around a circular inclusion in a whole space with an inhomogeneous modulus. The inhomogeneity of the medium is reflected in a shear modulus that varies with spatial position, while the density is constant; this formulation can be used to address problems such as underground buried pipelines. Stress concentration phenomena are common in aerospace and earthquake engineering, and the dynamic stress concentration factor (DSCF) is one of the main factors leading to material damage. One of the important applications of elastodynamic theory is to determine the stress concentration in bodies with discontinuities such as cracks, holes, and inclusions; current methods include the wave function expansion method, the integral transformation method, and the integral equation method. Based on the complex function method, the Helmholtz equation with variable coefficients is standardized using the conformal transformation method and the wave function expansion method, and the displacement and stress fields in the whole space with circular inclusions are solved in the complex coordinate system. The unknown coefficients are determined from the boundary conditions, and the correctness of the method is verified by comparison with existing results. Owing to the suitability of complex variable function theory for conformal transformation, the method can be extended to study inclusions of arbitrary shape. By solving for the dynamic stress concentration factor around the inclusions, the influence of the inhomogeneity parameters of the medium and of the wavenumber ratio between the inclusion and the matrix on the DSCF is analyzed. The results can provide reference values for nondestructive testing (NDT), oil exploration, seismic monitoring, and soil-structure interaction.
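
For orientation, the governing equation in such problems, assuming antiplane (SH) motion, a shear modulus μ(x, y) that varies with position, constant density ρ, and time-harmonic motion of angular frequency ω, takes the generic form below (our notation, not necessarily the paper's exact formulation):

```latex
% Antiplane (SH) motion with position-dependent shear modulus \mu(x,y):
\nabla \cdot \bigl( \mu(x,y)\, \nabla w \bigr) \;=\; \rho \, \frac{\partial^{2} w}{\partial t^{2}} ,
% which, for time-harmonic motion w = W(x,y)\, e^{-i\omega t}, reduces to a
% Helmholtz-type equation with variable coefficients:
\nabla \cdot \bigl( \mu(x,y)\, \nabla W \bigr) \;+\; \rho\,\omega^{2} W \;=\; 0 .
```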

Keywords: circular inclusions, complex variable function, dynamic stress concentration factor (DSCF), inhomogeneous medium

Procedia PDF Downloads 113
126 Variable Renewable Energy Droughts in the Power Sector – A Model-based Analysis and Implications in the European Context

Authors: Martin Kittel, Alexander Roth

Abstract:

The continuous integration of variable renewable energy sources (VRE) in the power sector is required for decarbonizing the European economy. Power sectors become increasingly exposed to weather variability, as the availability of VRE, i.e., mainly wind and solar photovoltaic, is not persistent. Extreme events, e.g., long-lasting periods of scarce VRE availability ('VRE droughts'), challenge the reliability of supply. Properly accounting for the severity of VRE droughts is crucial for designing a resilient renewable European power sector, and energy system modeling is used to identify such a design. Our analysis reveals the sensitivity of the optimal design of the European power sector to VRE droughts. We analyze how VRE droughts impact optimal power sector investments, especially in generation and flexibility capacity. We draw upon work that systematically identifies VRE drought patterns in Europe in terms of frequency, duration, and seasonality, as well as the cross-regional and cross-technological correlation of the most extreme drought periods. Based on that analysis, a selection of relevant historical weather years representing different grades of VRE drought severity is available. These weather years serve as input for the capacity expansion model for the European power sector used in this analysis (DIETER). We additionally conduct robustness checks varying policy-relevant assumptions on capacity expansion limits, interconnections, and the level of sector coupling. Preliminary results illustrate how an imprudent selection of weather years may cause the severity of VRE droughts to be underestimated, flawing modeling insights concerning the need for flexibility; sub-optimal European power sector designs vulnerable to extreme weather can result. Using relevant weather years that appropriately represent extreme weather events, our analysis identifies a resilient design of the European power sector. Although the scope of this work is limited to the European power sector, we are confident that our insights apply to other regions of the world with similar weather patterns. Many energy system studies still rely on one or a limited number of sometimes arbitrarily chosen weather years. We argue that the deliberate selection of relevant weather years is imperative for robust modeling results.
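
To make the notion of a 'VRE drought' concrete: such periods can be flagged in a capacity-factor time series with a simple threshold-and-duration rule. The sketch below is a minimal illustration with synthetic data; the threshold, minimum duration, and the data itself are assumptions for demonstration, not the drought definition used in the underlying study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hourly wind+solar capacity factors for one synthetic year (illustrative data).
cf = np.clip(rng.normal(0.3, 0.15, 8760), 0.0, 1.0)

def vre_droughts(cf, threshold=0.1, min_hours=24):
    """Return (start, length) of runs where the capacity factor stays below threshold."""
    below = cf < threshold
    droughts, start = [], None
    for t, b in enumerate(below):
        if b and start is None:
            start = t
        elif not b and start is not None:
            if t - start >= min_hours:
                droughts.append((start, t - start))
            start = None
    if start is not None and len(cf) - start >= min_hours:
        droughts.append((start, len(cf) - start))
    return droughts

events = vre_droughts(cf)
print(f"{len(events)} drought events; longest = "
      f"{max((l for _, l in events), default=0)} h")
```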

Keywords: energy systems, numerical optimization, variable renewable energy sources, energy drought, flexibility

Procedia PDF Downloads 45
125 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined and the model parameters are determined iteratively. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form, and, on the other hand, the strong stability it achieves for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
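
The second stage rests on the standard ensemble Kalman filter analysis step. Below is a minimal sketch of that update for a generic bed-parameter ensemble; the observation operator, dimensions, and noise level are illustrative placeholders rather than the paper's actual configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """One ensemble Kalman filter analysis step.

    ensemble     : (n_state, n_ens) array of bed-parameter samples
    obs          : (n_obs,) vector of free-surface observations
    obs_operator : function mapping a state vector to predicted observations
    """
    _, n_ens = ensemble.shape
    # Predicted observations for every ensemble member.
    hx = np.column_stack([obs_operator(ensemble[:, j]) for j in range(n_ens)])
    xp = ensemble - ensemble.mean(axis=1, keepdims=True)
    hp = hx - hx.mean(axis=1, keepdims=True)
    # Sample cross- and observation-covariances, then the Kalman gain.
    p_xh = xp @ hp.T / (n_ens - 1)
    p_hh = hp @ hp.T / (n_ens - 1) + obs_err_std**2 * np.eye(len(obs))
    gain = p_xh @ np.linalg.inv(p_hh)
    # Update each member against perturbed observations.
    perturbed = obs[:, None] + rng.normal(0.0, obs_err_std, (len(obs), n_ens))
    return ensemble + gain @ (perturbed - hx)

# Toy usage: recover a 3-parameter "bed" observed through a linear operator.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))
truth = np.array([1.0, -0.5, 2.0])
obs = H @ truth + rng.normal(0.0, 0.05, 5)
ens = rng.normal(0.0, 1.0, (3, 50))
for _ in range(5):                  # iterative loops, as in the paper's checks
    ens = enkf_update(ens, obs, lambda s: H @ s, 0.05, rng)
print("posterior mean:", ens.mean(axis=1))
```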

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 106
124 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions

Authors: Varvara Roubtsova, Mohamed Chekired

Abstract:

Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium in which the particles can interact with each other and with water flow under external forces, structural loads, or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d’Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton's laws to each particle. The water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran's capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction with the flow, and so on). This work also includes first attempts to apply micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method is used to solve the system of governing equations; the material behavior equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC, and SPH) is discussed.
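
The DEM core that such a tool is built around is compact: Newton's second law integrated per particle with a contact force law. The sketch below shows a one-dimensional two-grain collision with a linear spring-dashpot contact; the parameters and the 1D setup are illustrative assumptions, not SiGran's actual implementation.

```python
import numpy as np

# Two illustrative grains approaching each other in 1D.
r = np.array([0.01, 0.01])          # radii [m]
m = np.array([1e-3, 1e-3])          # masses [kg]
x = np.array([0.0, 0.025])          # positions [m]
v = np.array([0.5, -0.5])           # velocities [m/s]
kn, cn, dt = 1e5, 0.5, 1e-6         # contact stiffness, damping, time step

for _ in range(20000):
    overlap = (r[0] + r[1]) - (x[1] - x[0])
    f = 0.0
    if overlap > 0:                 # linear spring-dashpot contact force
        f = kn * overlap + cn * (v[0] - v[1])
    forces = np.array([-f, f])      # equal and opposite repulsion
    v += forces / m * dt            # Newton's second law per particle
    x += v * dt

print(f"post-collision velocities: {v[0]:.3f}, {v[1]:.3f} m/s")
```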

Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics

Procedia PDF Downloads 268
123 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics about economic growth have little in common in their preference for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are, for each side and for the other. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part reviews the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance like markets, the rule of law, and federalism are an important part of economic growth. Part four delves into the more nuanced question of how a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether you belong to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 150
122 Maker Education as a Means for Early Entrepreneurial Education: Evaluation Results from a European Pilot Action

Authors: Elisabeth Unterfrauner, Christian Voigt

Abstract:

Since the foundation of the first Fab Lab by the Massachusetts Institute of Technology about 17 years ago, the Maker movement has spread globally with the foundation of maker spaces and Fab Labs worldwide. In these workshops, citizens have access to digital fabrication technologies such as 3D printers and laser cutters to develop and test their own ideas and prototypes, which makes them attractive places for start-up companies. Know-how is shared not only in the physical space but also online in diverse communities. According to the Horizon report, the Maker movement will also have an impact on educational settings in the following years. The European project 'DOIT - Entrepreneurial skills for young social innovators in an open digital world' has incorporated key elements of making to develop an early entrepreneurial education program for children between the ages of six and 16. The Maker pedagogy builds on constructive learning approaches, learning-by-doing principles, learning in collaborative and interdisciplinary teams, and learning through trial and error, where mistakes are acknowledged as learning opportunities. The DOIT program consists of seven consecutive elements. It starts with a motivation phase, where students get motivated by envisioning the scope of their possibilities. The second step is about co-design: students are asked to collect and select potential ideas for innovations. In the co-creation phase, students gather in teams and develop first prototypes of their ideas. In the iteration phase, the prototype is continuously improved, and in the next step, the reflection phase, feedback on the prototypes is exchanged between the teams. In the last two steps, scaling and reaching out, the robustness of the prototype is tested with a bigger group of users outside of the educational setting, and finally students share their projects with a wider public. The DOIT program involves 1,000 children in two pilot phases at 11 pilot sites in ten different European countries. The comprehensive evaluation design is based on a mixed-methods approach with a theoretical backbone in Lackeus' model of entrepreneurship education, which distinguishes between entrepreneurial attitudes, entrepreneurial skills, and entrepreneurial knowledge. A pre-post test with quantitative measures, as well as qualitative data from interviews with facilitators and students and from workshop protocols, will reveal the effectiveness of the program. The evaluation results will be presented at the conference.

Keywords: early entrepreneurial education, Fab Lab, maker education, Maker movement

Procedia PDF Downloads 102
121 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms

Authors: Gaurav Gupta, Jitendra Mahakud

Abstract:

This study focuses on the significance of internal financing constraints in the determination of corporate fixed investment for Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to imperfections in the capital market. The period of study is 1999-2000 to 2013-2014, and we consider 618 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods like the fixed effect and random effect methods are used for the analysis; the Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results confirm the suitability of the fixed effect model for the estimation. The cash flow and liquidity of the company have been used as proxies for the internal financing constraints. In accordance with various theories of corporate investment, we consider other firm-specific variables like firm age, firm size, profitability, sales, and leverage as control variables in the model. From the econometric analysis, we find that internal cash flow and liquidity have a significant and positive impact on corporate investment. Variables like the cost of capital, sales growth, and growth opportunities are found to significantly determine corporate investment in India, which is consistent with the neoclassical, accelerator, and Tobin's q theories of corporate investment. To check the robustness of the results, we divided the sample on the basis of cash flow and liquidity: firms having cash flow greater than zero are put in one group, and firms with cash flow less than zero in another, and the firms are likewise divided on the basis of liquidity. We find that the results are robust for both types of companies, those with positive and those with negative cash flow and liquidity, and the results for the other variables are also in line with those for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. They imply that corporate managers should focus on projects with higher expected cash inflows to avoid financing constraints and should also maintain adequate liquidity to minimize external financing costs.
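
The fixed-effect estimation underlying such results amounts to the within transformation: demeaning each variable by firm and running OLS on the demeaned data. Below is a minimal sketch with synthetic data standing in for the PROWESS panel; the coefficient values and panel dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_years = 100, 15
firm = np.repeat(np.arange(n_firms), n_years)

# Synthetic panel: investment depends on cash flow plus a firm fixed effect
# that is correlated with cash flow (which biases pooled OLS, not FE).
alpha = rng.normal(0, 1, n_firms)[firm]
cash_flow = rng.normal(0, 1, n_firms * n_years) + 0.5 * alpha
invest = 0.4 * cash_flow + alpha + rng.normal(0, 0.5, n_firms * n_years)

def within(v, groups):
    """Demean a variable by group (the fixed-effects transformation)."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

y, x = within(invest, firm), within(cash_flow, firm)
beta = (x @ y) / (x @ x)            # OLS on demeaned data = FE estimator
print(f"FE estimate of cash-flow sensitivity: {beta:.3f} (true 0.4)")
```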

Keywords: cash flow, corporate investment, financing constraints, panel data method

Procedia PDF Downloads 214
120 Outwrestling Cataclysmic Tsunamis at Hilo, Hawaii: Using Technical Developments of the Past 50 Years to Improve Performance

Authors: Mark White

Abstract:

The best practices for owners and urban planners to manage tsunami risk have evolved during the last fifty years, and related technical advances have created opportunities for them to obtain better performance than in earlier cataclysmic tsunami inundations. This basic pattern is illustrated at Hilo Bay, the waterfront area of Hilo, Hawaii, an urban seaport which faces the most severe tsunami hazard of the Hawaiian archipelago. Since April 1, 1946, Hilo Bay has endured tsunami waves with maximum water heights exceeding 2.5 meters following four severe earthquakes: Unimak Island (Mw 8.6, 6.1 m) in 1946; Valdivia (Mw 9.5, the largest earthquake of the 20th century, 10.6 m) in 1960; Prince William Sound (Mw 9.2, 3.8 m) in 1964; and Kalapana (Mw 7.7, the largest earthquake in Hawaii since 1868, 2.6 m) in 1975. Ignoring numerous smaller tsunamis during the same time frame, these four cataclysmic tsunamis have caused property losses in Hilo exceeding $1.25 billion and more than 150 deaths. It is reasonable to foresee another cataclysmic tsunami inundating the urban core of Hilo in the next 50 years, which, if unchecked, could cause additional deaths and losses in the hundreds of millions of dollars. Urban planners and individual owners are now in a position to reduce these losses in the next foreseeable tsunami that generates maximum water heights between 2.5 and 10 meters in Hilo Bay. Since 1946, Hilo planners and individual owners have already created buffer zones between the shoreline and the historic downtown area. As these stakeholders make inevitable improvements to the built environment along and adjacent to the shoreline, they should incorporate new methods for better managing the obvious tsunami risk at Hilo. At the planning level, new manmade landforms, such as tsunami parks and inundation reservoirs, should be developed. Individual owners should require their design professionals to include sacrificial seismic and tsunami fuses that will perform well in foreseeable severe events and that can be easily repaired in the immediate aftermath. These investments before the next cataclysmic tsunami at Hilo will yield substantial reductions in property losses and fatalities.

Keywords: Hilo, tsunami parks, reservoirs, fuse systems, risk management

Procedia PDF Downloads 138
119 Defining Unconventional Hydrocarbon Parameter Using Shale Play Concept

Authors: Rudi Ryacudu, Edi Artono, Gema Wahyudi Purnama

Abstract:

Oil and gas consumption in Indonesia is currently on the rise due to the nation's economic improvement. Unfortunately, Indonesia's domestic oil production cannot meet its own consumption, and Indonesia has lost its status as an oil and gas exporter. Even worse, its conventional oil and gas reserves are declining. Unwilling to give up, the government of Indonesia has taken measures to invite investors to invest in domestic oil and gas exploration to find new potential reserves and ultimately increase production, but this has not yet borne fruit. Indonesia is now taking steps to explore new unconventional oil and gas plays, including shale gas, shale oil, and tight sands, to increase domestic production. These new plays require definite parameters to differentiate each concept. The purpose of this paper is to provide ways of defining unconventional hydrocarbon reservoir parameters for shale gas, shale oil, and tight sands. The parameters would serve as an initial baseline for users performing analyses of unconventional hydrocarbon plays. Some of the ongoing concerns or questions to be answered regarding unconventional hydrocarbon plays include: 1. the TOC (total organic carbon) value; 2. whether the rock has been well 'cooked' to generate hydrocarbon; 3. the permeability and porosity values; 4. whether stimulation is needed; 5. whether pores are present; and 6. whether the thickness is sufficient. In contrast with the common conventional oil and gas play, the shale play concept assumes that hydrocarbon is retained and trapped in an area with very low permeability. In most places in Indonesia, hydrocarbon migrates from source rock to reservoir; from this, it follows that the kitchen and source rock are located right below the reservoir. This is the starting point for the user or engineer to construct a basin definition in relation to the tectonic play and depositional environment. The shale play concept requires the definition, characterization, and identification of the reservoir in order to discover reservoirs that are technically and economically feasible to develop. The steps users and engineers have to perform in a shale play are: a. calculate TOC and perform mineralogical analysis using water saturation and porosity values; b. reconstruct the basin that accumulates hydrocarbon; c. calculate the brittleness index from petrophysics and distribute it based on seismic multi-attributes; d. perform integrated natural fracture analysis; e. select the best location to place a well.
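
For the TOC step (a), one widely used log-based estimate is the Passey ΔlogR overlay method. The sketch below applies the commonly quoted Passey (1990) constants; the log readings, baselines, and maturity level are illustrative assumptions, not values from this paper.

```python
import numpy as np

def toc_passey(resistivity, sonic, r_base, dt_base, lom):
    """TOC [wt%] from the Passey (1990) Delta-log-R overlay method.

    resistivity [ohm.m], sonic [us/ft]; r_base/dt_base are baseline readings
    in a non-source interval; lom is the level of organic maturity (LOM).
    """
    dlogr = np.log10(resistivity / r_base) + 0.02 * (sonic - dt_base)
    return dlogr * 10.0 ** (2.297 - 0.1688 * lom)

# Illustrative shale-interval readings (not data from the paper).
res = np.array([20.0, 35.0, 60.0])    # deep resistivity
son = np.array([95.0, 100.0, 105.0])  # sonic transit time
print(toc_passey(res, son, r_base=5.0, dt_base=80.0, lom=9.0))
```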

Keywords: unconventional hydrocarbon, shale gas, shale oil, tight sand reservoir parameters, shale play

Procedia PDF Downloads 377
118 Influence of Strike-Slip Faulting in the Tectonic Evolution of North-Eastern Tunisia

Authors: Aymen Arfaoui, Abdelkader Soumaya, Ali Kadri, Noureddine Ben Ayed

Abstract:

The major contractional events, characterized by strike-slip faulting, folding, and thrusting, occurred in the Eocene, Late Miocene, and Quaternary along the NE Tunisian domain between Bou Kornine-Ressas-Messella and the Cap Bon Peninsula. During the Plio-Quaternary, the Grombalia and Mornag grabens show maximum collapse parallel to the NNW-SSE SHmax direction and developed as third-order extensive regions within a regional compressional regime. Using available tectonic and geophysical data supplemented by new fault-kinematic observations, we show that Cenozoic deformations are dominated by the reactivation of first-order N-S faults; this sinistral wrench system is responsible for the formation of strike-slip duplexes, thrusts, folds, and grabens. Based on our new structural interpretation, the major faults of the N-S Axis, Bou Kornine-Ressas-Messella (MRB), and Hammamet-Korbous (HK) form an N-S first-order restraining stepover within a left-lateral strike-slip duplex. The N-S master MRB fault is dominated by contractional imbricate fans, while the parallel HK fault is characterized by a trailing of extensional imbricate fans. The Eocene and Miocene compression phases in the study area caused sinistral strike-slip reactivation of pre-existing N-S faults, reverse reactivation of NE-SW trending faults, and normal-oblique reactivation of NW-SE faults, creating a NE-SW to N-S trending system of east-verging folds and overlaps. Seismic tomography images reveal a key role, in the development of the MRB and HK relay zone, for the lithospheric subvertical tear or STEP (Slab Transfer Edge Propagator) fault evidenced below this region. The presence of extensive syntectonic Pliocene sequences above this crustal-scale fault may be the result of recent lithospheric vertical motion on this STEP fault due to the rollback and lateral eastward migration of the Calabrian slab.

Keywords: Tunisia, strike-slip fault, contractional duplex, tectonic stress, restraining stepover, STEP fault

Procedia PDF Downloads 102
117 Energy Content and Spectral Energy Representation of Wave Propagation in a Granular Chain

Authors: Rohit Shrivastava, Stefan Luding

Abstract:

A mechanical wave is the propagation of vibration with a transfer of energy and momentum. Studying the energy and spectral-energy characteristics of a wave propagating through disordered granular media can assist in understanding the overall properties of wave propagation through inhomogeneous materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral, or gas exploration (seismic prospecting) and at non-destructive testing for the study of the internal structure of solids. Studying the energy content (kinetic, potential, and total energy) of a pulse propagating through an idealized one-dimensional discrete particle system, such as a mass-disordered granular chain, can assist in understanding energy attenuation due to disorder as a function of propagation distance. Spectral analysis of the energy signal can assist in understanding dispersion as well as attenuation due to scattering at different frequencies (scattering attenuation). The choice of a one-dimensional granular chain also helps in studying only the P-wave attributes of the wave, removing the influence of shear or rotational waves. Granular chains with different mass distributions have been studied by randomly selecting masses from normal, binary, and uniform distributions; the standard deviation of the distribution is taken as the disorder parameter, with a higher standard deviation meaning higher disorder. For obtaining macroscopic/continuum properties, ensemble averaging has been used. Interpreting information from a total-energy signal turned out to be much easier than from displacement, velocity, or acceleration signals of the wave, indicating a better analysis method for wave propagation through granular materials. Increasing disorder leads to faster attenuation of the signal and decreases the energy of the transmitted higher-frequency signals, but at the same time the energy of spatially localized high frequencies also increases. An ordered granular chain exhibits ballistic propagation of energy, whereas a disordered granular chain exhibits diffusive-like propagation, which eventually becomes localized at long times.
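
A minimal sketch of one realization of such an experiment: a mass-disordered chain coupled by linear springs, excited by a velocity pulse at one end, with the kinetic, potential, and total energies computed at the end. Linear springs, the parameter values, and the normal mass distribution are illustrative simplifications of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, dt, steps = 200, 1.0, 1e-3, 50000
m = rng.normal(1.0, 0.3, n).clip(0.2)  # mass disorder: std = disorder parameter
u = np.zeros(n)                         # displacements
v = np.zeros(n); v[0] = 1.0             # initial velocity pulse at one end

for _ in range(steps):
    ext = np.diff(u)                    # spring extensions, shape (n-1,)
    f = np.zeros(n)                     # force from the two neighbouring springs
    f[:-1] += k * ext
    f[1:] -= k * ext
    v += f / m * dt                     # semi-implicit Euler integration
    u += v * dt

kinetic = 0.5 * np.sum(m * v**2)
potential = 0.5 * k * np.sum(np.diff(u)**2)
print(f"E_kin={kinetic:.4f}  E_pot={potential:.4f}  total={kinetic+potential:.4f}")
```

Averaging such total-energy signals over many mass realizations, and taking their spectra, gives the ensemble quantities the abstract refers to.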

Keywords: discrete elements, energy attenuation, mass disorder, granular chain, spectral energy, wave propagation

Procedia PDF Downloads 261
116 Numerical Simulation of Encased Composite Column Bases Subjected to Cyclic Loading

Authors: Eman Ismail, Adnan Masri

Abstract:

Energy dissipation in ductile moment frames occurs mainly through plastic hinge rotations in the members (beams and columns). Generally, plastic hinge locations are pre-determined and limited to the beam ends, while columns are designed to remain elastic in order to avoid premature instability (aka story mechanisms), with the exception of column bases, where a base is 'fixed' in order to provide higher stiffness and stability and to form a plastic hinge. Plastic hinging at steel column bases in ductile moment frames using conventional base connection details is accompanied by several complications (thicker and heavily stiffened connections, larger embedment depths, a thicker foundation to accommodate anchor rod embedment, etc.). An encased composite base connection is proposed in which a segment of the column, from the base up to a certain point along its height, is encased in reinforced concrete, with headed shear studs welded to the column flanges used to connect the column to the concrete encasement. When the connection is flexurally loaded, stresses are transferred to the reinforced concrete encasement through the headed shear studs, and thereby transferred to the foundation by reinforced concrete mechanics, while axial column forces are transferred through the base-plate assembly. Horizontal base reactions are expected to be transferred by the direct bearing of the outer and inner faces of the flanges; however, investigation of this mechanism is not within the scope of this research. The inelastic and cyclic behavior of the connection will be investigated by subjecting it to reversed cyclic loading, and rotational ductility will be observed for yielding mechanisms in which yielding occurs as flexural yielding in the beam-column, shear yielding in the headed studs, and flexural yielding of the reinforced concrete encasement. The findings of this research show that the connection is capable of achieving satisfactory levels of ductility in certain conditions, given proper detailing and proportioning of elements.

Keywords: seismic design, plastic mechanisms, steel structure, moment frame, composite construction

Procedia PDF Downloads 103
115 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with a kernel; it plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples may be linearly or nonlinearly separable, and different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels, and the effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The method effectively resolves the effects of outliers and imbalanced and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
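
The core idea, fuzzy membership values down-weighting noisy samples in an SVM with a hyperbolic tangent kernel, can be sketched compactly. The example below uses scikit-learn's sample weighting rather than the paper's Hadoop/MapReduce implementation, and the distance-to-class-center membership function is an illustrative choice, not necessarily the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Fuzzy membership: samples closer to their class center get weights near 1,
# so noisy/outlying samples contribute less to the decision surface.
weights = np.empty(len(y))
for c in np.unique(y):
    idx = np.where(y == c)[0]
    center = X[idx].mean(axis=0)
    d = np.linalg.norm(X[idx] - center, axis=1)
    weights[idx] = 1.0 - d / (d.max() + 1e-12)

clf = SVC(kernel="sigmoid", C=1.0, gamma="scale")  # hyperbolic tangent kernel
clf.fit(X, y, sample_weight=weights)
print(f"training accuracy: {clf.score(X, y):.3f}, "
      f"support vectors per class: {clf.n_support_}")
```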

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 462
114 Part Variation Simulations: An Industrial Case Study with an Experimental Validation

Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru

Abstract:

Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is the shrinkage and warpage of the molded parts. All these geometrical variations may have an adverse effect on the quality of the product, its functionality, cost, and time-to-market. The situation becomes more challenging in the case of intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect the product performance. With the advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process; for our case study, we used the MoldFlow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations of the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in the machine process settings. To perform an accurate product assembly variation simulation, the first step is to perform an individual part variation simulation to render realistic tolerance ranges. In this article, we present a method to simulate part variations arising from input parameter variation during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through an input-parameter-wise sensitivity analysis performed using simulations and experiments; all the results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables. It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of the functional dimensions and geometrical variations. This method will allow us in the future to develop accurate/realistic virtual prototypes based on trusted simulated process variation and, therefore, to increase product quality and potentially decrease the time to market.
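
A minimal sketch of generating the full-factorial run plan over the process parameters named above; the two-level settings are illustrative placeholders, not the study's actual machine settings.

```python
from itertools import product

# Illustrative two-level settings for the process parameters named above.
factors = {
    "packing_pressure_MPa": [60, 80],
    "mold_temp_C":          [40, 60],
    "melt_temp_C":          [220, 240],
    "injection_speed_mm_s": [50, 100],
}

# Full factorial design: every combination of levels (2**4 = 16 runs).
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(f"run {i:2d}: {run}")
```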

Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation

Procedia PDF Downloads 152
113 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
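
A minimal sketch of such an optimizer comparison: a small ReLU CNN trained on CIFAR-10 for one epoch under each optimizer. The architecture, learning rates, and single-epoch budget are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

train = torchvision.datasets.CIFAR10(".", train=True, download=True,
                                     transform=T.ToTensor())
loader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)

def make_model():
    # Small CNN with ReLU activations; 32x32 -> 16x16 -> 8x8 feature maps.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(64 * 8 * 8, 10),
    )

optimizers = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "Adam":     lambda p: torch.optim.Adam(p, lr=1e-3),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=1e-3),
    "Adagrad":  lambda p: torch.optim.Adagrad(p, lr=1e-2),
    "Adadelta": lambda p: torch.optim.Adadelta(p, lr=1.0),
}

loss_fn = nn.CrossEntropyLoss()
for name, make_opt in optimizers.items():
    model = make_model()
    opt = make_opt(model.parameters())
    for x, y in loader:            # one epoch per optimizer (demo budget)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"{name}: final batch loss = {loss.item():.3f}")
```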

Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 63
112 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet

Authors: Wanjiku Karanja

Abstract:

Facebook has monopoly power in the social networking market. It has grown and entrenched its monopoly power through the capture of its users' data value chains. However, antitrust law's consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook's market dominance. These regulatory blind spots are augmented in Facebook's proposed Diem cryptocurrency project and its Novi digital wallet. Novi, which is Diem's digital identity component, will enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook's large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook would collect through Novi would further entrench Facebook's market power. As such, the attendant lock-in effects of this project would be very difficult to reverse, and urgent regulatory action is required to prevent this expansion of Facebook's data resources and monopoly power. This research thus highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook's data capture and control of its users' data value chains on its market power. This inquiry is contextualized against Novi's expansive effect on Facebook's data value chains, and it thereby addresses the novel antitrust issues arising at the nexus of Facebook's monopoly power and the privacy of its users' data. It also explores the impact of platform design principles, specifically data portability and data interoperability, in mitigating Facebook's anti-competitive practices. The study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, its annexure of the consumer data value chain, and its control of its users' social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea for restoring competition in the social networking market; their success depends on the establishment of robust technical standards and regulatory frameworks.

Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook

Procedia PDF Downloads 98
111 Parametric Non-Linear Analysis of Reinforced Concrete Frames with Supplemental Damping Systems

Authors: Daniele Losanno, Giorgio Serino

Abstract:

This paper focuses on the parametric analysis of reinforced concrete structures equipped with supplemental damping braces. Practitioners still lack sufficient data for the current design of damper-added structures and often reduce the real model to a pure damper-braced structure, even though this assumption is neither realistic nor conservative. In the present study, the damping brace is modelled as a linear supporting brace connected in series with the viscous/hysteretic damper. The deformation capacity of existing structures is usually not adequate to undergo the design earthquake. In spite of this, additional dampers could be introduced, strongly limiting structural damage to acceptable values or, in some cases, reducing the frame response to elastic behavior. This work is aimed at providing useful considerations for the retrofit of existing buildings by means of supplemental damping braces. The study explicitly takes into consideration the variability of (a) the relative frame-to-supporting-brace stiffness, (b) the dampers' coefficient (viscous coefficient or yielding force), and (c) the non-linear frame behavior. Non-linear time history analyses have been run to account for both the dampers' behavior and non-linear plastic hinges modelled with the Pivot hysteretic type. Parametric analyses based on previous studies on SDOF and MDOF linear frames provide reference values for nearly optimal damping system design. With respect to the bare frame configuration, the seismic response of the damper-added frame is strongly improved, limiting deformations to acceptable values far below the ultimate capacity. The results of the analysis also demonstrate the beneficial effect of stiffer supporting braces, thus highlighting the inadequacy of simplified pure damper models. At the same time, the effect of a variable damping coefficient and yielding force has to be treated as an optimization problem.
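
For reference, the series brace-damper assembly described above is commonly idealized as a Maxwell element: an elastic brace of stiffness k_b in series with a (possibly fractional-power) viscous damper. In standard notation (ours, not necessarily the paper's), with u the total assembly deformation and u_d the damper deformation, force equilibrium across the two components gives:

```latex
F \;=\; k_b\,\bigl(u - u_d\bigr)
  \;=\; c_d\,\operatorname{sgn}\!\bigl(\dot{u}_d\bigr)\,\bigl|\dot{u}_d\bigr|^{\alpha},
```

with α = 1 recovering the linear viscous case. The finite brace stiffness k_b makes the effective damping of the assembly frequency-dependent, which is why the analysis finds stiffer supporting braces beneficial and pure damper models inadequate.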

Keywords: brace stiffness, dissipative braces, non-linear analysis, plastic hinges, reinforced concrete frames

Procedia PDF Downloads 263
110 An Examination of Earnings Management by Publicly Listed Targets Ahead of Mergers and Acquisitions

Authors: T. Elrazaz

Abstract:

This paper examines accrual and real earnings management by publicly listed targets around mergers and acquisitions. Prior literature shows that earnings management around mergers and acquisitions can have a significant economic impact because of the associated wealth transfers among stakeholders. More importantly, acting on behalf of their shareholders or pursuing their self-interests, the managers of both targets and acquirers may be equally motivated to manipulate earnings prior to an acquisition to generate higher gains for their shareholders or themselves. Building on the grounds of information asymmetry, agency conflicts, stewardship theory, and the revelation principle, this study addresses the question of whether takeover targets employ accrual and real earnings management in the periods prior to the announcement of mergers and acquisitions (M&A). Additionally, this study examines whether acquirers are able to detect targets' earnings management and, in response, adjust the acquisition premium paid so as not to face the risk of overpayment. This study uses an aggregate accruals approach to estimate accrual earnings management, proxied by estimated abnormal accruals, while real earnings management is proxied for by employing models widely used in the accounting and finance literature. The results indicate that takeover targets manipulate their earnings using accruals in the second year with an earnings release prior to the announcement of the M&A. Moreover, when the sample of targets is partitioned according to the method of payment used in the deal, the results are restricted to targets of stock-financed deals. These results are consistent with the argument that targets of cash-only or mixed-payment deals do not have motivations to manage their earnings as strong as those of their stock-financed counterparts, additionally supporting the findings of prior studies that the method of payment in takeovers is value relevant. The findings also indicate that takeover targets manipulate earnings upwards by cutting discretionary expenses in the year prior to the acquisition, while they do not do so by manipulating sales or production costs. Again, when the sample is partitioned according to the method of payment, the results are restricted to targets of stock-financed deals, providing further robustness to the results derived under the accrual-based models. Finally, this study finds evidence suggesting that acquirers are fully aware of the accrual-based techniques employed by takeover targets and can unveil such manipulation practices. These results are robust to alternative accrual and real earnings management proxies, as well as to controlling for the method of payment in the deal.
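
The aggregate accruals approach referred to above is conventionally operationalized with a Jones-type model: normal accruals are fitted cross-sectionally, and the residual serves as the abnormal-accrual proxy. A standard statement of the modified Jones specification (our notation; the paper's exact specification may differ):

```latex
\frac{TA_{it}}{A_{i,t-1}}
  \;=\; \beta_{1}\,\frac{1}{A_{i,t-1}}
  \;+\; \beta_{2}\,\frac{\Delta REV_{it} - \Delta REC_{it}}{A_{i,t-1}}
  \;+\; \beta_{3}\,\frac{PPE_{it}}{A_{i,t-1}}
  \;+\; \varepsilon_{it} ,
```

where TA denotes total accruals, A lagged total assets, ΔREV and ΔREC the changes in revenues and receivables, and PPE gross property, plant, and equipment; the fitted residual is the abnormal (discretionary) accruals proxy.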

Keywords: accrual earnings management, acquisition premium, real earnings management, takeover targets

Procedia PDF Downloads 91
109 Diagrid Structural System

Authors: K. Raghu, Sree Harsha

Abstract:

The interrelationship between the technology and architecture of tall buildings is investigated from the emergence of tall buildings in the late 19th century to the present. In the late 19th century, early designs of tall buildings recognized the effectiveness of diagonal bracing members in resisting lateral forces, and most of the structural systems deployed for early tall buildings were steel frames with diagonal bracings of various configurations, such as X, K, and eccentric. Through this historical research, a filtering concept distinguishing original and remedial technology is developed, through which one can clearly understand the interrelationship between technical evolution and architectural aesthetics, and the further stylistic transition of tall buildings. Diagonalized grid structures, 'diagrids', have emerged as one of the most innovative and adaptable approaches to structuring buildings in this millennium. Variations of the diagrid system have evolved to the point of making its use non-exclusive to tall buildings; diagrid construction is also found in a range of innovative mid-rise steel projects. Contemporary design practice for tall buildings is reviewed, and design guidelines are provided for new design trends. Investigated in depth are the behavioral characteristics and design methodology of diagrid structures, which emerge as a new direction in the design of tall buildings with their powerful structural rationale and symbolic architectural expression. Moreover, new technologies for tall building structures and facades are developed for performance enhancement through design integration, and their architectural potential is explored. Considering the above, the analysis and design of 40-100 storey diagrid steel buildings is carried out using E-TABS software, with various diagrid angles investigated for the entire building, which is helpful in reducing the steel requirement for the structure. The present project undertakes wind analysis and seismic analysis for the lateral loads acting on the structure due to wind loads and earthquake loads, together with gravity loads. All structural members are designed as per IS 800-2007, considering all load combinations. A comparison of results in terms of time period, top storey displacement, and inter-storey drift is carried out. Secondary effects such as temperature variations are not considered in the design, the variation being assumed small.

Keywords: diagrid, bracings, structural, building

Procedia PDF Downloads 352
108 Design and Construction Demeanor of a Very High Embankment Using Geosynthetics

Authors: Mariya Dayana, Budhmal Jain

Abstract:

Kannur International Airport Ltd. (KIAL) is a new greenfield airport project with airside development on an undulating terrain with an average height of 90 m above Mean Sea Level (MSL) and a maximum height of 142 m. Accommodating the desired runway length and the Runway End Safety Areas (RESA) at both ends along the proposed alignment resulted in 45.5 million cubic meters of cutting and filling. The insufficient availability of land for the construction of a free-slope embankment at the RESA 07 end led to the design and construction of a Reinforced Soil Slope (RSS) with a maximum slope of 65 degrees. An embankment fill of 70 m average height with steep slopes, located in a high-rainfall area, is a unique feature of this project. The design and construction were challenging, the alignment being asymmetrical with curves and bends. The fill was reinforced with high-strength uniaxial geogrids laid perpendicular to the slope. Weld mesh wrapped with coir mat acted as the facia units to protect against surface failure. Face anchorage was also provided by wrapping the geogrids along the facia units where the slope angle was steeper than 45 degrees. Considering the high rainfall received at this table-top airport site, an extensive drainage system was designed for the high embankment fill. Gabion walls up to 10 m in height were also designed and constructed along the boundary to accommodate the toe of the RSS fill beside the jeepable track at the base level. The design of the RSS fill was carried out using ReSSA software and verified by PLAXIS 2D modeling. Both slip-surface failure and wedge failure cases were considered in static and seismic analyses, for local and global failure cases. The site-won excavated laterite soil was used as the fill material for the construction. Extensive field and laboratory tests were conducted during the construction of the RSS system for quality assurance. This paper presents a case study detailing the design and construction of a very high embankment using geosynthetics for the provision of the runway length and RESA areas.

Keywords: airport, embankment, gabion, high strength uniaxial geogrid, KIAL, laterite soil, PLAXIS 2D

Procedia PDF Downloads 139
107 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China

Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan

Abstract:

The significant effect of CO₂ on the global climate and the environment has gained increasing concern worldwide. Enhanced oil recovery (EOR) associated with sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations, since it improves oil recovery from the existing oil reservoir and strengthens the link between large-scale CO₂ capture and geological sequestration. Consequently, practical measures are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs from twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct the 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity and permeability, as well as lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone, with proportions of 38.09%, 32.42%, and 29.49%, respectively. Well-log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2-36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios. These estimated values of porosity, permeability, lithofacies, and net-to-gross were upscaled and distributed laterally using Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate the 3D reservoir geological models. The reservoir geological models show that there are lateral heterogeneities in the reservoir properties and lithofacies, and that the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
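
For orientation, theoretical CO₂ storage capacity estimates of this kind are typically volumetric, of the generic form below (a common formulation, e.g. in the US DOE atlas methodology; not necessarily the exact equation used in this study):

```latex
M_{\mathrm{CO_2}} \;=\; A \, h \, \phi \,\bigl(1 - S_{w}\bigr)\, \rho_{\mathrm{CO_2}} \, E ,
```

where A is the reservoir area, h the net thickness, φ the porosity, S_w the water saturation, ρ_CO₂ the CO₂ density at reservoir conditions, and E a storage efficiency factor.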

Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32

Procedia PDF Downloads 146
106 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environmental changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, often due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior reduces the array output signal-to-interference-plus-noise ratio (SINR). It is therefore worth developing robust techniques to deal with mismatches arising from local scattering environments. Regarding implementation, the required computational complexity becomes enormous when the beamformer is equipped with a massive antenna array. To alleviate this difficulty, a partially adaptive GSC with fewer adaptive degrees of freedom and faster adaptive response has been proposed in the literature. Unfortunately, conventional GSC-based adaptive beamformers have been shown to be very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer that addresses these mismatch problems. The proposed beamformer adaptively estimates the actual direction of the desired signal using the presumed steering vector and the received array data snapshots. We use the predefined steering vector and a presumed angle tolerance range to carry out the estimation of an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created; projection matrices related to this matrix are then generated and used to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal-blocking matrix needed for adaptive beamforming can be easily found. With the proposed GSC-based beamformer, the performance degradation caused by the considered local scattering environments is effectively mitigated. To further enhance beamforming performance, a signal subspace projection matrix is also introduced into the proposed beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
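
As background, the sketch below builds a textbook narrowband GSC from a presumed steering vector: a quiescent weight, a blocking matrix spanning the steering vector's null space, and the MMSE-optimal adaptive weight. It illustrates only the standard GSC structure that the paper builds on, not the authors' robust direction-estimation scheme; the scenario parameters are assumed.

```python
import numpy as np

def steering(n_sensors, theta_deg, d=0.5):
    """Uniform linear array steering vector (d in wavelengths)."""
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

rng = np.random.default_rng(0)
N, snapshots = 10, 2000
a = steering(N, 0.0)             # presumed SOI direction: broadside
a_j = steering(N, 30.0)          # interferer at 30 degrees

# Simulated snapshots: SOI + interference + noise
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
j = 3 * (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots))
n = 0.5 * (rng.standard_normal((N, snapshots))
           + 1j * rng.standard_normal((N, snapshots)))
X = np.outer(a, s) + np.outer(a_j, j) + n
R = X @ X.conj().T / snapshots   # sample covariance matrix

# GSC components
w_q = a / (a.conj() @ a)                     # quiescent weight vector
U, _, _ = np.linalg.svd(a.reshape(-1, 1))
B = U[:, 1:]                                 # blocking matrix: B^H a = 0
w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
w = w_q - B @ w_a                            # overall GSC weight
y = w.conj() @ X                             # beamformer output
```

With an exact steering vector this weight cancels the interferer while passing the SOI; steering mismatch makes the blocking matrix leak the SOI into the adaptive path, which is exactly the degradation the paper targets.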

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 81
105 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed

Authors: Esmat Kamel

Abstract:

This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the E.U_26 and the Agadir_4 countries, once we control for the evolution of the Agadir countries' exports to the rest of the world. The next valid question concerns any remarkable variation in the spatial/sectoral structure of exports, and the extent to which it has been induced by the Agadir agreement itself, precisely after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The empirical dataset, covering the period 2000-2009, was designed to account for sector-specific final export and intermediate flows, and the bilateral structured gravity model was custom-tailored to capture sector- and regime-specific rules of origin; the Poisson Pseudo Maximum Likelihood (PPML) estimator was used to estimate the gravity equation. The methodological approach is threefold. First, a hierarchical cluster analysis classifies final export flows showing a certain degree of linkage with each other; the analysis yields three main sectoral clusters of exports between the Agadir_4 and the E.U_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare-parts sectors. Second, export flows from the three clusters that were treated with diagonal rules of origin are compared, through a double-differences approach, against an equally comparable untreated control group. Third, the results are verified through a robustness check based on propensity score matching, validating that the same sectoral final export and intermediate flows increased when rules of origin were relaxed. Across this analysis, the interaction term combining the treatment effect and time turned out to be partially significant for 13 of the 17 covered sectors, further asserting that treatment with diagonal rules of origin contributed to increasing the Agadir_4 countries' final and intermediate exports to the E.U_26 by 335% on average, and to changing the structure and composition of Agadir_4 exports to the E.U_26 countries.
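
The sketch below shows how a PPML gravity equation with a double-differences interaction of this kind is typically estimated in Python with statsmodels. The column names and the toy data frame are assumptions for illustration, not the paper's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical bilateral panel: exporter-importer-sector-year rows
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "exports":   rng.gamma(2.0, 100.0, 400),   # trade flow (can include zeros)
    "log_gdp_o": rng.normal(10, 1, 400),
    "log_gdp_d": rng.normal(10, 1, 400),
    "log_dist":  rng.normal(7, 0.5, 400),
    "treated":   np.repeat([0, 1], 200),        # diagonal RoO treatment dummy
    "post":      np.tile([0, 1], 200),          # post-adoption period dummy
})

# PPML keeps zero flows and is robust to heteroskedasticity;
# treated:post is the double-differences interaction of interest.
model = smf.glm("exports ~ log_gdp_o + log_gdp_d + log_dist + treated * post",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(result.summary())
```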

Keywords: agadir association agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin

Procedia PDF Downloads 296
104 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market

Authors: Byomakesh Debata, Jitendra Mahakud

Abstract:

The recent financial crisis was characterized by massive monetary policy interventions by the central bank, which amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks traded continuously from 2002 to 2015. The study employs a multivariate VAR model along with VAR Granger causality tests, impulse response functions, block exogeneity tests, and variance decomposition to analyze both the direction and the magnitude of the relationship between monetary policy and market liquidity. Our analysis posits a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during monetary tightening. To ascertain whether our findings are consistent across all periods, we divided the sample into a pre-crisis period (2002-2007) and a post-crisis period (2007-2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period, whereas the pre-crisis period exhibits only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we find most of the liquidity variables significant throughout the sample period, revealing the predictability of monetary policy for aggregate market liquidity. This study contributes to the existing literature by documenting strong predictability of monetary policy for stock liquidity in an emerging economy with an order-driven market-making system such as India. Most previous studies were carried out in developing economies with quote-driven or hybrid market-making systems, and their results are ambiguous across periods. In an eclectic sense, this study may be considered a baseline for further work on the macroeconomic determinants of stock liquidity at both the individual and aggregate levels.
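
For concreteness, the sketch below runs the core steps of such an analysis (VAR estimation, Granger causality, impulse responses, variance decomposition) with statsmodels on a hypothetical two-variable system; the series names and data are assumptions, not the paper's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical monthly series: a policy rate and a liquidity measure
rng = np.random.default_rng(0)
T = 160
rate = np.cumsum(rng.normal(0, 0.1, T))               # call money rate proxy
liq = 0.5 * np.roll(rate, 1) + rng.normal(0, 0.2, T)  # liquidity lags the rate
df = pd.DataFrame({"call_rate": rate, "turnover": liq}).diff().dropna()

res = VAR(df).fit(maxlags=8, ic="aic")   # lag order chosen by AIC

# Does the policy rate Granger-cause liquidity (and not vice versa)?
print(res.test_causality("turnover", ["call_rate"], kind="f").summary())

irf = res.irf(12)     # impulse responses to unit innovations, 12 steps ahead
fevd = res.fevd(12)   # forecast error variance decomposition
irf.plot(orth=True)   # orthogonalized IRFs (plotting needs matplotlib)
```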

Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model

Procedia PDF Downloads 346
103 Stability-Indicating RP-HPLC Method Development, Validation and Kinetic Study for Amiloride Hydrochloride and Furosemide in Pharmaceutical Dosage Form

Authors: Jignasha Derasari, Patel Krishna M, Modi Jignasa G.

Abstract:

The chemical stability of pharmaceutical molecules is a matter of great concern, as it affects the safety and efficacy of the drug product. Stability testing data provide the basis for understanding how the quality of a drug substance and drug product changes with time under the influence of various environmental factors. Such data also help in selecting a proper formulation and package, as well as proper storage conditions and shelf life, which are essential for regulatory documentation. The ICH guideline states that stress testing is intended to identify the likely degradation products, which further helps in determining the intrinsic stability of the molecule, establishing degradation pathways, and validating the stability-indicating procedures. A simple, accurate, and precise stability-indicating RP-HPLC method was developed and validated for the simultaneous estimation of amiloride hydrochloride and furosemide in tablet dosage form. Separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 µm particle size) using a mobile phase consisting of orthophosphoric acid:acetonitrile (50:50 % v/v; pH 3.5 adjusted with 0.1% TEA in water) at a flow rate of 1.0 ml/min in isocratic mode, with an injection volume of 20 µl and detection at 283 nm. Retention times for amiloride hydrochloride and furosemide were 1.810 min and 4.269 min, respectively. Linearity was obtained in the ranges of 40-60 µg/ml and 320-480 µg/ml, with correlation coefficients of 0.999 and 0.998 for amiloride hydrochloride and furosemide, respectively. A forced degradation study was carried out on the combined dosage form under various stress conditions, including hydrolysis (acid and base), oxidative, and thermal conditions, as per ICH guideline Q2 (R1). The RP-HPLC method showed adequate separation of amiloride hydrochloride and furosemide from their degradation products. The proposed method was validated as per ICH guidelines for specificity, linearity, accuracy, precision, and robustness for the estimation of amiloride hydrochloride and furosemide in commercially available tablet dosage forms, and the results were found to be satisfactory and significant. The developed and validated stability-indicating RP-HPLC method can be used successfully for marketed formulations. Forced degradation studies generate degradants within a much shorter span of time, mostly a few weeks, and can be used to develop the stability-indicating method, which can later be applied to the analysis of samples generated from accelerated and long-term stability studies. Further, a kinetic study was performed for the different forced degradation parameters of the same combination, which helps in determining the order of the reaction.
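
To illustrate the kinetic analysis step, the sketch below fits an assumed first-order degradation model, ln C = ln C0 - kt, to hypothetical assay data and derives the rate constant and t90 (time for potency to fall to 90%); the data values are invented for demonstration, not the study's results.

```python
import numpy as np

# Hypothetical % assay remaining at stress time points (hours)
t = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 24.0])
conc = np.array([100.0, 96.1, 92.4, 85.3, 78.9, 62.0])

# First-order model: ln C = ln C0 - k t (straight line in ln C vs t)
slope, intercept = np.polyfit(t, np.log(conc), 1)
k = -slope                        # first-order rate constant (1/h)
t90 = np.log(100.0 / 90.0) / k    # time to reach 90% of initial potency

print(f"k = {k:.4f} 1/h, t90 = {t90:.1f} h")
# A near-unity R^2 for this fit (versus fits of C or 1/C against t)
# supports first-order rather than zero- or second-order kinetics.
```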

Keywords: amiloride hydrochloride, furosemide, kinetic study, stability indicating RP-HPLC method validation

Procedia PDF Downloads 438
102 Precursor Signatures of Few Major Earthquakes in Italy Using Very Low Frequency Signal of 45.9 kHz

Authors: Keshav Prasad Kandel, Balaram Khadka, Karan Bhatta, Basu Dev Ghimire

Abstract:

Earthquakes remain a threatening disaster. Being able to predict earthquakes would certainly help prevent substantial loss of life and property. The Very Low Frequency/Low Frequency (VLF/LF) signal band (3-30 kHz), which is effectively reflected from the D-layer of the ionosphere, can perhaps be established as a tool to predict earthquakes. On May 20 and May 29, 2012, earthquakes of magnitude 6.1 and 5.8, respectively, struck Emilia-Romagna, Italy. More recently, on August 24, 2016, an earthquake of magnitude 6.2 struck Central Italy (42.706° N, 13.223° E) at 1:36 UT. We present the results obtained from the US Navy VLF transmitter's NSY signal at 45.9 kHz, transmitted from Niscemi in the province of Sicily, Italy, and received at the Kiel Longwave Monitor, Germany, for 2012 and 2016. We analyzed the terminator times, their individual differences, and the nighttime fluctuation counts. We also analyzed trends, dispersion, and nighttime fluctuation, which gave us possible precursors to these earthquakes. Since perturbations in VLF amplitude can also be caused by various other factors, such as lightning, geomagnetic activity (storms, auroras, etc.) and solar activity (flares, UV flux, etc.), we filtered out the possible perturbations due to these agents to ensure that the perturbations seen in the VLF/LF amplitudes were precursors to earthquakes. Because our TRGCP path is oriented north-south, the sunrise and sunset times at the transmitter and receiver locations nearly coincide, making the VLF/LF propagation path smoother and, we expect, the data more natural. To our surprise, we found many clear anomalies (as precursors) in the terminator times 5 to 16 days before the earthquakes. Moreover, using the nighttime fluctuation method, we found clear anomalies 5 to 13 days prior to the main earthquakes. This correlates well with the findings of previous authors that ionospheric perturbations are seen a few days to one month before seismic activity. In addition, we observed an unexpected decrease in dispersion for certain anomalies where it was expected to increase, which does not fully support our finding. To resolve this problem, we devised a new parameter, the nighttime dispersion. Upon analysis, this parameter decreases significantly on days of nighttime anomalies, thereby supporting the precursors to a much greater extent.
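
As an illustration of the nighttime fluctuation analysis described above, the sketch below computes, for each night, the trend, dispersion, and fluctuation count of VLF amplitude residuals against a quiet-time reference, flagging nights beyond a 2-sigma threshold. The synthetic data, function names, and thresholds are assumptions for demonstration, not the study's values.

```python
import numpy as np

def nighttime_statistics(amplitude, quiet_mean, sigma_factor=2.0):
    """Per-night VLF amplitude statistics relative to a quiet-day reference.

    amplitude  -- 2D array (nights x samples per night), received amplitude (dB)
    quiet_mean -- 1D array (samples per night): mean over quiet reference days
    Returns trend, dispersion, and fluctuation count per night.
    """
    residual = amplitude - quiet_mean        # deviation from quiet reference
    trend = residual.mean(axis=1)            # nightly mean shift
    dispersion = residual.var(axis=1)        # nightly variance of residuals
    sigma = residual.std()                   # overall residual spread
    # count samples exceeding +/- sigma_factor * sigma on each night
    fluct_count = (np.abs(residual) > sigma_factor * sigma).sum(axis=1)
    return trend, dispersion, fluct_count

# Synthetic example: 30 nights x 240 one-minute samples
rng = np.random.default_rng(0)
amp = rng.normal(-80.0, 1.0, (30, 240))
amp[22] += rng.normal(0.0, 3.0, 240)         # an artificially perturbed night
trend, disp, counts = nighttime_statistics(amp, amp.mean(axis=0))
print("anomalous nights:", np.where(counts > counts.mean() + 2 * counts.std())[0])
```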

Keywords: D-layer, TRGCP (Transmitter Receiver Great Circle Path), terminator times, VLF/LF

Procedia PDF Downloads 161
101 Introduction to Two Artificial Boundary Conditions for Transient Seepage Problems and Their Application in Geotechnical Engineering

Authors: Shuang Luo, Er-Xiang Song

Abstract:

Many problems in geotechnical engineering, such as foundation deformation, groundwater seepage, seismic wave propagation, and geothermal transfer, may involve analysis of ground that can be regarded as extending to infinity. Consideration must therefore be given to how to handle the unbounded domain when using numerical methods such as the finite element method (FEM), the finite difference method (FDM), or the finite volume method (FVM). A simple artificial boundary approach, derived from the analytical solutions for transient radial seepage problems, is first introduced. It should be noted, however, that the analytical solutions used to derive this artificial boundary are particular solutions under certain boundary conditions, such as a constant hydraulic head at the origin or a constant pumping rate of the well. For unbounded domains with unsteady boundary conditions, a more sophisticated artificial boundary approach is presented. By applying Laplace transforms and introducing specially defined auxiliary variables, the global artificial boundary conditions (ABCs) are simplified to local ones, so that computational efficiency is enhanced significantly. The two local ABCs are implemented in a finite element program so that various seepage problems can be calculated. The approaches are first verified by computing a one-dimensional radial flow problem and then tentatively applied to more general two-dimensional cylindrical and plane problems. Numerical calculations show that the local ABCs not only give good results for one-dimensional axisymmetric transient flow but are also applicable to more general problems, such as axisymmetric two-dimensional cylindrical problems and even planar two-dimensional flow problems for well doublets and well groups. An important advantage of the latter local boundary is its applicability to seepage under rapidly changing unsteady boundary conditions; even the computational results on the truncated boundary are usually quite satisfactory, and in this respect it is superior to the former local boundary. Simulation of relatively long operational times demonstrates, to a certain extent, the numerical stability of the local boundary. The solutions of the two local ABCs are compared with each other and with those obtained using a large element mesh, which confirms their satisfactory performance and clear superiority over the large-mesh model.
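
For context, the sketch below evaluates the classical transient radial seepage (Theis) solution, a particular constant-rate solution of the type from which such artificial boundaries are derived; the aquifer parameters are assumed for illustration only.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1 = Theis well function

def theis_drawdown(r, t, Q, T, S):
    """Drawdown s(r, t) for transient radial flow to a constant-rate well:
    s = Q / (4 pi T) * W(u),  u = r^2 S / (4 T t)
    r: radius (m), t: time (s), Q: pumping rate (m^3/s),
    T: transmissivity (m^2/s), S: storativity (-).
    """
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Assumed illustrative parameters; radii where a mesh might be truncated
r = np.array([10.0, 50.0, 200.0, 1000.0])
print(theis_drawdown(r, t=86400.0, Q=0.01, T=1e-3, S=1e-4))
```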

Keywords: transient seepage, unbounded domain, artificial boundary condition, numerical simulation

Procedia PDF Downloads 273
100 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method

Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer

Abstract:

This study puts forward a method for analyzing the sloshing characteristics of the liquid in a tuned sloshing absorber system by using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers' attention as seismic load dampers in structures due to their practical and logistical convenience. The absorber is a liquid that sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be used to verify the results of numerical analysis and to assess the accuracy of assumptions about the motion of the liquid. There are extensive theoretical and experimental studies in the literature on the dynamic and structural behavior of tuned sloshing dampers. Most of these works attempt to estimate sloshing quantities such as the free-surface motion and the total force applied by the liquid to the container wall. For these purposes, sensors such as load cells and ultrasonic sensors are prevalent in experimental work. Load cells can only measure force and require tests both with and without liquid to isolate the pure sloshing force. Ultrasonic level sensors give point-wise measurements and hence cannot capture the motion of the whole free surface; furthermore, they may give incorrect data when the liquid splashes. In this work, a method for evaluating the sloshing wave height from camera records and image processing techniques is presented. The motion of the liquid and its container, made of a transparent material, is recorded by a high-speed camera aligned with the free surface of the liquid. The captured video is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts by cropping the desired region; by recognizing the regions containing liquid and eliminating noise and splashing, a final picture depicting the free surface of the liquid is obtained. This picture is then used to obtain the height of the liquid along the length of the container. The process is verified against ultrasonic sensors that measure the fluid height at points on the liquid surface.
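
The paper implements this pipeline in MATLAB; the sketch below outlines an equivalent crop-threshold-profile pipeline in Python with OpenCV. The video filename, region of interest, and threshold value are assumptions for illustration.

```python
import cv2
import numpy as np

def free_surface_profile(frame, roi, threshold=60):
    """Extract liquid height (in pixels) along the tank length from one frame.

    frame -- BGR video frame; roi -- (x, y, w, h) crop of the tank interior.
    Assumes the liquid appears darker than the background after thresholding.
    """
    x, y, w, h = roi
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.medianBlur(mask, 5)          # suppress splash droplets / noise
    # topmost liquid pixel in each column gives the free-surface profile
    heights = h - np.argmax(mask > 0, axis=0)
    heights[mask.max(axis=0) == 0] = 0      # columns with no liquid detected
    return heights

cap = cv2.VideoCapture("sloshing_test.avi")  # hypothetical recording
profiles = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    profiles.append(free_surface_profile(frame, roi=(100, 50, 400, 300)))
cap.release()
wave_height = np.array(profiles)  # frames x columns, heights in pixels
```

A known pixel-to-millimeter scale (e.g., from a ruler in the frame) then converts the profile to physical wave height for comparison with the ultrasonic point measurements.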

Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper

Procedia PDF Downloads 320