Search results for: Linear Equalizers.
92 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility
Authors: Ali Hamadi Dicko, Nicolas Tong-Yette, Benjamin Gilles, François Faure, Olivier Palombi
Abstract:
A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models efficiently interact while respecting the principles of Anatomy and Mechanics. The mobility, the intradiscal pressure, the facet joint force and the instantaneous center of rotation of the lumbar spine are validated against the experimental and theoretical results of the literature on flexion, extension, lateral bending as well as axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches without any loss of accuracy.
Keywords: Hybrid, modeling, fast simulation, lumbar spine.
91 Explicit Solution of an Investment Plan for a DC Pension Scheme with Voluntary Contributions and Return Clause under Logarithm Utility
Authors: Promise A. Azor, Avievie Igodo, Esabai M. Ase
Abstract:
The paper merges the return of premium clause and voluntary contributions to investigate retirees' investment plans in a defined contribution (DC) pension scheme with a portfolio comprising a risk-free asset and a risky asset whose price process is described by geometric Brownian motion (GBM). The paper considers additional voluntary contributions paid by members, the charge on balance levied by pension fund administrators, and the mortality risk of members of the scheme during the accumulation period by introducing a return of premium clause. To achieve this, the Weibull mortality force function is used to establish the mortality rate of members during the accumulation phase. Furthermore, an optimization problem is obtained from the Hamilton-Jacobi-Bellman (HJB) equation using a dynamic programming approach. The Legendre transformation method is then used to transform the HJB equation, a nonlinear partial differential equation, into a linear partial differential equation, which is solved for the value function and the optimal distribution plan under a logarithm utility function. Finally, numerical simulations of the impact of some important parameters on the optimal distribution plan were obtained, and it was observed that the optimal distribution plan is inversely proportional to the initial fund size, the predetermined interest rate, additional voluntary contributions, the charge on balance and the instantaneous volatility.
Keywords: Legendre transform, logarithm utility, optimal distribution plan, return clause of premium, charge on balance, Weibull mortality function.
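For readers unfamiliar with the setting sketched in the abstract above, the standard forms usually assumed in such DC-pension models are given below; the exact dynamics, notation and wealth equation used by the authors may differ, so this is only an illustrative sketch.

```latex
% Risky-asset price under geometric Brownian motion (standard form; symbols assumed)
\[ dS_t = \mu S_t\,dt + \sigma S_t\,dW_t \]
% Logarithm utility of terminal wealth X_T used to rank distribution plans
\[ U(X_T) = \ln X_T \]
```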
90 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle
Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad
Abstract:
Islamic culture was responsible for a great deal of development in astronomy and science in the medieval period, and in geometry likewise. Geometric patterns are prominent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square and the hexagon. Owing to its essential nature, each of these geometric shapes has its own specific structure. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. In order to explain the principles of geometric interaction between the square and the equal triangle, the first step of the analysis illustrates all types of their linear forces individually, and the second step illustrates the forces between them. In this analysis, angles are created from the intersection of their directions. All angles are categorized into groups and the mathematical expressions among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent the unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and the mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing the plan design.
Keywords: Angle, architecture, design, equal triangle, generating, grammar, square and structural hierarchy.
89 Application of Design Thinking for Technology Transfer of Remotely Piloted Aircraft Systems for the Creative Industry
Authors: V. Santamarina Campos, M. de Miguel Molina, B. de Miguel Molina, M. Á. Carabal Montagud
Abstract:
With this contribution, we want to show a successful example of the application of the Design Thinking methodology in the European project 'Technology transfer of Remotely Piloted Aircraft Systems (RPAS) for the creative industry'. The use of this methodology has allowed us to design and build a drone based on the real needs of prospective users. It has demonstrated that this is a powerful tool for generating innovative ideas in the field of robotics, by focusing its effectiveness on understanding and solving real user needs. In this way, with the support of an interdisciplinary team composed of creatives, engineers and economists, together with the collaboration of prospective users from three European countries, a non-linear work dynamic has been created. This teamwork has generated a sense of appreciation towards the creative industries, through continuously adaptive, inventive, and playful collaboration and communication, which has facilitated the development of prototypes. These have been designed to enable filming and photography in interior spaces within 13 sectors of the European creative industries: Advertising, Architecture, Fashion, Film, Antiques and Museums, Music, Photography, Television, Performing Arts, Publishing, Arts and Crafts, Design and Software. Furthermore, it has married the real needs of the creative industries with what is technologically and commercially viable. As a result, a product of great value has been obtained, which offers new business opportunities for small companies across this sector.
Keywords: Design thinking, design for effectiveness, methodology, active toolkit, storyboards, storytelling, PAR, focus group, innovation, RPAS, indoor drone, robotics, TRL, aerial film, creative industries, end-users.
88 Measurements of MRI R2* Relaxation Rate in Liver and Muscle: Animal Model
Authors: Chiung-Yun Chang, Po-Chou Chen, Jiun-Shiang Tzeng, Ka-Wai Mac, Chia-Chi Hsiao, Jo-Chi Jao
Abstract:
This study aimed to measure effective transverse relaxation rates (R2*) in the liver and muscle of normal New Zealand White (NZW) rabbits. The R2* relaxation rate has been widely used in various hepatic diseases involving iron overload to quantify the iron content of the liver. R2* is defined as the reciprocal of the T2* relaxation time and mainly depends on the constituents of the tissue; different tissues therefore have different R2* relaxation rates. The signal intensity decay in magnetic resonance imaging (MRI) may be characterized by R2* relaxation rates. In this study, a 1.5T GE Signa HDxt whole body MR scanner equipped with an 8-channel high resolution knee coil was used to observe R2* values in NZW rabbit liver and muscle. Eight healthy NZW rabbits weighing 2-2.5 kg were recruited. After anesthesia using a Zoletil 50 and Rompun 2% mixture, the abdomen of the rabbit was landmarked at the center of the knee coil to perform a 3-plane localizer scan using a fast spoiled gradient echo (FSPGR) pulse sequence. Afterwards, multi-planar fast gradient echo (MFGR) scans were performed with 8 different echo times (TEs) to acquire images for R2* measurements. Regions of interest (ROIs) in the liver and muscle were measured using an Advantage workstation. Finally, R2* was obtained by a linear regression of ln(SI) on TE. The results showed that the longer the echo time, the smaller the signal intensity. The R2* values of liver and muscle were 44.8 ± 10.9 s-1 and 37.4 ± 9.5 s-1, respectively, implying that the iron concentration of liver is higher than that of muscle. In conclusion, the more iron a tissue contains, the higher its R2*. The correlations between R2* and iron content in NZW rabbits might be valuable for further exploration.
Keywords: Liver, MRI, multi-planar fast gradient echo, muscle, R2* relaxation rate.
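As a rough illustration of the fitting step described in the abstract above (R2* obtained from a linear regression of ln(SI) on TE), a minimal Python sketch follows; the echo times and signal intensities are made-up values, not the study's data.

```python
import numpy as np

# Hypothetical echo times (ms) and mean ROI signal intensities from an MFGR series
te_ms = np.array([2.1, 4.2, 6.3, 8.4, 10.5, 12.6, 14.7, 16.8])
si = np.array([820.0, 700.0, 601.0, 515.0, 442.0, 379.0, 325.0, 279.0])

# Mono-exponential decay SI = SI0 * exp(-TE * R2*) becomes linear after taking logs:
# ln(SI) = ln(SI0) - R2* * TE, so the slope of ln(SI) versus TE is -R2*.
slope, intercept = np.polyfit(te_ms, np.log(si), 1)
r2_star = -slope * 1000.0  # convert from 1/ms to 1/s

print(f"R2* = {r2_star:.1f} s^-1, SI0 = {np.exp(intercept):.1f}")
```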
87 Experimental Investigation of Heat Transfer and Flow of Nano Fluids in Horizontal Circular Tube
Authors: Abdulhassan Abd. K, Sattar Al-Jabair, Khalid Sultan
Abstract:
We have measured the pressure drop and convective heat transfer coefficient of water-based Al (25 nm), Al2O3 (30 nm) and CuO (50 nm) nanofluids flowing through a uniformly heated circular tube in the fully developed laminar flow regime. The experimental results show that the nanofluid friction factor data are in good agreement with the analytical prediction from the Darcy equation for single-phase flow. The experimental results were reduced to the form of Reynolds, Rayleigh and Nusselt numbers, and the local Nusselt number and temperature distributions were obtained as functions of the non-dimensional axial distance from the tube entry. The study established that the nanofluids behave as Newtonian fluids, based on the linear relationship between shear stress and shear rate, for three series of nanofluids (Al, Al2O3 and CuO in water) with concentrations ranging from 0.25 to 2.5 vol%. The thermophysical properties of the nanofluids, namely viscosity, specific heat and density, were also measured in order to check the validity of the property correlations developed by other researchers in this area; the difference between the correlations and the measurements did not exceed 3.5%. The study also demonstrated that the increase in heat transfer coefficient for the Al, Al2O3 and CuO-water nanofluids was 45%, 32% and 25% respectively with insulation, and 36%, 23% and 19% without insulation, showing that the insulated case gives the better heat transfer enhancement. Three types of nanoparticles were used, one metallic and two oxides, to establish which gives the best increase in heat transfer.
Keywords: Newtonian, NUR factor, Brownian motion
86 Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model
Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi
Abstract:
Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface ground water. Since its direct measurement is time consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural network (NN) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, van Genuchten retention model parameters (i.e. θ_r, α, and n) as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at −33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database was divided into three groups: 187 for training, 62 for validation (to avoid over-training), and 62 for testing the NN model. A commercial neural network toolbox of MATLAB software with a multi-layer perceptron model and the back propagation algorithm was used for the training procedure. Statistical parameters such as the coefficient of determination (R2) and the mean square error (MSE) were used to evaluate the developed NN model. The best number of neurons in the middle layer of the NN model for methods (1) and (2) was calculated as 44 and 6, respectively. The R2 and MSE values of the test phase were 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), respectively, which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
Keywords: Neural network, Saturated hydraulic conductivity, Soil physical properties.
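The training setup described in the abstract above (a multi-layer perceptron mapping soil properties to saturated hydraulic conductivity, with separate training, validation and test subsets) can be sketched roughly as follows; scikit-learn is used here instead of the MATLAB toolbox mentioned in the abstract, and the data are randomly generated placeholders rather than UNSODA samples.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Placeholder predictors: sand, clay, bulk density, theta_r, alpha, n, effective porosity
X = rng.random((311, 7))
# Placeholder target standing in for (log-transformed) saturated hydraulic conductivity
y = X @ rng.random(7) + 0.1 * rng.standard_normal(311)

# Split 187 / 62 / 62 as in the abstract (training / validation / test)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=187, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=62, random_state=1)

# One hidden layer of 6 neurons, the best size reported for method (2) in the abstract
model = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=1)
model.fit(X_train, y_train)

print("validation R2:", r2_score(y_val, model.predict(X_val)))
print("test R2      :", r2_score(y_test, model.predict(X_test)))
print("test MSE     :", mean_squared_error(y_test, model.predict(X_test)))
```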
85 A Study of Shear Stress Intensity Factor of PP and HDPE by a Modified Experimental Method together with FEM
Authors: Md. Shafiqul Islam, Abdullah Khan, Sharon Kao-Walter, Li Jian
Abstract:
Shear testing is one of the most complex testing areas, where the available methods and specimen geometries differ from each other. Therefore, a modified shear test specimen (MSTS) combining the simple uniaxial test with a zone of interest (ZOI) is tested, which gives an almost pure shear state. In this study, material parameters of polypropylene (PP) and high density polyethylene (HDPE) are first measured by tensile tests with a dogbone-shaped specimen. These parameters are then used as input for the finite element analysis. Secondly, the specially designed specimen (MSTS) is used to perform shear stress tests in a tensile testing machine to obtain results in terms of forces and extension, crack initiation, etc. Scanning Electron Microscopy (SEM) is also performed on the shear fracture surface to examine the material behavior. These experiments are then simulated by the finite element method and compared with the experimental results in order to confirm the simulation model. The shear stress state is inspected to assess the usability of the proposed shear specimen. Finally, a geometry correction factor is established for these two materials in this specific loading and geometry with a notch, using Linear Elastic Fracture Mechanics (LEFM). From these results, the strain energy of shear failure and the stress intensity factor (SIF) in shear of these two polymers are discussed in the particular application of the screw cap opening of medical or food packages with a tamper-evident safety solution.
Keywords: Shear test specimen, Stress intensity factor, Finite Element simulation, Scanning electron microscopy, Screw cap opening.
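The geometry-correction idea mentioned in the abstract above follows the usual LEFM form for a notched specimen; the expression below is the standard textbook relation, not the specific calibration derived in the paper, and the symbols are assumptions for illustration.

```latex
% Stress intensity factor for a notch of length a under a nominal shear stress tau;
% Y is the dimensionless geometry correction factor established from the FE model.
\[ K_{II} = Y\,\tau\,\sqrt{\pi a} \]
```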
84 An Overall Approach to the Communication of Organizations in Conventional and Virtual Offices
Authors: Mehmet Altınöz
Abstract:
Organizational communication is an administrative function, crucial especially for executives in the implementation of organizational and administrative functions. Executives spend a significant part of their time on communicative activities. While doing his or her daily routine, arranging meeting schedules, speaking on the telephone, reading or replying to business correspondence, or fulfilling the control functions within the organization, an executive typically engages in communication processes. Efficient communication is the principal device for the adequate implementation of administrative and organizational activities. For this purpose, management needs to specify the kind of communication system to be set up and the kind of communication devices to be used. Communication is vital for any organization. In conventional offices, communication takes place within the hierarchical pyramid called the organizational structure, and is known as formal or informal communication. Formal communication is the type that works in specified structures within the organizational rules and towards the organizational goals. Informal communication, on the other hand, is the unofficial type taking place among staff as face-to-face or telephone interaction. Communication in virtual as well as conventional offices is essential for obtaining the right information in administrative activities and decision-making. Virtual communication technologies increase the efficiency of communication, especially in virtual teams. Group communication is strengthened through an inter-group central channel. Further, ease of information transmission makes it possible to reach the information at the source, allowing efficient and correct decisions. Virtual offices can present as a whole the elements of information which conventional offices produce in different environments. At present, virtual work has become a reality with its pros and cons, and will probably spread very rapidly in coming years, in line with the growth in information technologies.
Keywords: Organization, conventional office, virtual office, communication, communication model, communication functions, communication methods, vertical communication, linear communication, diagonal communication
83 A Case Study on the Efficacy of Technical Laboratory Safety in Polytechnic
Authors: Zulhisyam Salleh, Erita M. Mazlan, Saiful A. Mazlan, Norzainariah A. Hassan, Fizatul A. Patakor
Abstract:
Technical laboratories are typically considered highly hazardous places in polytechnic institutions when addressing the problems of high incidence and fatality rates. In conjunction with several topics covered in the technical curriculum, safety and health precautions should be highlighted in order to convey a few key ideas of being safe. Therefore, an assessment of safety awareness in terms of safety and health regarding hazards and risks at laboratories is needed and has to be incorporated into technical education and other training programmes. The purpose of this study was to determine the efficacy of technical laboratory safety in one of the polytechnics in the northern region. The study examined three related issues: the availability of safety material and equipment, the safety practice adopted by technical teachers, and the administrators' safety attitudes in enforcing safety among the students. A model of technical laboratory efficacy was developed to test the linear relationship between existing safety material and equipment, teachers' safety practice and administrators' attitude in enforcing safety, and to identify which of these technical laboratory safety issues was the most pertinent factor in realizing safety in the technical laboratory. This was done by analyzing survey-based data sets obtained from a sample of 210 students in the polytechnic. The Pearson correlation was used to measure the association between the variables and to test the research hypotheses. The results of the study show that there was a significant correlation between existing safety material and equipment, the safety practice adopted by teachers, and the administrators' attitude. There was also a significant relationship between technical laboratory safety and the safety practice adopted by teachers, and between technical laboratory safety and the administrators' attitude. Hence, the safety practice adopted by teachers and the administrators' attitude are vital in realizing technical laboratory safety.
Keywords: Polytechnic, Safety attitudes, Safety practices, Technical laboratory
82 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race
Authors: Joonas Pääkkönen
Abstract:
In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling.
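A rough Python sketch of the two ingredients named in the abstract above follows: the Fenton-Wilkinson moment-matching approximation of a sum of independent log-normal leg times, and a sigmoidal place prediction from the resulting changeover-time distribution. The parameter values, the team-count figure and the exact form of the place regression are illustrative assumptions, not the fitted Jukola 2019 model.

```python
import numpy as np
from scipy.stats import lognorm

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent log-normals LN(mu_i, sigma_i^2)
    by a single log-normal via matching of the first two moments."""
    mus, sigmas = np.asarray(mus), np.asarray(sigmas)
    mean = np.sum(np.exp(mus + sigmas**2 / 2))
    var = np.sum((np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2))
    sigma_z2 = np.log(1 + var / mean**2)
    mu_z = np.log(mean) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# Illustrative log-normal parameters for the first three legs of a relay (times in hours)
mu_z, sigma_z = fenton_wilkinson(mus=[0.0, 0.1, 0.05], sigmas=[0.15, 0.20, 0.18])

# Sigmoidal place regression: expected place grows with the CDF of the changeover time.
n_teams = 1500  # assumed estimate of the total number of teams (German-tank-style estimator)
changeover_h = np.array([2.5, 3.0, 3.5, 4.0])
place = 1 + (n_teams - 1) * lognorm.cdf(changeover_h, s=sigma_z, scale=np.exp(mu_z))
print(np.round(place))
```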
81 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading
Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth
Abstract:
The ultimate load analysis of RC pile groups has assumed considerable significance under liquefying soil conditions, especially following post-earthquake studies of the 1964 Niigata, 1995 Kobe and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations of pile groups subjected to monotonically increasing lateral loads under design amounts of pile axial loading. Soil liquefaction has been considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation, in turn, is related to the liquefaction potential of the site and the magnitude of the seismic shaking. Since the piles in the group can reach their extreme deflections and rotations during increased amounts of lateral loading, a precise modeling of the inelastic behavior of the pile cross-section is carried out, considering the complete stress-strain behavior of concrete, with and without confinement, and of reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of the individual piles is considered in the overall collapse modes. The model is analysed using Riks analysis in finite element software to check the post-buckling behavior and plastic collapse of the piles. The results confirm the kinds of failure modes predicted by centrifuge test results reported by researchers on pile groups, although the pile material used is significantly different from that of the simulation model. The extension of the present work promises an important contribution to design codes for pile groups in liquefying soils.
Keywords: Collapse load analysis, inelastic buckling, liquefaction, pile group.
80 Estimating Spatial Disaggregation of Urban Thermal Responsiveness on Summer Diurnal Range with a Numerical Modeling Approach in Bangkok, Thailand
Authors: Manat Srivanit, Hokao Kazunori
Abstract:
Facing public concern about the environment and climate change, city planners are now considering the urban climate in their planning choices. The urban climate across different urban morphologies in the central Bangkok Metropolitan Area (BMA) is used to investigate the effects of both the composition and the configuration of urban morphology indicators on the summer diurnal range of the urban climate, using correlation analyses and multiple linear regressions. The results first indicate that approximately 92.6% of the variation in the average maximum daytime near-surface air temperature (Ta) was explained jointly by two composition variables of the urban morphology indicators, the open space ratio (OSR) and the floor area ratio (FAR). It has been possible to determine the membership of the sample areas in local climate zones (LCZs) using these urban morphology descriptors, computed automatically from GIS and remotely sensed data. Finally, large temperature differences were found among widely separated zones: on average, the maximum daytime near-surface temperature in the city center reached 35.48±1.04 ºC (mean±S.D.), compared with 28.27±0.21 ºC towards the outskirts of Bangkok, and the difference can exceed 8 ºC in extreme events. A spatially disaggregated map of urban thermal responsiveness would be helpful for several reasons. First, it would localize urban areas exhibiting different climate behavior over summer daytime and be a good indicator of urban climate variability. Second, when overlaid with a land cover map, this map may contribute to identifying possible urban management strategies to reduce heat wave effects in the BMA.
Keywords: Urban climate, Urban morphology, Local climate zone, Urban planning, GIS and remote sensing
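The regression step reported in the abstract above (about 92.6% of the variation in maximum daytime near-surface air temperature explained jointly by open space ratio and floor area ratio) can be sketched as below; the sample values are invented for illustration and do not come from the Bangkok dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Invented morphology indicators for a handful of sample areas
osr = rng.uniform(0.1, 0.9, 50)        # open space ratio
far = rng.uniform(0.5, 8.0, 50)        # floor area ratio
ta = 36.0 - 4.0 * osr + 0.25 * far + rng.normal(0, 0.3, 50)  # max daytime air temperature (degC)

# Multiple linear regression Ta ~ OSR + FAR with an intercept term
X = sm.add_constant(np.column_stack([osr, far]))
model = sm.OLS(ta, X).fit()
print(model.summary())   # coefficients and R-squared
```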
79 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads with power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation mainly because filters offer high efficiency, simplicity, and are economical. Additionally, possible different frequency response characteristics can work to achieve certain required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine what size single tuned passive filters work in distribution networks best, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltage and harmonic currents in the power system to an acceptable level, and, thus, improve the load power factor. The optimization technique works to minimize voltage total harmonic distortions (VTHD) and current total harmonic distortions (ITHD), where maintaining a given power factor at a specified range is desired. According to the IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.
Keywords: Harmonics, passive filter, power factor, power quality.
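A minimal sizing sketch for a single tuned passive filter is given below, using the common textbook relations (capacitance chosen approximately from the reactive-power requirement, inductance from the tuned harmonic, resistance from the quality factor). It is not the constrained-optimization formulation of the paper, and all numbers are illustrative assumptions.

```python
import math

V = 400.0           # voltage at the PCC (V), illustrative
f = 50.0            # fundamental frequency (Hz)
Qc = 50e3           # reactive power to be supplied by the filter (var), illustrative
h = 5               # harmonic order the filter is tuned to
quality = 30.0      # quality factor of the tuning reactor

w = 2 * math.pi * f
C = Qc / (V**2 * w)                    # capacitance from the (approximate) reactive-power need
L = 1.0 / ((h * w) ** 2 * C)           # inductance so that series resonance falls at h * f
X_n = math.sqrt(L / C)                 # characteristic reactance at the tuned frequency
R = X_n / quality                      # series resistance from the quality factor

print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```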
78 Efficient Real-time Remote Data Propagation Mechanism for a Component-Based Approach to Distributed Manufacturing
Authors: V. Barot, S. McLeod, R. Harrison, A. A. West
Abstract:
Manufacturing industries face a crucial change as products and processes are required to be easily and efficiently reconfigurable and reusable. In order to stay competitive and flexible, the situation also demands that enterprises be distributed globally, which requires the implementation of efficient communication strategies. A prototype system called the "Broadcaster" has been developed under the assumption that the control environment description has been engineered using the component-based system paradigm. This prototype distributes information to a number of globally distributed partners via an adoption of the circular-based data processing mechanism. The work highlighted in this paper includes the implementation of this mechanism in the domain of the manufacturing industry. The proposed solution enables real-time remote propagation of machine information to a number of distributed supply chain client resources, such as an HMI, VRML-based 3D views and remote client instances, regardless of their distribution nature and/or their mechanisms. This approach is presented together with a set of evaluation results. The authors' main concentration surrounds the reliability and the performance metrics of the adopted approach. Performance evaluation is carried out in terms of the response times taken to process the data in this domain, compared with an alternative data processing implementation such as the linear queue mechanism. Based on the evaluation results obtained, the authors justify the benefits achieved from the proposed implementation and highlight further research work to be carried out.
Keywords: Broadcaster, circular buffer, Component-based, distributed manufacturing, remote data propagation.
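A toy Python sketch of the circular-buffer data processing idea described in the abstract above is shown here; the class name, record fields and sizes are invented, and the actual Broadcaster implementation is not available, so this only illustrates the mechanism of overwriting the oldest machine-state record once the buffer is full.

```python
class CircularBuffer:
    """Fixed-size ring buffer: the newest record overwrites the oldest once full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = [None] * capacity
        self.head = 0        # index of the next write
        self.count = 0

    def put(self, record):
        self.items[self.head] = record
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def snapshot(self):
        """Return records from oldest to newest, e.g. for pushing to remote clients."""
        start = (self.head - self.count) % self.capacity
        return [self.items[(start + i) % self.capacity] for i in range(self.count)]

# Machine state updates arriving faster than remote clients consume them
buf = CircularBuffer(capacity=4)
for i in range(6):
    buf.put({"cycle": i, "state": "running"})
print(buf.snapshot())   # only the 4 most recent records are retained
```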
77 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given another predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of the x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
76 Methane versus Carbon Dioxide: Mitigation Prospects
Authors: Alexander J. Severinsky, Allen L. Sessoms
Abstract:
Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This is a reflection of a 100-year time horizon for all greenhouse gases that became a norm. The 100-year time horizon is much too long – and yet, almost all mitigation efforts, including those set in the near-term frame of within 30 years, are still geared toward it. In this paper, we show that for a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. In our analysis, we use the radiative forcing of greenhouse gases in the atmosphere, because they directly affect the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, such forcing would be ~2.8 W/m2 and ~3.1 W/m2 respectively. There is a substantial spread in the data for anthropogenic and natural methane (CH4) emissions, along with natural gas, (which is primarily CH4), leakages from industrial production to consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, and assume an effective immediate reduction by 80%. Such action may serve to reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 in the case of the minimum effect that can be expected, and to ~2.15 W/m2 in the case of the maximum effort to reduce methane leakages. Under the BAU, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. We assume a linear reduction of 50% in anthropogenic emission over the course of the next 30 years, which would reduce the radiative forcing of CO2 from ~3.1 W/m2 to ~2.9 W/m2. In the case of "net zero," the other 50% of only anthropogenic CO2 emissions reduction would be limited to being either from sources of emissions or directly from the atmosphere. In this instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages of ~2.15 W/m2, an additional reduction of radiative forcing of CO2 would be approximately 2.7 -2.15 = 0.55 W/m2. In total, one would need to remove ~660 GT of CO2 from the atmosphere in order to match the maximum reduction of current methane leakages, and ~270 GT of CO2 from emitting sources, to reach "negative emissions". This amounts to over 900 GT of CO2.
Keywords: Methane Leakages, Methane Radiative Forcing, Methane Mitigation, Methane Net Zero.
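The bookkeeping in the abstract above can be reproduced with simple arithmetic; the figures below are taken directly from the text and are only restated, not recomputed from emission inventories.

```python
# Radiative-forcing figures (W/m^2) quoted in the abstract
rf_ch4_2050_bau = 2.8
rf_ch4_2050_max_leak_fix = 2.15   # maximum effort on methane leakages
rf_co2_2050_bau = 3.1
rf_co2_2050_net_zero = 2.7        # 50% anthropogenic cut plus offsetting removals

# Extra CO2 forcing reduction needed to match the best methane-leakage scenario
extra_co2_rf_needed = rf_co2_2050_net_zero - rf_ch4_2050_max_leak_fix
print(f"additional CO2 forcing reduction needed: {extra_co2_rf_needed:.2f} W/m^2")

# The abstract equates this ~0.55 W/m^2 with removing ~660 GT CO2 from the atmosphere
# plus ~270 GT from emitting sources, i.e. over 900 GT in total.
print("total CO2 to remove (per abstract): ~", 660 + 270, "GT")
```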
75 Assessing the Suitability of South African Waste Foundry Sand as an Additive in Clay Masonry Products
Authors: Nthabiseng Portia Mahumapelo, Andre van Niekerk, Ndabenhle Sosibo, Nirdesh Singh
Abstract:
The foundry industry generates large quantities of solid waste in the form of waste foundry sand. The ever-increasing quantities of this type of industrial waste put pressure on land-filling space and its proper management has become a global concern. The South African foundry industry is not different when it comes to this solid waste generation. Utilizing the foundry waste sand in other applications has become an attractive avenue to deal with this waste stream. In the present paper, an evaluation was done on the suitability of foundry waste sand as an additive in clay masonry products. Purchased clay was added to the foundry waste sand sample in a 50/50 ratio. The mixture was named FC sample. The FC sample was mixed with water in a pan mixer until the mixture was consistent and suitable for extrusion. The FC sample was extruded and cut into briquettes. Water absorption, shrinkage and modulus of rupture tests were conducted on the resultant briquettes. Foundry waste sand and FC samples were respectively characterized mineralogically using X-Ray Diffraction, and the major and trace elements were determined using Inductively Coupled Plasma Optical Emission Spectroscopy. Adding purchased clay to the foundry waste sand positively influenced the workability of the test sample. Another positive characteristic was the low linear shrinkage, which indicated that products manufactured from the FC sample would not be susceptible to cracking. The water absorption values were acceptable and the unfired and fired strength values of the briquette’s samples were acceptable. In conclusion, tests showed that foundry waste sand can be used as an additive in masonry clay bricks, provided it is blended with good quality clay.
Keywords: Foundry waste sand, masonry clay bricks, modulus of rupture, shrinkage.
74 Sustainability Assessment of a Deconstructed Residential House
Authors: Atiq U. Zaman, Juliet Arnott
Abstract:
This paper analyses the various benefits and barriers of residential deconstruction in the context of environmental performance and the circular economy, based on a case study project in Christchurch, New Zealand. The case study project, "Whole House Deconstruction", aimed, firstly, to harvest materials from a residential house; secondly, to produce new products using the recovered materials; and thirdly, to organize an exhibition for the local public to promote awareness of resource conservation and sustainable deconstruction practices. Through a systematic deconstruction process, the project recovered around 12 tonnes of various construction materials, most of which would otherwise have been disposed of to landfill in the traditional demolition approach. It is estimated that the deconstruction of a similar residential house could potentially prevent around 27,029 kg of carbon emissions to the atmosphere by recovering and reusing the building materials. In addition, the project involved local designers to produce 400 artefacts using the recovered materials and to exhibit them to accelerate public awareness. The findings from this study suggest that the deconstruction project has significant environmental benefits, as well as social benefits, by involving the local community and unemployed youth as part of their professional skills development opportunities. However, the project faced a number of economic and institutional challenges. The study concludes that, with proper economic models and appropriate institutional support, a significant amount of construction and demolition waste can be reduced through a systematic deconstruction process. Traditionally, the greatest benefits from such projects are often ignored and remain unreported to wider audiences, as most of the external and environmental costs are not considered in the traditional linear economy.
Keywords: Circular economy, construction and demolition waste, resource recovery, systematic deconstruction, sustainable waste management.
73 Synthesis of Highly Sensitive Molecular Imprinted Sensor for Selective Determination of Doxycycline in Honey Samples
Authors: Nadia El Alami El Hassani, Soukaina Motia, Benachir Bouchikhi, Nezha El Bari
Abstract:
Doxycycline (DXy) is a cycline antibiotic, most frequently prescribed to treat bacterial infections in veterinary medicine. However, its broad antimicrobial activity and low cost lead to intensive use, which can seriously affect human health. Therefore, its spread in food products has to be monitored. The scope of this work was to synthesize a sensitive and very selective molecularly imprinted polymer (MIP) for DXy detection in honey samples. Firstly, the synthesis of this biosensor was performed by casting a layer of carboxylated polyvinyl chloride (PVC-COOH) on the working surface of a gold screen-printed electrode (Au-SPE) in order to bind the analyte covalently under mild conditions. Secondly, DXy as a template molecule was bound to the activated carboxylic groups, and the formation of the MIP was performed with a biocompatible polymer by means of a polyacrylamide matrix. DXy was then detected by differential pulse voltammetry (DPV) measurements. A non-imprinted polymer (NIP), prepared under the same conditions but without the use of the template molecule, was also studied. We noticed that the elaborated biosensor exhibits high sensitivity and a linear behavior between the generated current and the logarithm of the DXy concentration from 0.1 pg.mL−1 to 1000 pg.mL−1. This technique was successfully applied to determine DXy residues in honey samples, with a limit of detection (LOD) of 0.1 pg.mL−1 and an excellent selectivity when compared with the results for oxytetracycline (OXy) as an analogous interfering compound. The proposed method is cheap, sensitive, selective and simple, and was applied successfully to detect DXy in honey with recoveries of 87% and 95%. Considering these advantages, this system provides a further perspective for food quality control in industrial fields.
Keywords: Electrochemical sensor, molecular imprinted polymer, doxycycline, food control.
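The calibration behaviour reported in the abstract above (a linear DPV current response against the logarithm of DXy concentration from 0.1 to 1000 pg/mL) would typically be exploited as in the rough sketch below; the peak currents and the unknown-sample value are invented placeholders, not the measured data, and the reported LOD comes from the paper rather than this calculation.

```python
import numpy as np

# Hypothetical DPV peak-current responses (uA) at the calibration concentrations (pg/mL)
conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])
current = np.array([2.1, 4.0, 6.2, 8.1, 10.0])

# Linear calibration: i = slope * log10(C) + intercept
slope, intercept = np.polyfit(np.log10(conc), current, 1)
print(f"slope = {slope:.2f} uA/decade, intercept = {intercept:.2f} uA")

# Back-calculate the DXy concentration of an unknown honey extract from its peak current
i_unknown = 7.0
c_unknown = 10 ** ((i_unknown - intercept) / slope)
print(f"estimated DXy concentration: {c_unknown:.1f} pg/mL")
```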
72 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. The nuclear fuel performance code may involve tolerance levels determined by traditional deterministic models, which produce acceptable results for burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment that is known as the Fuchs-Hansen model. The point kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of the computational simulation and the experimental results was acceptable. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which exhibits the same behavior as during a nuclear accident. The propagation of uncertainty utilizes the Wilks formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
Keywords: Logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation.
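A bare-bones sketch of the multivariate logistic-regression step described in the abstract above is shown here; the five predictors follow the list in the abstract, but the data are randomly generated stand-ins rather than the reactivity-initiated-accident test results, and the coefficient values carry no physical meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 200

# Stand-in predictors: burnup (GWd/MTU), peak power, pulse width (ms),
# oxide layer thickness (um), cladding type (0/1)
X = np.column_stack([
    rng.uniform(10, 75, n),
    rng.uniform(50, 200, n),
    rng.uniform(5, 80, n),
    rng.uniform(5, 100, n),
    rng.integers(0, 2, n),
]).astype(float)

# Stand-in failure labels loosely tied to burnup and peak power
logit = -8 + 0.05 * X[:, 0] + 0.03 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", clf.coef_.round(3))
print("P(failure) for one hypothetical rod:",
      clf.predict_proba([[70, 180, 10, 60, 1]])[0, 1].round(3))
```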
71 The Loess Regression Relationship Between Age and BMI for both Sydney World Masters Games Athletes and the Australian National Population
Authors: Joe Walsh, Mike Climstein, Ian Timothy Heazlewood, Stephen Burke, Jyrki Kettunen, Kent Adams, Mark DeBeliso
Abstract:
Thousands of masters athletes participate quadrennially in the World Masters Games (WMG), yet this cohort of athletes remains proportionately under-investigated. In the context of a growing global obesity pandemic and the benefits of physical activity across the lifespan, the BMI trends for this unique population were of particular interest. The nexus between health, physical activity and aging is complex and has raised much interest in recent times due to the realization that a multifaceted approach is necessary in order to counteract the obesity pandemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of the many factors influencing this relationship. BMI was derived using data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = 9.7) competing at the Sydney World Masters Games (2009). Using linear and loess regression, it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in the sample. This reversal was repeated for both the male-only and female-only subsets of the sample, indicating the possibility of improved prevalence of BMI with increasing age for both the sample as a whole and these individual sub-groups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes, when compared to the general population, implies either improved levels of this index of health with aging due to adherence to sport, or possibly that a reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages.
Keywords: Aging, masters athlete, Quetelet Index, sport
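The loess step mentioned in the abstract above can be reproduced in outline with the LOWESS smoother from statsmodels; the age and BMI values below are synthetic stand-ins for the 6,071 athletes, used only to show the call pattern, and the slight downward trend is assumed for illustration.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
age = rng.uniform(25, 91, 500)
# Synthetic BMI that decreases slightly with age, mimicking the reported reversal
bmi = 27.0 - 0.02 * (age - 25) + rng.normal(0, 2.0, 500)

# Locally weighted regression of BMI on age (frac controls the smoothing span)
smoothed = lowess(endog=bmi, exog=age, frac=0.4)
print(smoothed[:5])   # columns: sorted age, fitted BMI
```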
70 Layer-by-Layer Deposition of Poly (Ethylene Imine) Nanolayers on Polypropylene Nonwoven Fabric. Electrostatic and Thermal Properties
Authors: Dawid Stawski, Silviya Halacheva, Dorota Zielińska
Abstract:
The surface properties of many materials can be readily and predictably modified by the controlled deposition of thin layers containing appropriate functional groups and this research area is now a subject of widespread interest. The layer-by-layer (lbl) method involves depositing oppositely charged layers of polyelectrolytes onto the substrate material which are stabilized due to strong electrostatic forces between adjacent layers. This type of modification affords products that combine the properties of the original material with the superficial parameters of the new external layers. Through an appropriate selection of the deposited layers, the surface properties can be precisely controlled and readily adjusted in order to meet the requirements of the intended application. In the presented paper a variety of anionic (poly(acrylic acid)) and cationic (linear poly(ethylene imine), polymers were successfully deposited onto the polypropylene nonwoven using the lbl technique. The chemical structure of the surface before and after modification was confirmed by reflectance FTIR spectroscopy, volumetric analysis and selective dyeing tests. As a direct result of this work, new materials with greatly improved properties have been produced. For example, following a modification process significant changes in the electrostatic activity of a range of novel nanocomposite materials were observed. The deposition of polyelectrolyte nanolayers was found to strongly accelerate the loss of electrostatically generated charges and to increase considerably the thermal resistance properties of the modified fabric (the difference in T50% is over 20oC). From our results, a clear relationship between the type of polyelectrolyte layer deposited onto the flat fabric surface and the properties of the modified fabric was identified.
Keywords: Layer-by-layer technique, polypropylene nonwoven, surface modification, surface properties.
69 Necessary Condition to Utilize Adaptive Control in Wind Turbine Systems to Improve Power System Stability
Authors: Javad Taherahmadi, Mohammad Jafarian, Mohammad Naser Asefi
Abstract:
The global capacity of wind power has increased dramatically in recent years. Therefore, improving the technology of wind turbines to take advantage of this enormous potential in the power grid could be an interesting subject for scientists. The doubly-fed induction generator (DFIG) wind turbine is a popular system due to its many advantages, such as improved power quality, high energy efficiency and controllability. With an increase in wind power penetration in the network, and with regard to the flexible control of wind turbines, the use of wind turbine systems to improve the dynamic stability of power systems has been of significant importance for researchers. Subsynchronous oscillations are one of the important issues in the stability of power systems. Damping subsynchronous oscillations by using wind turbines has been studied in various research efforts, mainly by adding an auxiliary control loop to the control structure of the wind turbine. In most of these studies, the control loop is composed of linear blocks. In this paper, simple adaptive control is used for this purpose. In order to use an adaptive controller, the convergence of the controller should be verified. Since the adaptive control parameters tend to optimum values in order to obtain optimum control performance, using this controller will help the wind turbines to make a positive contribution to damping the network subsynchronous oscillations at different wind speeds and system operating points. In this paper, the application of simple adaptive control in DFIG wind turbine systems to improve the dynamic stability of power systems is studied and the essential condition for using this controller is considered. It is also shown that this controller has an insignificant effect on the dynamic stability of the wind turbine itself.
Keywords: Almost strictly positive real, doubly-fed induction generator, simple adaptive control, subsynchronous oscillations, wind turbine.
68 Investigation of the Properties of Epoxy Modified Binders Based on Epoxy Oligomer with Improved Deformation and Strength Properties
Authors: Hlaing Zaw Oo, N. Kostromina, V. Osipchik, T. Kravchenko, K. Yakovleva
Abstract:
The modification of ED-20 epoxy resin with vinyl-containing compounds is considered. It is shown that the introduction of vinyl-containing compounds into a composition based on epoxy resin ED-20 allows the technological and operational characteristics of the binder to be adjusted. For the improvement of the properties of the epoxy resin, the following modifiers were selected: polyvinylformalethyl, polyvinyl butyral, and a composition of linear and aromatic amines (Aramine) as a hardener. A wide range of hardeners for epoxy resins now exists, which allows the technological properties of compositions, as well as their thermophysical and strength indicators, to be varied. The nature of the Aramine-type hardener has a significant impact on the spatial parameters of the network, the glass transition temperature, and the strength characteristics. Epoxy composite materials based on ED-20 modified with polyvinyl butyral were obtained and investigated. It is shown that compositions of resins based on derivatives of polyvinyl butyral and ED-20 allow composite materials to be obtained with a better complex of deformation-strength, adhesion and thermal properties, better water resistance, frost resistance, chemical resistance, and impact strength. The magnitude of the effect depends on the chemical structure, temperature and curing time. In the range of concentrations where the composite synergy effect appears, the values of strength and stiffness significantly exceed the corresponding parameters of the individual components of the mixture. Polymer-polymer compositions form their own class of materials with diverse specific properties that ensure their competitive application. Coatings with high performance under cyclic loading have been obtained based on epoxy oligomers modified with vinyl-containing compounds.
Keywords: Epoxy resins, modification, vinyl-containing compounds, deformation and strength properties.
67 Chatter Stability Characterization of Full-Immersion End-Milling Using a Generalized Modified Map of the Full-Discretization Method, Part 1: Validation of Results and Study of Stability Lobes by Numerical Simulation
Authors: Chigbogu G. Ozoegwu, Sam N. Omenyi
Abstract:
The objective in this work is to generate and discuss the stability results of the fully-immersed end-milling process with the parameters: tool mass m = 0.0431 kg, tool natural frequency ωn = 5700 rad s^-1, damping factor ξ = 0.002 and workpiece cutting coefficient C = 3.5x10^7 N m^-7/4. Different numbers of teeth are considered for the end-milling. Both 1-DOF and 2-DOF chatter models of the system are generated on the basis of a non-linear force law. Chatter stability analysis is carried out using a modified form (generalized for both 1-DOF and 2-DOF models) of the recently developed method called full-discretization. The full-immersion three-tooth end-milling, together with higher-toothed end-milling processes, has secondary Hopf bifurcation lobes (SHBLs) that exhibit one turning (minimum) point each. Each such SHBL is demarcated by its minimum point into two portions: (i) the Lower Spindle Speed Portion (LSSP), in which bifurcations occur in the right half of the unit circle centred at the origin of the complex plane, and (ii) the Higher Spindle Speed Portion (HSSP), in which bifurcations occur in the left half of the unit circle. Comments are made regarding why bifurcation lobes should generally get bigger and more visible with increasing spindle speed, and why flip bifurcation lobes (FBLs) could be invisible in the low-speed stability chart but visible in the high-speed stability chart of the fully-immersed three-tooth miller.
Keywords: Chatter, flip bifurcation, modified full-discretization map stability lobe, secondary Hopf bifurcation.
66 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)
Authors: Mingren Shi, Michael Renton
Abstract:
There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to include the mortalities of the different genotypes explicitly in the model. In this paper we describe how we used two generalized linear models (GLM), the probit and logistic models, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time consuming and computationally demanding.
Keywords: mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation
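The estimation idea described in the abstract above, fitting a probit dose-mortality model with a generalized (Moore-Penrose) inverse instead of maximum likelihood, can be sketched as follows; the concentration-time-mortality triples are invented, and the exact form of the linear predictor used in the paper may differ from the one assumed here.

```python
import numpy as np
from scipy.stats import norm

# Invented bioassay records: phosphine concentration (mg/L), exposure time (h), observed mortality
conc = np.array([0.01, 0.05, 0.10, 0.25, 0.50, 1.00])
time = np.array([24.0, 48.0, 72.0, 24.0, 48.0, 72.0])
mortality = np.array([0.02, 0.20, 0.55, 0.50, 0.90, 0.99])

# Probit transform of the observed proportions (clipped away from 0 and 1)
p = np.clip(mortality, 1e-4, 1 - 1e-4)
z = norm.ppf(p)

# Assumed linear predictor beta0 + beta1*ln(C) + beta2*ln(t), solved with the pseudoinverse
X = np.column_stack([np.ones_like(conc), np.log(conc), np.log(time)])
beta = np.linalg.pinv(X) @ z
print("probit coefficients:", beta.round(3))

# Predicted mortality for a new concentration-time combination
x_new = np.array([1.0, np.log(0.3), np.log(48.0)])
print("predicted mortality:", norm.cdf(x_new @ beta).round(3))
```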
65 Adoption and Use of an Electronic Voting System in Ghana
Authors: Isaac Kofi Mensah
Abstract:
The manual system of voting has been the most widely used system of electing representatives around the globe, particularly in Africa. Due to the numerous known problems and challenges associated with the manual system of voting, many countries are migrating to electronic voting as a suitable and credible means of electing representatives, in preference to the manual paper-based system. This research paper therefore investigated the factors influencing the adoption and use of an electronic voting system in Ghana. A total of 400 Questionnaire Instruments (QI) were administered to potential respondents in Ghana, of which 387 responded, representing a response rate of 96.75%. The Technology Acceptance Model was used as the theoretical framework for the study. The research model was tested using a simple linear regression analysis with SPSS. A little over 71.1% of the respondents recommended that the Electoral Commission (EC) of Ghana adopt an electronic voting system in the conduct of public elections in Ghana. The results indicated that all six predictors, namely perceived usefulness (PU), perceived ease of use (PEOU), perceived free and fair elections (PFFF), perceived credible elections (PCE), perceived system integrity (PSI) and citizens' trust in the election management body (CTEM), were positively significant in predicting the readiness of citizens to adopt and use an electronic voting system in Ghana. However, jointly, the hypotheses tested revealed that, apart from Perceived Free and Fair Elections and Perceived Credible and Transparent Elections, all the other factors, such as PU, Perceived System Integrity and Security and Citizen Trust in the Election Management Body, were found to be significant predictors of the willingness of Ghanaians to use an electronic voting system. The six factors considered in this study jointly account for about 53.1% of the variance determining the readiness to adopt and use an electronic voting system in Ghana. The implications of this research finding for elections in Ghana are discussed.
Keywords: Credible elections, democracy, Election Management Body (EMB), electronic voting, Ghana, Technology Acceptance Model (TAM).
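The joint test reported in the abstract above (six predictors explaining about 53.1% of the willingness to adopt e-voting) corresponds to an ordinary multiple linear regression; the snippet below shows the shape of that analysis on simulated Likert-style responses, since the survey data themselves are not available and the coefficients here are meaningless stand-ins.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 387   # number of returned questionnaires in the study

# Simulated 5-point Likert scores for the six TAM-based predictors
predictors = ["PU", "PEOU", "PFFF", "PCE", "PSI", "CTEM"]
X = rng.integers(1, 6, size=(n, len(predictors))).astype(float)
# Simulated willingness-to-use score loosely driven by PU, PSI and CTEM
y = 0.4 * X[:, 0] + 0.3 * X[:, 4] + 0.3 * X[:, 5] + rng.normal(0, 1.0, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared)        # analogue of the reported 53.1% variance explained
print(model.params.round(3)) # intercept followed by the six predictor coefficients
```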
64 Auto-Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Water resource systems modeling has long been a challenge for human beings. As methodological innovation evolves alongside computer science, researchers are likely to confront ever more complex and larger water resources systems, due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme has been applied to Gilan's large-scale water resource model using mathematical programming. The calibration of the water resource model is developed in order to tune the unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with the model results. The calibration results are reasonable, presenting a rational insight into the system. Subsequently, the optimized unknown parameters were used in a basin-scale linear optimization model with the ability to evaluate the system's performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to a minimum water shortage in the reduced-inflow scenario.
Keywords: Auto-calibration, Gilan, Large-Scale Water Resources, Simulation.
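The basin-scale linear optimization mentioned in the abstract above can be illustrated with a very small reservoir-operation LP: minimize total shortage subject to storage continuity and capacity limits. The three-period inflows, demands and bounds are invented numbers, not Sefidroud data, and the real model is far larger.

```python
import numpy as np
from scipy.optimize import linprog

inflow = np.array([120.0, 60.0, 40.0])    # MCM per period (invented)
demand = np.array([80.0, 90.0, 70.0])     # MCM per period (invented)
s0, s_max, r_max = 100.0, 200.0, 150.0    # initial storage, storage cap, release cap

# Decision variables per period t: release r_t, shortage sh_t, end-of-period storage s_t
T = 3
c = np.concatenate([np.zeros(T), np.ones(T), np.zeros(T)])   # minimize total shortage

# Storage continuity: s_t - s_{t-1} + r_t = inflow_t  (with s_{-1} = s0)
A_eq = np.zeros((T, 3 * T))
b_eq = np.zeros(T)
for t in range(T):
    A_eq[t, t] = 1.0                  # r_t
    A_eq[t, 2 * T + t] = 1.0          # s_t
    if t > 0:
        A_eq[t, 2 * T + t - 1] = -1.0  # -s_{t-1}
    b_eq[t] = inflow[t] + (s0 if t == 0 else 0.0)

# Shortage definition: sh_t >= demand_t - r_t  ->  -r_t - sh_t <= -demand_t
A_ub = np.zeros((T, 3 * T))
for t in range(T):
    A_ub[t, t] = -1.0
    A_ub[t, T + t] = -1.0
b_ub = -demand

bounds = [(0, r_max)] * T + [(0, None)] * T + [(0, s_max)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("total shortage:", res.fun, "releases:", res.x[:T].round(1))
```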
63 'Performance-Based' Seismic Methodology and Its Application in Seismic Design of Reinforced Concrete Structures
Authors: Jelena R. Pejović, Nina N. Serdar
Abstract:
This paper presents an analysis of the "Performance-Based" seismic design method, in order to overcome the perceived disadvantages and limitations of the existing force-based seismic design approach used in engineering practice. Bearing in mind the specificity of the earthquake as a load, and the fact that the seismic resistance of structures depends solely on their behaviour in the nonlinear range, the traditional seismic design approach based on force and linear analysis is not adequate. The "Performance-Based" seismic design method is based on nonlinear analysis and can be used in everyday engineering practice. This paper presents the application of this method to an eight-story reinforced concrete building with a combined structural system (a reinforced concrete frame system in one direction and a reinforced concrete ductile wall system in the other direction). The nonlinear time-history analysis is performed on a spatial model of the structure using the program Perform 3D, where the structure is exposed to forty real earthquake records. For the considered building, a large number of results was obtained. It was concluded that, using this method, the structural behavior under earthquakes can be evaluated with a high degree of reliability. Significant differences were obtained in the response of the structure to the various earthquake records. The analysis also showed that the frame structural system did not perform well under earthquake records on soils such as sand and gravel, while the ductile wall system showed satisfactory behavior on different types of soil.
Keywords: Ductile wall, frame system, nonlinear time-history analysis, performance-based methodology, RC building.