Search results for: circuit models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7433

4913 The Electric Car Wheel Hub Motor Work Analysis with the Use of 2D FEM Electromagnetic Method and 3D CFD Thermal Simulations

Authors: Piotr Dukalski, Bartlomiej Bedkowski, Tomasz Jarek, Tomasz Wolnik

Abstract:

The article is concerned with the design of an electric in-wheel hub motor installed in an electric car with two-wheel drive. It presents the construction of the motor on a 3D cross-section model. Work simulation of the motor (applied to a Fiat Panda car) under selected driving conditions, such as driving on a road with a slope of 20%, driving at maximum speed, and maximum acceleration of the car from 0 to 100 km/h, is considered by the authors. The demand for drive power, taking into account the resistance to movement, was determined for the selected driving conditions. The parameters of the motor operation and the power losses in its individual elements, calculated using the 2D FEM method, are presented for the selected driving parameters. The calculated power losses are used in 3D models for thermal calculations using the CFD method. The detailed construction of the thermal models, with material data, boundary conditions, and the losses calculated using the 2D FEM method, is presented in the article. The article presents and describes the calculated temperature distributions in individual motor components such as the winding, permanent magnets, magnetic core, body, and cooling system components. The losses generated in individual motor components and their impact on the limitation of its operating parameters are described by the authors. Attention is paid to the losses generated in the permanent magnets, which are a source of heat that is difficult to remove from inside the motor. The presented calculation results show how the individual motor power losses, generated under different load conditions while driving, affect its thermal state.
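
The abstract does not reproduce the authors' load equations; the demand for drive power under a given driving condition is conventionally built from the road-load forces, along the lines of the generic sketch below (all symbols are standard vehicle-dynamics quantities, not values from the paper):

```latex
% Generic road-load power demand (not the paper's exact formulation):
% c_r: rolling resistance coeff., m: vehicle mass, alpha: road slope angle,
% rho: air density, c_d: drag coeff., A: frontal area, v: speed, a: acceleration
\begin{aligned}
F_{\mathrm{res}} &= c_r m g \cos\alpha + \tfrac{1}{2}\rho c_d A v^{2} + m g \sin\alpha + m a
  \qquad (\text{20\% slope} \Rightarrow \tan\alpha = 0.2),\\
P_{\mathrm{demand}} &= \frac{F_{\mathrm{res}}\, v}{\eta_{\mathrm{drive}}}.
\end{aligned}
```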

Keywords: electric car, electric drive, electric motor, thermal calculations, wheel hub motor

Procedia PDF Downloads 174
4912 An Experimental (Wind Tunnel) and Numerical (CFD) Study on the Flow over Hills

Authors: Tanit Daniel Jodar Vecina, Adriane Prisco Petry

Abstract:

The shape of the wind velocity profile changes according to local features of terrain shape and roughness, which are the parameters responsible for defining the Atmospheric Boundary Layer (ABL) profile. Air flow characteristics over and around landforms, such as hills, are of considerable importance for applications related to wind farm and turbine engineering. The air flow is accelerated on top of hills, which can represent a decisive factor for wind turbine placement choices. The present work focuses on the study of ABL behavior as a function of the slope and surface roughness of hill-shaped landforms, using Computational Fluid Dynamics (CFD) to build wind velocity and turbulent intensity profiles. The Reynolds-Averaged Navier-Stokes (RANS) equations are closed using the SST k-ω turbulence model; numerical results are compared to experimental data measured in a wind tunnel over scale models of the hills under consideration. Eight hill models with slopes varying from 25° to 68° were tested for two types of terrain categories in 2D and 3D, and two analytical codes were used to represent the inlet velocity profiles. Numerical results for the velocity profiles show differences under 4% when compared to their respective experimental data. Turbulent intensity profiles show maximum differences of around 7% when compared to experimental data; this can be explained by the fact that it was not possible to impose inlet turbulent intensity profiles in the simulations. Alternatively, constant values based on the averages of the turbulent intensity at the wind tunnel inlet were used.
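
The abstract does not specify which analytical inlet profiles were used; a minimal sketch of one common choice, the logarithmic-law ABL profile (with assumed friction velocity and roughness length, not values from the paper), is:

```python
import numpy as np

# Log-law atmospheric boundary layer (ABL) inlet profile -- a common analytical
# choice for RANS inflow. u_star and z0 below are illustrative assumptions.
KAPPA = 0.41          # von Karman constant
u_star = 0.3          # friction velocity [m/s] (assumed)
z0 = 0.01             # aerodynamic roughness length [m] (assumed terrain category)

def abl_velocity(z):
    """Mean streamwise velocity at height z [m] above ground."""
    return (u_star / KAPPA) * np.log((z + z0) / z0)

heights = np.linspace(0.1, 50.0, 20)
print(np.round(abl_velocity(heights), 2))
```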

Keywords: atmospheric boundary layer, computational fluid dynamics (CFD), numerical modeling, wind tunnel

Procedia PDF Downloads 380
4911 A Tool for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation of an institutional risk profile for the endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the set-up of risk factors with just the values that are most important for a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level of file formats. Our goal is to make use of a domain expert knowledge base aggregated from a digital preservation survey in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors for a required dimension. The proposed methods improve the visibility of risk factor information and the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and automatically aggregated file format metadata from linked open data sources. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
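
The abstract does not detail the fuzzy configuration; a minimal sketch of the general idea, aggregating per-format risk factors into the multidimensional vector mentioned above and a scalar endangerment level (all factor names and weights are invented for illustration), might look like:

```python
import numpy as np

# Hypothetical risk factors for one file format, each normalised to [0, 1].
# Names and weights are illustrative assumptions, not the paper's configuration.
factors = {"software_support": 0.2, "open_specification": 0.1,
           "popularity_decline": 0.8, "migration_complexity": 0.6}
weights = {"software_support": 0.3, "open_specification": 0.2,
           "popularity_decline": 0.3, "migration_complexity": 0.2}

risk_vector = np.array(list(factors.values()))        # per-dimension view
score = sum(weights[k] * factors[k] for k in factors) # scalar endangerment

# A coarse fuzzy-style mapping of the score onto endangerment levels.
level = "low" if score < 0.33 else "medium" if score < 0.66 else "high"
print(risk_vector, round(score, 2), level)
```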

Keywords: digital information management, file format, endangerment analysis, fuzzy models

Procedia PDF Downloads 404
4910 Luminescent and Conductive Cathode Buffer Layer for Enhanced Power Conversion Efficiency of Bulk-Heterojunction Solar Cells

Authors: Swati Bishnoi, D. Haranath, Vinay Gupta

Abstract:

In this work, we demonstrate that the power conversion efficiency (PCE) of organic solar cells (OSCs) can be improved significantly by using ZnO doped with aluminum (Al) and europium (Eu) as the cathode buffer layer (CBL). The ZnO:Al,Eu nanoparticle layer has broadband absorption in the ultraviolet (300-400 nm) region. The Al doping contributes to the enhancement in conductivity, whereas the Eu doping significantly improves emission in the visible region. Moreover, this emission overlaps significantly with the absorption range of the polymer poly[N-9′-heptadecanyl-2,7-carbazole-alt-5,5-(4′,7′-di-2-thienyl-2′,1′,3′-benzothiadiazole)] (PCDTBT) and results in enhanced absorption by the active layer and hence a high photocurrent. A power conversion efficiency of 6.8% has been obtained for the ZnO:Al,Eu CBL, as compared to 5.9% for pristine ZnO, in the inverted device configuration ITO/CBL/active layer/MoOx/Al. The active layer comprises a blend of the PCDTBT donor and the [6,6]-phenyl C71 butyric acid methyl ester (PC71BM) acceptor. In the reference device, pristine ZnO has been used as the CBL, whereas in the other, ZnO:Al,Eu has been used. The role of the luminescent CBL is to down-shift the UV light into the visible range, which overlaps with the absorption of the PCDTBT polymer, resulting in an energy transfer from ZnO:Al,Eu to the PCDTBT polymer and enhanced absorption by the active layer, as revealed by transient spectroscopy. This enhancement resulted in an increase in the short-circuit current, which contributes to the increased PCE of the device employing the ZnO:Al,Eu CBL. Thus, the luminescent ZnO:Al,Eu nanoparticle CBL has great potential in organic solar cells.
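
For orientation, the PCE figures quoted above follow from the standard solar-cell relation (not stated explicitly in the abstract); the enhancement reported here enters through the short-circuit current density:

```latex
% J_sc: short-circuit current density, V_oc: open-circuit voltage,
% FF: fill factor, P_in: incident power density (typically 100 mW/cm^2, AM1.5G)
\mathrm{PCE} = \frac{J_{sc}\, V_{oc}\, \mathrm{FF}}{P_{in}} \times 100\%
```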

Keywords: cathode buffer layer, energy transfer, organic solar cell, power conversion efficiency

Procedia PDF Downloads 256
4909 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review

Authors: Faisal Muhibuddin, Ani Dijah Rahajoe

Abstract:

This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
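
As a concrete illustration of two of the techniques named above, the following sketch trains Naïve Bayes and K-nearest-neighbour classifiers on synthetic stand-in features (the data and feature semantics are invented, not drawn from the reviewed studies):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for engineered crime features (e.g., hour of day,
# district code, prior incidents) and category labels -- illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = rng.integers(0, 3, size=500)          # 3 hypothetical crime categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (GaussianNB(), KNeighborsClassifier(n_neighbors=5)):
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, round(acc, 3))
```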

Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbor, crime, data analysis, systematic literature review

Procedia PDF Downloads 65
4908 StockTwits Sentiment Analysis on Stock Price Prediction

Authors: Min Chen, Rubi Gupta

Abstract:

Understanding and predicting stock market movements is a challenging problem. It is believed that stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share their discussions of and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bag-of-words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify the tweets' sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. The experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a period of nine months demonstrate the effectiveness of our study in improving the prediction accuracy.
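
A minimal end-to-end sketch of the pipeline described above (TF-IDF featurization, bullish/bearish classification, and sentiment-price correlation), using toy data rather than actual StockTwits tweets:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy tweets and labels (1 = bullish, 0 = bearish) -- illustrative only.
tweets = ["AAPL breaking out, buying more", "weak guidance, selling my shares",
          "strong earnings beat today", "this dip keeps dipping, bearish"]
labels = [1, 0, 1, 0]

# TF-IDF featurization with unigrams and bigrams, then a linear classifier.
vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(tweets)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["great quarter, very bullish"])))

# Correlate aggregated daily sentiment with daily price moves (synthetic data).
daily_sentiment = np.array([0.6, -0.2, 0.4, 0.1, -0.5])
daily_return = np.array([0.01, -0.003, 0.008, 0.0, -0.012])
r, p = pearsonr(daily_sentiment, daily_return)
print(round(r, 3), round(p, 3))
```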

Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing

Procedia PDF Downloads 156
4907 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning

Authors: Mayowa A. Sofowora, Seraphin D. Eyono Obono

Abstract:

The use of information and communication technologies (ICTs) such as computers, mobile phones, and the internet is becoming prevalent in today's world, facilitating access to a vast amount of data, services, and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by the problem of low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when damaged or lost. This problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives unfolding this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objective is achieved using content analysis in an extensive literature survey, and the second type is achieved through a survey of high school teachers from the iLembe and uMgungundlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations, after checking the reliability and validity of the questionnaire. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on one hand, and their perceptions on the reliability of cellphones on the other hand, as suggested by existing literature, except that attribution identities are considered in this study under three angles: intention, knowledge and ability, and action. The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners' cellphone usage intentions and actual use.

Keywords: attribution, cellphones, e-learning, reliability

Procedia PDF Downloads 402
4906 Findings on Modelling Carbon Dioxide Concentration Scenarios in the Nairobi Metropolitan Region before and during COVID-19

Authors: John Okanda Okwaro

Abstract:

Carbon (IV) oxide (CO₂) is emitted mainly from fossil fuel combustion and industrial production. The CO₂ sources of interest in the study area are mining activities, transport systems, and industrial processes. This study is aimed at building models that will help in monitoring the emissions within the study area. Three scenarios were discussed, namely: a pessimistic scenario, a business-as-usual scenario, and an optimistic scenario. The results showed that there was a reduction in carbon dioxide concentration of approximately 50.5 ppm between March 2020 and January 2021, inclusive. This is mainly due to reduced human activities that led to decreased consumption of energy. Also, the CO₂ concentration trend follows the business-as-usual (BAU) scenario path. From the models, the pessimistic, business-as-usual, and optimistic scenarios give CO₂ concentrations of about 545.9 ppm, 408.1 ppm, and 360.1 ppm, respectively, on December 31st, 2021. This research helps paint a picture for policymakers of the relationship between energy sources and CO₂ emissions. Since the reduction in CO₂ emissions was due to the decreased use of fossil fuels as economic activity declined, if Kenya relies more on green energy than on fossil fuels in the post-COVID-19 period, there will be a further reduction in CO₂ emissions. That is, the CO₂ concentration trend is likely to follow the optimistic scenario path, hence a reduction in CO₂ concentration of about 48 ppm by the end of the year 2021. This research recommends investment in solar energy by energy-intensive companies, mine machinery and equipment maintenance, investment in electric vehicles, and doubling tree-planting efforts to achieve the 10% cover target.
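
The abstract does not specify the underlying model; one simple way such scenario projections are often produced, sketched here on synthetic data with assumed scenario multipliers, is trend extrapolation:

```python
import numpy as np

# Monthly CO2 concentrations [ppm] -- synthetic stand-in data; the study's
# actual series and model formulation are not given in the abstract.
months = np.arange(24)
co2 = 400 + 0.35 * months + np.random.default_rng(1).normal(0, 0.3, 24)

# Business-as-usual: extrapolate the fitted linear trend.
slope, intercept = np.polyfit(months, co2, 1)
horizon = 36                                   # months ahead
bau = intercept + slope * (months[-1] + horizon)

# Pessimistic / optimistic scenarios as assumed multipliers on the trend.
pessimistic = intercept + 1.5 * slope * (months[-1] + horizon)
optimistic = intercept + 0.5 * slope * (months[-1] + horizon)
print(round(pessimistic, 1), round(bau, 1), round(optimistic, 1))
```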

Keywords: forecasting, greenhouse gas, green energy, hierarchical data format

Procedia PDF Downloads 168
4905 Induced Pulsation Attack Against Kalman Filter Driven Brushless DC Motor Control System

Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap

Abstract:

Using modeling and simulation tools, we introduce a novel bias injection attack, named the 'Induced Pulsation Attack', which targets cyber-physical systems with a closed-loop-controlled Brushless DC (BLDC) motor and a Kalman filter in the feedback loop. The attack engages a linear function with a constant gradient to distort the coefficient of the injected bias, which falsifies the Kalman filter's estimates of the rotor's angular speed. As a result, this manipulation inside the control system causes periodic pulsations of both current and voltage in the circuit windings, in the form of an asymmetric sine wave of high magnitude. It is shown that by varying the gradient of the linear function, one can control both the frequency and the structure of the induced pulsations. It is also demonstrated that terminating the attack at any point leads to an additional compensating effort from the controller to restore the speed to its equilibrium value. This compensation effort produces an exponentially decaying wave, which we call the 'attack withdrawal syndrome' wave. The conditions for maximizing or minimizing the impact of the attack withdrawal syndrome are determined. Linking the termination of the attack to the end of a full period of the induced pulsation wave is shown to nullify the attack withdrawal syndrome wave, thereby improving the attack's covertness.
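
A minimal sketch of the attack mechanism described above, using a scalar Kalman filter and an injected measurement bias that grows with a constant gradient (all parameters are illustrative assumptions, not the paper's BLDC model):

```python
import numpy as np

# A scalar Kalman filter tracks rotor speed; during the attack window a bias
# with constant gradient g is added to its measurements, falsifying the
# estimate. After withdrawal the estimate decays back toward equilibrium.
rng = np.random.default_rng(0)
true_speed, q, r = 100.0, 0.01, 1.0       # true speed, process/measurement noise
x, p = 0.0, 1.0                           # state estimate and covariance
g = 0.05                                  # gradient of the injected ramp bias

estimates = []
for k in range(500):
    z = true_speed + rng.normal(0, np.sqrt(r))
    if 100 <= k < 400:                    # attack window
        z += g * (k - 100)                # linearly growing bias
    p += q                                # predict (constant-speed model)
    kg = p / (p + r)                      # Kalman gain
    x += kg * (z - x)                     # update -> falsified speed estimate
    p *= (1 - kg)
    estimates.append(x)

print(round(estimates[99], 1), round(estimates[399], 1), round(estimates[-1], 1))
```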

Keywords: cyber-attack, induced pulsation, bias injection, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, saw-function, cyber-physical system

Procedia PDF Downloads 71
4904 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth are becoming increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on climate change and water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region, using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture, and other hydrological variables of interest.
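
A minimal sketch of the statistical downscaling idea described above (a regression transfer function from GCM-scale predictors to a station-scale predictand, on synthetic data; real studies add predictor screening and bias correction):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: grid-scale predictors (e.g., MSLP, humidity, T850,
# winds) regressed against station-scale precipitation -- illustrative only.
rng = np.random.default_rng(0)
gcm_predictors = rng.normal(size=(240, 4))
station_precip = (gcm_predictors @ np.array([1.2, -0.5, 0.8, 0.3])
                  + rng.normal(0, 0.5, 240))

model = LinearRegression().fit(gcm_predictors, station_precip)

# Apply the fitted transfer function to future-scenario GCM output.
future_predictors = rng.normal(loc=0.3, size=(240, 4))   # assumed shifted climate
projected_precip = model.predict(future_predictors)
print(round(projected_precip.mean(), 2))
```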

Keywords: climate change, downscaling, GCM, RCM

Procedia PDF Downloads 406
4903 Improving Tower Grounding and Insulation Level vs. Line Surge Arresters for Protection of Subtransmission Lines

Authors: Navid Eghtedarpour, Mohammad Reza Hasani

Abstract:

Since renewable wind power plants are usually installed in mountain regions and on high ground, they are often prone to lightning strikes and their hazardous effects. Although a transmission line is protected using shield wires in order to prevent lightning surges from striking the phase conductors, back-flashover may still occur due to high tower-footing resistance. A combination of back-flashover corrective methods (tower-footing resistance reduction, insulation level improvement, and line arrester installation) is analyzed in this paper for reducing the back-flashover rate of a double-circuit 63 kV line in the southern region of Fars province. The line crosses a mountain region in some sections with a moderate keraunic level, whereas the tower-footing resistance is substantially high at some towers. Consequently, an exceptionally high back-flashover rate is recorded. A new method for insulation improvement is studied and employed in the current study. The method consists of using a composite-type creepage extender in the insulator string. The effectiveness of this method for improving the insulation of the string is evaluated through experimental tests. Simulation results, together with monitoring of one year of operation of the 63 kV line, show that due to the technical, practical, and economic restrictions of operated sub-transmission lines, a combination of corrective methods can provide an effective solution for the protection of transmission lines against lightning.

Keywords: lightning protection, BF rate, grounding system, insulation level, line surge arrester

Procedia PDF Downloads 130
4902 CFD Analysis of the Blood Flow in Left Coronary Bifurcation with Variable Angulation

Authors: Midiya Khademi, Ali Nikoo, Shabnam Rahimnezhad Baghche Jooghi

Abstract:

Cardiovascular diseases (CVDs) are the main cause of death globally. Most CVDs can be prevented by avoiding habitual risk factors. Separate from the habitual risk factors, there are some inherent factors in each individual that can increase the risk potential of CVDs. Vessel shape and geometry are influential factors, having a great impact on the blood flow and the hemodynamic behavior of the vessels. In the present study, the influence of the bifurcation angle on blood flow characteristics is studied. To approach this topic, the details of the bifurcation were simplified, and three models with angles of 30°, 45°, and 60° were created; the response of these models to steady and pulsatile flow was then studied using CFD analysis. In the simulations, in order to eliminate the influence of other geometrical factors, only the angle of the bifurcation was changed, and the other parameters remained constant throughout the research. In the steady-flow simulations, a uniform inlet velocity of 0.17 m/s was maintained, while in the dynamic simulations, a typical LAD flow waveform was imposed. The results show that the bifurcation angle has an influence on the maximum speed of the flow. Under steady-flow conditions, increasing the angle leads to a decrease in the maximum flow velocity. In the dynamic flow simulations, increasing the bifurcation angle leads to an increase in the maximum velocity. Since blood flow has pulsatile characteristics, using a uniform velocity during the simulations can lead to a discrepancy between the actual results and the calculated results.
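
The abstract does not give the LAD waveform used; a crude, purely illustrative pulsatile inlet with the same 0.17 m/s mean as the steady case could be prescribed as:

```python
import numpy as np

# Illustrative two-harmonic stand-in for a pulsatile LAD inlet waveform.
# Heart rate and harmonic amplitudes are assumptions, not the study's data.
HEART_RATE = 75 / 60.0            # beats per second (assumed)
MEAN_VELOCITY = 0.17              # m/s, matches the steady-flow inlet

def inlet_velocity(t):
    """Inlet velocity [m/s] at time t [s]."""
    w = 2 * np.pi * HEART_RATE
    return MEAN_VELOCITY * (1 + 0.6 * np.sin(w * t) + 0.25 * np.sin(2 * w * t))

t = np.linspace(0, 0.8, 9)        # one cardiac cycle, 9 sample points
print(np.round(inlet_velocity(t), 3))
```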

Keywords: coronary artery, cardiovascular disease, bifurcation, atherosclerosis, CFD, artery wall shear stress

Procedia PDF Downloads 164
4901 Voltage Stabilization of Hybrid PV and Battery Systems by Considering Temperature and Irradiance Changes in Standalone Operation

Authors: S. Jalilzadeh, S. M. Mohseni Bonab

Abstract:

Solar and battery energy storage systems are very useful for consumers who live in deprived areas and do not have access to electricity distribution networks. Nowadays, one of the problems of photovoltaic (PV) systems is the change of output power under temperature and irradiance variations, which directly affects the load connected to them. In this paper, considering the fact that the solar array output varies with changes in temperature and solar irradiance, a voltage stabilizer system for a load connected to a photovoltaic array is designed to stabilize the load voltage and to transfer surplus power to the battery. Also, in the proposed hybrid system, the required load power is supplied, considering voltage stabilization in standalone operation, for feeding an unbalanced AC load. An electrical energy storage system for voltage control and improvement of PV performance is connected to the DC bus through a DC/DC converter. The load is fed by an AC/DC converter. In this paper, when the voltage rises above its reference limit, the battery is charged by the photovoltaic array, and when it falls below its defined limit, power is injected into the DC bus by the battery. Keeping the DC bus voltage constant reduces the harmonics generated by the inverter. In addition, a series of filters is provided at the inverter output to further reduce harmonics. The inverter control circuit is designed so that the load voltage and frequency remain almost constant under different load conditions. This paper focuses on the control strategies of the converters to improve their performance.
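
A minimal sketch of the described DC-bus logic (charge the battery above the upper voltage reference, discharge below the lower one); the 400 V bus and the band width are assumptions, not values from the paper:

```python
# Hysteresis-style battery converter logic for DC-bus voltage regulation.
# Thresholds and the bus voltage level are illustrative assumptions.
V_NOMINAL = 400.0                           # assumed DC bus voltage [V]
V_HIGH, V_LOW = 1.02 * V_NOMINAL, 0.98 * V_NOMINAL

def battery_command(v_bus: float) -> str:
    """Return the battery converter mode for the measured bus voltage."""
    if v_bus > V_HIGH:
        return "charge"      # absorb surplus PV power into the battery
    if v_bus < V_LOW:
        return "discharge"   # inject battery power into the DC bus
    return "idle"            # within the regulation band

for v in (392.0, 400.0, 409.0):
    print(v, "->", battery_command(v))
```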

Keywords: photovoltaic array (PV), DC/DC boost converter, battery converter, inverter control

Procedia PDF Downloads 485
4900 Quantification of E-Waste: A Case Study in Federal University of Espírito Santo, Brazil

Authors: Andressa S. T. Gomes, Luiza A. Souza, Luciana H. Yamane, Renato R. Siman

Abstract:

The segregation of waste electrical and electronic equipment (WEEE) at the generating source, its quali-quantitative characterization, and the identification of its origin, besides being integral parts of classification reports, are crucial steps for the success of its integrated management. The aim of this paper was to quantify the WEEE generated at the Federal University of Espírito Santo (UFES), Brazil, as well as to define sources, temporary storage sites, main transportation routes and destinations, the most generated WEEE, and its recycling potential. Quantification of the WEEE generated at the University between 2010 and 2015 was performed using data provided by UFES's sector of assets management. Information on EEE and WEEE flows on the campuses was obtained through questionnaires applied to University workers. A total of 6,028 units of data processing equipment WEEE disposed of by the University between 2010 and 2015 were recorded. Among this waste, the most generated items were CRT screens, desktops, keyboards, and printers. Furthermore, it was observed that this WEEE is temporarily stored in inappropriate places on the University campuses. In general, these WEEE units are donated to NGOs of the city or sold through auctions (2010 and 2013). As for the recycling potential, from the primary processing and subsequent sale of printed circuit boards (PCBs) from the computers, the amount collected could reach US$ 27,839.23. The results highlight the importance of a WEEE management policy at the University.

Keywords: solid waste, waste of electrical and electronic equipment, waste management, institutional solid waste generation

Procedia PDF Downloads 260
4899 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech, and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) training a seed ASR model with a DNN using a set of audios and their respective transcriptions; the DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five; a refinement consisting of the weight matrix plus bias term, together with Stochastic Gradient Descent (SGD) training, was also performed, with the cross-entropy criterion as the objective function; (b) decoding/testing a set of unlabeled data with the obtained seed model; and (c) selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to extract the new transcriptions added to the training dataset. Some experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
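
A runnable stand-in for steps (a)-(c), with a generic classifier playing the role of the DNN acoustic model and predicted-probability thresholds standing in for the lattice-based confidence scores (illustrative only; a real system would use an ASR toolkit):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Self-training loop analogous to the paper's procedure, on synthetic data.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 5)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(400, 5))              # "audio without transcripts"

model = LogisticRegression().fit(X_lab, y_lab)   # (a) seed model
for _ in range(3):
    proba = model.predict_proba(X_unlab)         # (b) decode unlabeled data
    conf = proba.max(axis=1)
    keep = conf >= 0.9                           # (c) confidence-based selection
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba.argmax(axis=1)[keep]])
    model = LogisticRegression().fit(X_aug, y_aug)  # retrain the seed model

print(round(model.score(X_lab, y_lab), 3))
```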

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 339
4898 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay

Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer

Abstract:

Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked EPBM 3-meter-diameter tunnels with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width, and check the structural adequacy of the installed pipe. In the numerical models, the Mohr-Coulomb constitutive model with the effect of unloading was adopted for the fat clays, while for the bedrock layer, the generalized Hoek-Brown criterion was employed. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The well-studied excavation sequences made the analysis possible for this study on a very soft clay; otherwise, obtaining convergence in the numerical analysis would have been impossible. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.

Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM

Procedia PDF Downloads 82
4897 Interactions between Residential Mobility, Car Ownership and Commute Mode: The Case for Melbourne

Authors: Solmaz Jahed Shiran, John Hearne, Tayebeh Saghapour

Abstract:

Daily travel behavior is strongly influenced by the location of the places of residence, education, and employment. Hence, a change in those locations due to a move or a change of occupation leads to a change in travel behavior. Given the interactions between residential mobility and travel behavior, the hypothesis is that a mobile housing market allows households to move in response to any change in their life course, allowing them to be closer to central services, public transport facilities, and the workplace, and hence reducing the time individuals spend on daily travel. Conversely, household immobility may lead to longer commutes, for example, after a change of job or a need for new services such as schools for children who have reached school age. This paper aims to investigate the association between residential mobility and travel behavior. The Victorian Integrated Survey of Travel and Activity (VISTA) data is used for the empirical analysis. Car ownership and the journey-to-work time and distance of employed people are used as indicators of travel behavior. A change of usual residence within the last five years was used to identify movers and non-movers. Statistical analysis, including regression models, is used to compare the travel behavior of movers and non-movers. The results show that travel time and distance do not differ between movers and non-movers. However, this is not the case when the residence tenure type is taken into account. In addition, the car ownership rate and the number of cars were found to be significantly higher for non-movers. It is hoped that the results from this study will contribute to a better understanding of factors other than common socioeconomic and built environment features influencing travel behavior.

Keywords: journey to work, regression models, residential mobility, commute mode, car ownership

Procedia PDF Downloads 133
4896 Architectural Visualization: From Ancient Civilizations to the Roman Empire

Authors: Matthias Stange

Abstract:

Architectural visualization has been practiced for as long as there have been buildings. Visualization (from Latin visibilis, "visible") generally refers to bringing abstract data and relationships into a graphically, visually comprehensible form. In particular, visualization refers to the process of translating relationships that are difficult to formulate linguistically or logically into visual media (e.g., drawings or models) to make them comprehensible. Building owners have always been interested in knowing how their building will look before it is built. In the empirical part of this study, the roots of architectural visualization are examined, starting from the ancient civilizations and continuing to the end of the Roman Empire. Extensive literature research on architectural theory and architectural history forms the basis for this analysis. The focus of the analysis is basic research, from the emergence of the first two-dimensional drawings in the Neolithic period to the triggers of significant further developments in architectural representation, as well as their importance for subsequent methods and the transmission of knowledge over the following epochs. The analysis focuses on the development of analog methods of representation, from the first Neolithic house floor plans to the detailed Greek stone models and paper drawings in the Roman Empire. In particular, the question of socio-cultural, socio-political, and economic changes as possible triggers for the development of representational media and methods is analyzed. The study has shown that the development of visual building representation has been driven by scientific, technological, and social developments since the emergence of the first civilizations more than 6000 years ago, beginning with the change in the human subsistence strategy from food appropriation by hunting and gathering to food production by agriculture and livestock, and the sedentary lifestyle this required.

Keywords: ancient Greece, ancient orient, Roman Empire, architectural visualization

Procedia PDF Downloads 116
4895 Development of Electrospun Membranes with Defined Collagen and Polyethylene Oxide Architectures Reinforced with Medium and High Intensity Statins

Authors: S. Jaramillo, Y. Montoya, W. Agudelo, J. Bustamante

Abstract:

Cardiovascular diseases (CVD) are related to affectations of the heart and blood vessels; among these are pathologies such as coronary or peripheral heart disease, caused by the narrowing of the vessel wall (atherosclerosis), which is related to the accumulation of Low-Density Lipoproteins (LDL) in the arterial walls and leads to a progressive reduction of the lumen of the vessel and alterations in blood perfusion. Currently, the main therapeutic strategy for this type of alteration is drug treatment with statins, which inhibit the enzyme 3-hydroxy-3-methyl-glutaryl-CoA reductase (HMG-CoA reductase), responsible for modulating the rate of production of cholesterol and other isoprenoids in the mevalonate pathway. This enzyme induces the expression of LDL receptors in the liver, increasing their number on the surface of liver cells and reducing the plasma concentration of cholesterol. On the other hand, when the blood vessel presents stenosis, a surgical procedure with vascular implants is indicated; these are used to restore circulation in the arterial or venous bed. Among the materials used for the development of vascular implants are Dacron® and Teflon®, which serve to seal the circulatory circuit, but due to their low biocompatibility, they do not have the ability to promote remodeling and tissue regeneration processes. Based on this, the present research proposes the development of an electrospun membrane of hydrolyzed collagen and polyethylene oxide reinforced with medium- and high-intensity statins, so that in future research it can favor tissue remodeling processes through its microarchitecture.

Keywords: atherosclerosis, medium and high-intensity statins, microarchitecture, electrospun membrane

Procedia PDF Downloads 137
4894 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility

Authors: Dicko Ali Hamadi, Tong-Yette Nicolas, Gilles Benjamin, Faure Francois, Palombi Olivier

Abstract:

A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models efficiently interact while respecting the principles of Anatomy and Mechanics. The mobility, the intradiscal pressure, the facet joint force and the instantaneous center of rotation of the lumbar spine are validated against the experimental and theoretical results of the literature on flexion, extension, lateral bending as well as axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches without any loss of accuracy.

Keywords: hybrid, modeling, fast simulation, lumbar spine

Procedia PDF Downloads 306
4893 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier

Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez

Abstract:

The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered among the most important works produced by Le Corbusier in the last stage of his career. Some academic publications study the compositional and formal aspects of their architectural design, but there is no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether their use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also intended to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climate conditions, by assessing these projects using BIM models and energy performance simulations from DesignBuilder. Accurate models were built using original historical data obtained through archival research. The outcome is to provide a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that a low-energy-consumption model was achieved in these houses. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.

Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses

Procedia PDF Downloads 65
4892 Dissolution Kinetics of Chevreul’s Salt in Ammonium Cloride Solutions

Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak

Abstract:

In this study, the solubility of Chevreul's salt and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul's salt used in the studies was obtained under the optimum conditions determined by T. Çalban et al. (ammonium sulphide concentration: 0.4 M; copper sulphate concentration: 0.25 M; temperature: 60°C; stirring speed: 600 rev/min; pH: 4; reaction time: 15 min). The selected parameters affecting the solubility were reaction temperature, ammonium chloride concentration, stirring speed, and solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effect of the parameters on the solubility of Chevreul's salt was examined, and the integrated rate expression of the dissolution rate was found using kinetic models for solid-liquid heterogeneous reactions. Consistent with the rate expression below, the results revealed that the dissolution rate of Chevreul's salt increases with increasing temperature, ammonium chloride concentration, and stirring speed. On the other hand, the dissolution rate was found to decrease with an increase in the solid/liquid ratio. Based on the results of applying the experimental data to the kinetic models, we can deduce that the dissolution rate of Chevreul's salt is controlled by diffusion through the ash (or product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression was found to be: 1 − 3(1−X)^(2/3) + 2(1−X) = [2.96×10^13 · (C_A)^3.08 · (S/L)^−0.38 · (W)^1.23 · e^(−9001.2/T)] · t.
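
A small sketch of using the reported rate expression: invert the shrinking-core function g(X) = k·t for the conversion X at a given time (the units of the fitted k depend on those used in the original correlation, so the demo below uses an assumed illustrative k):

```python
from scipy.optimize import brentq
import numpy as np

def g(X):
    """Product-layer-diffusion (shrinking core) function from the paper."""
    return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

def rate_constant(C_A, SL, W, T):
    """Fitted apparent rate constant from the paper (unit-dependent)."""
    return 2.96e13 * C_A**3.08 * SL**-0.38 * W**1.23 * np.exp(-9001.2 / T)

def conversion(k, t):
    """Solve g(X) = k*t; conversion is complete once k*t >= g(1) = 1."""
    kt = k * t
    return 1.0 if kt >= 1.0 else brentq(lambda x: g(x) - kt, 0.0, 1.0)

k = 0.02                      # assumed illustrative value [1/min]
for t in (5, 15, 30, 60):     # minutes (assumed units)
    print(t, round(conversion(k, t), 3))
```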

Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics

Procedia PDF Downloads 308
4891 Analysis of Capillarity Phenomenon Models in Primary and Secondary Education in Spain: A Case Study on the Design, Implementation, and Analysis of an Inquiry-Based Teaching Sequence

Authors: E. Cascarosa-Salillas, J. Pozuelo-Muñoz, C. Rodríguez-Casals, A. de Echave

Abstract:

This study focuses on improving the understanding of the capillarity phenomenon among Primary and Secondary Education students. Although capillarity is a common concept in daily life and is covered in various subjects, students' comprehension of it remains limited. This work explores inquiry-based teaching methods to build a conceptual foundation of capillarity by examining the forces involved. The study adopts an inquiry-based teaching approach supported by research emphasizing the importance of modeling in science education. Scientific modeling aids students in applying knowledge across varied contexts and developing systemic thinking, allowing them to construct scientific models applicable to everyday situations. This methodology fosters the development of scientific competencies such as observation, hypothesis formulation, and communication. The research was structured as a case study with activities designed for Spanish Primary and Secondary Education students aged 9 to 13. The process included curriculum analysis, the design of an activity sequence, and its implementation in classrooms. Implementation began with questions that students needed to resolve using available materials, encouraging observation, experimentation, and the re-contextualization of activities to everyday phenomena where capillarity is observed. Data collection tools included audio and video recordings of the sessions, which were transcribed and analyzed alongside the students' written work. Students' drawings of capillarity were also collected and categorized. Qualitative analyses of the activities showed that, through inquiry, students managed to construct various models of capillarity, reflecting an improved understanding of the phenomenon. Initial activities allowed students to express prior ideas and formulate hypotheses, which were then refined and expanded in subsequent sessions. The generalization and use of graphical representations of their ideas on capillarity, analyzed alongside their written work, enabled the categorization of capillarity models:

- Intuitive model: a visual and straightforward representation without explanations of how or why the phenomenon occurs. Simple symbolic elements, such as arrows to indicate water rising, are used without detailed or causal understanding. It reflects an initial, immediate perception of the phenomenon, interpreted as something that happens "on its own", without reference to the microscopic level.
- Explanatory intuitive model: students begin to incorporate causal explanations, though these are still limited and not fully scientifically accurate. They represent the role of materials and use basic terms such as "absorption" or "attraction" to describe the rise of water. This model shows a more complex understanding, where the phenomenon is not only observed but also partially explained in terms of interaction, though without microscopic detail.
- School scientific model: this model reflects a more advanced and detailed understanding. Students represent the phenomenon using specific scientific concepts like "surface tension", "cohesion", and "adhesion", including structured explanations connecting the microscopic and macroscopic levels. At this level, students model the phenomenon as a coherent system, demonstrating how various forces or properties interact in the capillarity process, with representations at the microscopic level.
The study demonstrated that the capillarity phenomenon can be effectively approached in class through the experimental observation of everyday phenomena, explained through guided inquiry learning. The methodology facilitated students’ construction of capillarity models and served to analyze an interaction phenomenon of different forces occurring at the microscopic level.

Keywords: capillarity, inquiry-based learning, scientific modeling, primary and secondary education, conceptual understanding, drawing analysis

Procedia PDF Downloads 14
4890 Kýklos Dimensional Geometry: Entity Specific Core Measurement System

Authors: Steven D. P Moore

Abstract:

A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRn. This entity-specific derived geometric measurement system shares similar fractal methods, in which a 'fractal transformation operator' is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semiminor/semimajor axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents with a union equal to the interval geometry/length. Setting a limit on subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method is captured by allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration include celestial and even subatomic ones. The application of this offers incredible possibilities; for example, geometric architecture that can represent scaled celestial models incorporating planets (spheroids) and celestial motion (elliptical orbits).

Keywords: Kyklos, geometry, measurement, celestial, dimension

Procedia PDF Downloads 166
4889 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e., 322 million agents).
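
A minimal mpi4py sketch of the DMP pattern described above: agents partitioned across MPI processes, local interactions computed in place, and cross-partition interactions aggregated into one collective exchange (the "economy" below is a toy placeholder, not the paper's model):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each process owns a mutually exclusive subset of agents (here: wealth only).
rng = np.random.default_rng(rank)
local_agents = rng.uniform(100, 200, size=1_000_000 // size)

# Local step: agents interact within the partition (placeholder dynamics).
local_agents += rng.normal(0, 1, local_agents.shape)

# Cross-partition step: send one aggregate per destination instead of
# per-agent messages -- in the spirit of the paper's "local branch" trick.
outbox = [float(local_agents.sum() / size)] * size
inbox = comm.alltoall(outbox)
external_demand = sum(inbox)

print(f"rank {rank}: local wealth {local_agents.sum():.0f}, "
      f"external demand {external_demand:.0f}")
```

Run with, e.g., `mpirun -n 4 python abem_sketch.py`; real implementations would overlap these exchanges with computation using non-blocking calls.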

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 128
4888 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data, between cereal yield and each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is a low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
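
A compact sketch of the model comparison described above, on synthetic stand-in data (the real study used the 1960-2021 Albanian series, where random forest won on accuracy):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the six predictors (average temperature, average
# rainfall, fertilizer consumption, arable land, land under cereals, N2O
# emissions) and cereal yield -- illustrative data, not the study's series.
rng = np.random.default_rng(0)
X = rng.normal(size=(62, 6))                      # 62 years, 6 predictors
y = 3.0 - 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.4 * X[:, 3] + rng.normal(0, 0.3, 62)

for model in (LinearRegression(), Lasso(alpha=0.1),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(type(model).__name__, round(r2, 3))
```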

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 91
4887 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales

Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias

Abstract:

Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns for everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). However, after controlling for depression, only two specific complaints, remembering appointments, meetings, and engagements and understanding spoken directions and instructions, were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p<0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.
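
A minimal sketch of the survival-analysis setup described above, using the lifelines library on synthetic data (the real study modeled 415 participants, annual visits, and depression as a covariate):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy dataset: time to progression (MCI or dementia) against a single ECog
# item score, with depression as a covariate -- illustrative values only.
df = pd.DataFrame({
    "years_followed": [4.9, 2.1, 7.3, 5.0, 3.2, 6.8, 1.5, 8.0, 4.0, 6.1],
    "progressed":     [1,   1,   0,   1,   0,   0,   1,   0,   1,   0],
    "ecog_item":      [3,   4,   1,   3,   2,   1,   4,   2,   3,   1],
    "depression":     [0,   1,   0,   0,   1,   0,   1,   0,   0,   1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="progressed")
cph.print_summary()   # hazard ratios (HR) per unit increase in each covariate
```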

Keywords: Alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline

Procedia PDF Downloads 80
4886 Assessment of Climate Change Impact on Meteorological Droughts

Authors: Alireza Nikbakht Shahbazi

Abstract:

Various factors affect climate change, and drought is one of them, so efficient methods for estimating climate change impacts on drought need to be investigated. The aim of this paper is to investigate climate change impacts on drought in the Karoon3 watershed, located in south-western Iran, in future periods. Atmospheric general circulation model (GCM) data under Intergovernmental Panel on Climate Change (IPCC) scenarios were used for this purpose. In this study, watershed drought under climate change impacts is simulated for future periods (2011 to 2099). The Standardized Precipitation Index (SPI) was selected as the drought index and calculated from mean monthly precipitation data in the Karoon3 watershed at 6-, 12-, and 24-month time scales. Statistical analysis of daily precipitation and minimum and maximum daily temperature was performed. LARS-WG5 was used to determine the feasibility of producing meteorological data for future periods. Model calibration and verification were performed for the base period (1980-2007). Meteorological data were simulated for future periods under the GCMs and IPCC climate change scenarios, and the drought status under climate change effects was then analyzed using SPI. Results showed that the difference between monthly maximum and minimum temperature will decrease under climate change, and spring precipitation will increase while summer and autumn rainfall will decrease. In future periods, precipitation occurs mainly between January and May; the decline in summer and autumn precipitation leads to short-term drought in the study region. The normal and wet SPI categories are more frequent under the B1 and A2 emission scenarios than under A1B.
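
For readers unfamiliar with the index, the sketch below computes a simplified SPI at the 6-, 12-, and 24-month time scales used above: precipitation is summed over each rolling window, a gamma distribution is fitted to the totals, and the fitted cumulative probabilities are mapped to standard normal quantiles. A production SPI fits the gamma distribution separately for each calendar month and handles zero-rainfall months; the rainfall series here is synthetic, not the Karoon3 record.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=12 * 40)  # mm, toy data

def spi(precip, scale_months):
    """Simplified SPI: rolling sum -> gamma fit -> standard normal quantile."""
    kernel = np.ones(scale_months)
    totals = np.convolve(precip, kernel, mode="valid")  # k-month totals
    shape, loc, scale = stats.gamma.fit(totals, floc=0)  # gamma fit, loc pinned at 0
    cdf = stats.gamma.cdf(totals, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                           # equiprobability transform

for k in (6, 12, 24):
    values = spi(monthly_precip, k)
    print(f"SPI-{k}: share of months in drought (SPI < -1): "
          f"{np.mean(values < -1):.2f}")
```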

Keywords: climate change impact, drought severity, drought frequency, Karoon3 watershed

Procedia PDF Downloads 240
4885 Nano-Filled Matrix Reinforced by Woven Carbon Fibers Used as a Sensor

Authors: K. Hamdi, Z. Aboura, W. Harizi, K. Khellil

Abstract:

Improving the electrical properties of organic matrix composites has been investigated in several studies. One barrier that still limits the use of composites in more varied applications is their poor electrical conductivity. In carbon fiber composites, the organic matrix is responsible for the insulating behavior of the resulting composite. The properties of continuous carbon fiber composites with nano-filled matrices, however, have been less investigated. This work characterizes the effect of carbon black nano-fillers on the properties of woven carbon fiber composites. First, SEM observations were performed to localize the nano-particles; they showed that the particles penetrated into the fiber zone (Figure 1). By reaching the fiber zone, the carbon black nano-fillers created network connectivity between fibers, i.e., an easy pathway for the current. This explains the observed improvement in the electrical conductivity of the composites when carbon black is added: measured with a four-point probe circuit, the conductivity of the 'neat' matrix composite increased from 80 S/cm to 150 S/cm with 9 wt% carbon black, and to 250 S/cm with 17 wt% of the same nano-filler. These results suggest that the composite could be used as a strain gauge: the influence of a mechanical excitation (flexure, tension) on the electrical properties can be studied by recording the variation of an electrical current passing through the material during mechanical testing. Three different configurations were tested, depending on the carbon black content used as nano-filler. These investigations could lead to the development of an auto-instrumented material.
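
As a reminder of how conductivity is extracted from a four-point measurement, the sketch below converts a voltage/current reading on a bar-shaped coupon into S/cm via sigma = L / (R x A), assuming uniform current flow between the inner probes. All dimensions and readings are illustrative, not values from the study; the toy numbers are chosen so the result lands near the 150 S/cm reported above for the 9 wt% composite.

```python
def conductivity_s_per_cm(voltage_v, current_a, probe_spacing_cm,
                          width_cm, thickness_cm):
    """sigma = L / (R * A): inner-probe voltage drop over forced current."""
    resistance_ohm = voltage_v / current_a
    cross_section_cm2 = width_cm * thickness_cm
    return probe_spacing_cm / (resistance_ohm * cross_section_cm2)

# Toy reading: 10 mA forced through the outer probes, 0.333 mV across
# the inner probes of a 1 cm x 0.2 cm coupon cross-section.
sigma = conductivity_s_per_cm(voltage_v=3.33e-4, current_a=1e-2,
                              probe_spacing_cm=1.0, width_cm=1.0,
                              thickness_cm=0.2)
print(f"{sigma:.0f} S/cm")  # ~150 S/cm for this toy reading
```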

Keywords: carbon fibers composites, nano-fillers, strain-sensors, auto-instrumented

Procedia PDF Downloads 411
4884 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees

Authors: M. Eskandarighadi, C. R. McGann

Abstract:

It has been observed in past earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and extends infinitely in the horizontal direction; tying the side boundaries of the model together is therefore one way to represent this behavior, as wave passage is assumed to be purely vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, or 2D soil profile features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of these factors and better represent local site effects on strong ground motions. Because waves propagate in 2D and the soil profiles on the two sides of the model may not be identical, each side requires a boundary condition that minimizes unwanted reflections from the edges of the model and applies appropriate loading conditions. Ideally, the model should be large enough to minimize wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. An alternative approach is to employ free-field boundary conditions, which account for the free-field motion that would exist far from the model domain and apply it to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and the details and limitations of the developed free-field boundary modeling approach are discussed.
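
For context, the sketch below sets up the simpler tied ("periodic") lateral boundary that the abstract contrasts with the free-field approach, using OpenSeesPy: edge nodes at the same depth are constrained to move together with equalDOF, so wave passage is forced to be purely vertical. Mesh dimensions and node tags are illustrative, and the paper's actual free-field boundary implementation is considerably more involved than this.

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)  # 2D model, 2 translational DOFs

n_layers, dy, width = 10, 1.0, 20.0       # toy 20 m wide x 10 m deep profile
left, right = [], []
for i in range(n_layers + 1):
    lt, rt = 100 + i, 200 + i             # left/right edge node tags
    ops.node(lt, 0.0, i * dy)
    ops.node(rt, width, i * dy)
    left.append(lt)
    right.append(rt)

ops.fix(left[0], 1, 1)                    # fixed base (rigid bedrock case)
ops.fix(right[0], 1, 1)

# Tie each pair of same-elevation edge nodes so both lateral boundaries
# displace identically, mimicking an infinitely extended soil column.
for lt, rt in zip(left[1:], right[1:]):
    ops.equalDOF(lt, rt, 1, 2)            # constrain horizontal and vertical DOFs
```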

Keywords: boundary condition, free-field, OpenSees, site response analysis, wave propagation

Procedia PDF Downloads 158