Search results for: rectified linear unit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5415

4005 Comparison of the Existing Damage Indices in Steel Moment-Resisting Frame Structures

Authors: Hamid Kazemi, Abbasali Sadeghi

Abstract:

The seismic behavior of frame structures is assessed in order to evaluate potential loss of life and financial damage. As new methods for assessing structural seismic behavior have been proposed, it is necessary to define a formulation, a damage index, through which the amount of damage can be quantified and qualified. In this paper, four new steel moment-resisting frames with intermediate ductility, different heights (2, 5, 8, and 12 stories), regular geometry, and a simple rectangular plan were assumed and designed. Three existing groups of damage indices were studied: local indices (Drift, Maximum Roof Displacement, Banon Failure, Kinematic, Banon Normalized Cumulative Rotation, Cumulative Plastic Rotation and Ductility), global indices (Roufaiel and Meyer, Papadopoulos, Sozen, Rosenblueth, Ductility and Base Shear), and story indices (Banon Failure and Inter-story Rotation). The parameters needed for these damage indices were calculated under far-fault ground motion records by non-linear dynamic time history analysis. Finally, the damage indices were prioritized according to which give the more conservative (more damage-prone) values. The results show that the selected damage index has an important effect on the estimated damage state. The failure, drift, and Rosenblueth damage indices are the most conservative local, story, and global indices, respectively.

Keywords: damage index, far-fault ground motion records, non-linear time history analysis, SeismoStruct software, steel moment-resisting frame

Procedia PDF Downloads 284
4004 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
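A minimal sketch of this kind of featurization-plus-model-comparison pipeline is given below, assuming RDKit and scikit-learn are available and that the dataset CSV exposes "SMILES" and "Solubility" columns (file and column names are assumptions, not the authors' setup); a regression R² is used as the accuracy metric here.

```python
# Hedged sketch: featurize SMILES with MACCS keys + a few RDKit descriptors,
# then compare regressors on aqueous solubility (AqSolDB-style data).
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import MACCSkeys, Descriptors
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

def featurize(smiles: str) -> np.ndarray:
    """166 MACCS bits plus a handful of RDKit descriptors for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    maccs = np.array(MACCSkeys.GenMACCSKeys(mol))[1:]   # drop unused bit 0 -> 166 bits
    desc = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumRotatableBonds(mol)]
    return np.concatenate([maccs, desc])

df = pd.read_csv("aqsoldb.csv")                          # assumed file/column names
X = np.vstack([featurize(s) for s in df["SMILES"]])
y = df["Solubility"].to_numpy()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear": LinearRegression(),
    "svm": SVR(),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
    "mlp": MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:14s} R^2 = {model.score(X_te, y_te):.3f}")
```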

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 29
4003 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity and angles of attack have to be investigated and proved to be safe. Nonetheless, with this approach a worst-case flight condition can easily be missed, and missing it would lead to a critical situation. It is clearly impossible to analyze the infinite number of cases contained within the flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model shows whether or not the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models and are able to describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speed and altitude as varying parameters and are built from several flight conditions expressed in terms of speeds and altitudes. The use of such a method has gained great interest from aeronautical companies, which see a promising future in this type of modeling, particularly for the design and certification of control laws. In this research paper, we focus on the Cessna Citation X open-loop stability analysis. The data are provided by a Level D Research Aircraft Flight Simulator, which corresponds to the highest level of flight dynamics certification; this simulator was developed by CAE Inc. based on the research requirements of the LARCASE laboratory. These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus for the whole flight envelope, using a user-friendly Graphical User Interface developed during this study. The LFR models are then analyzed using an interval analysis method based on a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results are presented in the form of graphs, which offer good readability and are easily exploitable. The weakness of this method lies in the relatively long calculation time, about four hours for the entire flight envelope.
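The kind of gridded open-loop stability scan described above can be sketched as follows. This is an illustrative toy only: the function A(V, h) returning the linearized state matrix at each flight condition is a placeholder with hypothetical coefficients (no Cessna Citation X data are included), and the scan simply checks that all eigenvalues have negative real parts at every grid point.

```python
# Hedged sketch: scan a speed/altitude grid and flag open-loop unstable points
# by checking the eigenvalues of a parameter-dependent state matrix A(V, h).
import numpy as np

def state_matrix(speed_ms: float, altitude_m: float) -> np.ndarray:
    """Placeholder linearized model; replace with the identified A(V, h)."""
    # Hypothetical 2-state short-period-like model whose coefficients vary
    # smoothly with the flight condition (illustration only, not aircraft data).
    q = 0.5 * 1.225 * np.exp(-altitude_m / 8500.0) * speed_ms**2  # dynamic pressure
    return np.array([[-0.5e-3 * q, 1.0],
                     [-2.0e-3 * q, -1.0e-3 * q]])

speeds = np.linspace(100.0, 250.0, 16)       # m/s, assumed envelope bounds
altitudes = np.linspace(0.0, 12000.0, 13)    # m, assumed envelope bounds

for v in speeds:
    for h in altitudes:
        eigs = np.linalg.eigvals(state_matrix(v, h))
        if np.max(eigs.real) >= 0.0:
            print(f"possible instability at V={v:.0f} m/s, h={h:.0f} m")
```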

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 345
4002 Hydromagnetic Linear Instability Analysis of Giesekus Fluids in Taylor-Couette Flow

Authors: K. Godazandeh, K. Sadeghy

Abstract:

In the present study, the effect of a magnetic field on the hydrodynamic instability of Taylor-Couette flow between two concentric rotating cylinders is numerically investigated. First, the basic flow is solved using the continuity equation, the Cauchy momentum equations (including the Lorentz force), and the constitutive equations of the viscoelastic Giesekus model. Small perturbations in normal-mode form are then superimposed on the basic flow, and the unsteady perturbation equations are derived. Neglecting the non-linear terms, the resulting generalized eigenvalue problem is solved with a pseudo-spectral method based on Chebyshev polynomials. The objective of the calculations is to study the effect of the magnetic field on the onset of the first mode of instability (the axisymmetric mode) for different dimensionless parameters of the flow. The results show that the stability picture is highly influenced by the magnetic field: as the magnetic field increases, it first destabilizes the flow and then, with a further increase, stabilizes it, so there is a critical magnetic number (Hartmann number) for the instability of Taylor-Couette flow. The effect of the magnetic field is more dominant for large gaps and is more pronounced at higher Weissenberg numbers (higher elasticity), while changes in the mobility factor play no dominant role in the intensity of the suction and injection effect on the flow's instability.
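The pseudo-spectral building block used in such linear stability calculations can be illustrated with a minimal example. The sketch below constructs the standard Chebyshev collocation differentiation matrix (Trefethen's construction) and solves a toy generalized eigenvalue problem, u'' = λu with homogeneous Dirichlet conditions, as a sanity check; it is not the full viscoelastic MHD operator of the paper.

```python
# Hedged sketch: Chebyshev collocation differentiation matrix and a toy
# generalized eigenvalue problem, standing in for the full stability operator.
import numpy as np
from scipy.linalg import eigvals

def cheb(n: int):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 48
D, x = cheb(n)
D2 = D @ D
# Toy problem u'' = lambda * u, u(-1) = u(1) = 0; exact eigenvalues are -(k*pi/2)^2.
A = D2[1:-1, 1:-1]                  # strip boundary rows/columns (Dirichlet BCs)
B = np.eye(n - 1)
lam = np.sort(eigvals(A, B).real)[::-1]
print(lam[:4])                      # ~ -2.467, -9.870, -22.21, -39.48
```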

Keywords: magnetic field, Taylor-Couette flow, Giesekus model, pseudo spectral method, Chebyshev polynomials, Hartmann number, Weissenberg number, mobility factor

Procedia PDF Downloads 382
4001 Rule-Of-Mixtures: Predicting the Bending Modulus of Unidirectional Fiber Reinforced Dental Composites

Authors: Niloofar Bahramian, Mohammad Atai, Mohammad Reza Naimi-Jamal

Abstract:

The rule of mixtures is a simple analytical model used to predict various properties of composites before design. The aim of this study was to demonstrate the benefits and limitations of the rule of mixtures (ROM) for predicting the bending modulus of continuous, unidirectional fiber reinforced composites used in dental applications. The composites were fabricated from a light-curing resin (with and without silica nanoparticles) and modified and non-modified fibers. Composite samples were divided into eight groups with ten specimens in each group. The bending modulus (flexural modulus) of the samples was determined from the slope of the initial linear region of the stress-strain curve on 2 mm × 2 mm × 25 mm specimens with different designs: fiber corona treatment time (0 s, 5 s, 7 s), fiber silane treatment (0 wt%, 2 wt%), fiber volume fraction (41%, 33%, 25%) and nanoparticle content in the resin (0 wt%, 10 wt%, 15 wt%). To study the fiber-matrix interface after fracture, the single edge notch beam (SENB) method and scanning electron microscopy (SEM) were used; SEM was also used to show the nanoparticle dispersion in the resin. The experimental bending moduli of composites made with both physically (corona) and chemically (silane) treated fibers were in reasonable agreement with the linear ROM estimates, but untreated or non-optimally treated fibers and poor nanoparticle dispersion did not correlate as well with the ROM results. This study shows that the ROM is useful for predicting the mechanical behavior of unidirectional dental composites, but the fiber-resin interface and the quality of nanoparticle dispersion play an important role in the accuracy of ROM predictions.
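As a reminder of the model itself, the longitudinal (Voigt) rule of mixtures combines the fiber and matrix moduli weighted by volume fraction. The sketch below evaluates it for the fiber volume fractions quoted above; the fiber and matrix moduli are placeholder values, not measurements from this study.

```python
# Hedged sketch: longitudinal rule-of-mixtures estimate of composite modulus,
# E_c = Vf * E_f + (1 - Vf) * E_m  (Voigt upper bound for aligned fibers).
def rule_of_mixtures(e_fiber_gpa: float, e_matrix_gpa: float, vf: float) -> float:
    return vf * e_fiber_gpa + (1.0 - vf) * e_matrix_gpa

E_FIBER = 70.0    # GPa, placeholder (e.g., a glass fiber); not from the study
E_MATRIX = 3.0    # GPa, placeholder cured dental resin value; not from the study

for vf in (0.41, 0.33, 0.25):   # fiber volume fractions used in the study
    print(f"Vf = {vf:.2f}  ->  E_c ~ {rule_of_mixtures(E_FIBER, E_MATRIX, vf):.1f} GPa")
```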

Keywords: bending modulus, fiber reinforced composite, fiber treatment, rule-of-mixtures

Procedia PDF Downloads 266
4000 The Duty of Sea Carrier to Transship the Cargo in Case of Vessel Breakdown

Authors: Mojtaba Eshraghi Arani

Abstract:

Having concluded the contract for carriage of cargo with the shipper (through a bill of lading or charterparty), the carrier must transport the cargo from the loading port to the port of discharge and deliver it to the consignee. Unless otherwise agreed in the contract, the carrier must avoid any deviation, transfer of cargo to another vessel, or unreasonable stoppage of the carriage in transit. However, the vessel might break down in transit for any reason and become unable to continue its voyage to the port of discharge. This is a frequent incident in the carriage of goods by sea and leads to important disputes between the carrier/owner and the shipper/charterer (hereinafter called the "cargo interests"). It is a generally accepted rule that in such an event the carrier/owner must repair the vessel, after which it will continue its voyage to the destination port. A dispute arises when temporary repair of the vessel cannot be carried out within a short or reasonable term. There are two options for the contract parties in such a case: first, the carrier/owner is entitled to repair the vessel, with the cargo kept on board or discharged at the port of refuge, while the cargo interests must wait until the breakdown is rectified, whenever that may be; second, the carrier/owner is responsible for chartering another vessel and transferring the entirety of the cargo to the substitute vessel. In fact, the main question revolves around the duty of the carrier/owner to perform the transfer of cargo to another vessel. Such an operation, called "trans-shipment" or "transhipment" (in the oil industry usually "ship-to-ship" or "STS"), needs to be done carefully and with due diligence. The transshipment operation differs between cargoes, as each cargo requires its own suitable equipment for transfer to another vessel, so the operation is often costly. Moreover, there is a considerable risk of collision between the two vessels, in particular for bulk carriers. Bulk cargo is also exposed to shortage and partial loss in the process of transshipment, especially during bad weather. For tankers carrying oil and petrochemical products, transshipment is quite likely to be followed by sea pollution. On the grounds of the above consequences, owners are afraid of being held responsible for such an operation and are reluctant to perform it; their main argument in the relevant disputes is that no regulation has placed such a duty on their shoulders, so any such operation must be done under the auspices of the cargo interests and all costs must be reimbursed by them. Unfortunately, not only the international conventions, including the Hague Rules, Hague-Visby Rules, Hamburg Rules and Rotterdam Rules, but also most domestic laws are silent in this regard. The doctrine has yet to analyse the issue, and no legal research was found on the subject. A qualitative method based on the interpretation of the collected data has been used in this paper; the sources of the data are regulations and case law. It is argued in this article that the paramount rule in maritime law is "the accomplishment of the voyage" by the carrier/owner, in view of which, if the voyage can only be finished by transshipment, then the carrier/owner is responsible for carrying out this operation. The duty of the carrier/owner to apply "due diligence" strengthens this reasoning. Any and all costs and expenses will also be on the account of the owner/carrier, unless the incident is attributable to a cause arising from the cargo interests' negligence.

Keywords: cargo, STS, transshipment, vessel, voyage

Procedia PDF Downloads 108
3999 Enhanced Performance of Supercapacitor Based on Boric Acid Doped Polyvinyl Alcohol-H₂SO₄ Gel Polymer Electrolyte System

Authors: Hamide Aydin, Banu Karaman, Ayhan Bozkurt, Umran Kurtan

Abstract:

Recently, proton-conducting gel polymer electrolytes (GPEs) have drawn much attention in supercapacitor applications due to their physical and electrochemical characteristics and their stability at low temperatures. In this research, a PVA-H2SO4-H3BO3 GPE has been used for an electric double-layer capacitor (EDLC) application, in which electrospun free-standing carbon nanofibers are used as electrodes. The introduced PVA-H2SO4-H3BO3 GPE behaves as both the separator and the electrolyte in the supercapacitor. Symmetric Swagelok cells including the GPEs were assembled using a two-electrode arrangement, and their electrochemical properties were investigated. Electrochemical performance studies demonstrated that the PVA-H2SO4-H3BO3 GPE had a maximum specific capacitance (Cs) of 134 F g-1 and showed excellent capacitance retention (100%) after 1000 charge/discharge cycles. Furthermore, the PVA-H2SO4-H3BO3 GPE yielded an energy density of 67 Wh kg-1 with a corresponding power density of 1000 W kg-1 at a current density of 1 A g-1. The PVA-H2SO4 based polymer electrolyte was produced according to the following procedure: first, 1 g of commercial PVA was dissolved in distilled water at 90°C and stirred until a transparent solution was obtained. This was followed by addition of the diluted H2SO4 (1 g of H2SO4 in distilled water) to the solution to obtain PVA-H2SO4. The PVA-H2SO4-H3BO3 based polymer electrolyte was produced by dissolving H3BO3 in hot distilled water and then adding it to the PVA-H2SO4 solution. The mole fraction was set to ¼ of the PVA repeating unit. After stirring for 2 h at room temperature, the gel polymer electrolytes were obtained. The final electrolytes for supercapacitor testing contained 20% water by weight. Several blending combinations of PVA/H2SO4 and H3BO3 were studied to find the optimal combination in terms of conductivity as well as electrolyte stability. As the amount of boric acid in the matrix increased, excess sulfuric acid was excluded due to crosslinking, especially at lower solvent content, which resulted in a reduction of the proton conductivity. Therefore, the mole fraction of H3BO3 was chosen as ¼ of the PVA repeating unit. Within these optimized limits, the polymer electrolytes showed better conductivities as well as stability.
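The figures of merit quoted above are conventionally extracted from galvanostatic charge-discharge data. A minimal sketch of the commonly used relations is given below; the discharge time, voltage window and mass convention (per electrode or per cell) are assumed inputs for illustration only, since conventions vary and the paper's exact protocol is not reproduced here.

```python
# Hedged sketch: common galvanostatic charge-discharge relations for EDLC metrics.
# Cs = I * dt / (m * dV);  E [Wh/kg] = 0.5 * Cs * dV^2 / 3.6;  P [W/kg] = 3600 * E / dt
def edlc_metrics(current_a: float, mass_g: float, discharge_time_s: float, dv_v: float):
    cs = current_a * discharge_time_s / (mass_g * dv_v)    # F/g
    energy = 0.5 * cs * dv_v**2 / 3.6                      # Wh/kg
    power = 3600.0 * energy / discharge_time_s             # W/kg
    return cs, energy, power

# Illustrative numbers only (1 A/g, ~1.9 V window, assumed 240 s discharge).
cs, e, p = edlc_metrics(current_a=1.0, mass_g=1.0, discharge_time_s=240.0, dv_v=1.9)
print(f"Cs ~ {cs:.0f} F/g, E ~ {e:.0f} Wh/kg, P ~ {p:.0f} W/kg")
```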

Keywords: electrical double layer capacitor, energy density, gel polymer electrolyte, ultracapacitor

Procedia PDF Downloads 209
3998 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is a low correlation between factors, so we do not have a multicollinearity problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production affect production positively. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
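A minimal sketch of this model comparison is shown below, assuming an annual dataset with the listed covariates; the file and column names are illustrative assumptions, not the authors' data.

```python
# Hedged sketch: compare multiple linear regression, lasso and random forest
# for yield prediction from climate and land-use covariates.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Assumed column names for an annual Albania dataset (not the authors' file).
features = ["avg_temperature", "avg_rainfall", "fertilizer_consumption",
            "arable_land", "land_under_cereal", "n2o_emissions"]
df = pd.read_csv("albania_cereal_1960_2021.csv")
X, y = df[features], df["cereal_yield"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("lasso", Lasso(alpha=0.1)),
                    ("random_forest", RandomForestRegressor(n_estimators=500, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name:14s} R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```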

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 82
3997 Improved Performance of AlGaN/GaN HEMTs Using N₂/NH₃ Pretreatment before Passivation

Authors: Yifan Gao

Abstract:

Owing to their high breakdown field, high saturation drift velocity, and a 2DEG with high density and mobility, among other advantages, AlGaN/GaN HEMTs have been widely used in high-frequency and high-power applications. Acquiring higher power often means a higher breakdown voltage and a higher drain current, and surface leakage current is usually the key issue affecting the breakdown voltage and power performance. In this work, we performed an in-situ N₂/NH₃ pretreatment before passivation to suppress the surface leakage and enhance device performance. The AlGaN/GaN HEMT used in this work was grown on a 3-in. SiC substrate, with an epitaxial structure consisting of a 3.5-nm GaN cap layer, a 25-nm Al₀.₂₅GaN barrier layer, a 1-nm AlN layer, a 400-nm i-GaN layer and a buffer layer. To analyze the mechanism of the N-based pretreatment, the surface was examined by XPS analysis. The intensity of Ga-O bonds decreases and the intensity of Ga-N bonds increases, which means that, with the supplied nitrogen, the dangling bonds on the surface are indeed reduced through the formation of Ga-N bonds, reducing the surface states. The surface states have a great influence on the leakage current, and improved surface states yield a better off-state for the device. After the N-based pretreatment, the breakdown voltage of the device with Lₛ𝒹=6 μm increased from 93 V to 170 V, an increase of 82.8%. Moreover, for HEMTs with Lₛ𝒹 of 6 μm, we obtained a peak output power (Pout) of 12.79 W/mm, a power added efficiency (PAE) of 49.84% and a linear gain of 20.2 dB at 60 V under 3.6 GHz. Compared with the reference 6-μm device, Pout is increased by 16.5%, while PAE and the linear gain also show a slight increase. These experimental results indicate that an N₂/NH₃ pretreatment before passivation is an attractive approach to enhancing power performance.

Keywords: AlGaN/GaN HEMT, N-based pretreatment, output power, passivation

Procedia PDF Downloads 310
3996 The Sustainability of Public Debt in Taiwan

Authors: Chiung-Ju Huang

Abstract:

This study examines whether Taiwan's public debt is sustainable, utilizing an unrestricted two-regime threshold autoregressive (TAR) model with an autoregressive unit root. The empirical results show that Taiwan's public debt behaves as a nonlinear series and is stationary in regime 1 but not in regime 2. This result implies that while Taiwan's public debt was mostly sustainable over the 1996 to 2013 period examined in the study, it may no longer be sustainable in the most recent two years, as the public debt ratio has increased cumulatively to 3.618%.
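For illustration, a bare-bones two-regime TAR(1) fit is sketched below: the threshold is chosen by grid search on the lagged series and an AR coefficient is estimated per regime, with |ρ| < 1 suggesting a stationary (sustainable) regime. This is a simplified stand-in, not the unrestricted unit-root testing procedure of the paper, and the data used are synthetic.

```python
# Hedged sketch: a simple two-regime TAR(1) fit with a grid-searched threshold
# (illustrative; not the paper's exact unrestricted TAR unit-root test).
import numpy as np

def fit_ar1(y_t, y_lag):
    """OLS of y_t on a constant and y_lag; returns (intercept, rho, ssr)."""
    X = np.column_stack([np.ones_like(y_lag), y_lag])
    beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
    resid = y_t - X @ beta
    return beta[0], beta[1], float(resid @ resid)

def tar_two_regime(y, trim=0.15):
    y_t, y_lag = y[1:], y[:-1]
    candidates = np.sort(y_lag)[int(trim * len(y_lag)): int((1 - trim) * len(y_lag))]
    best = None
    for thr in candidates:
        low, high = y_lag <= thr, y_lag > thr
        ssr = fit_ar1(y_t[low], y_lag[low])[2] + fit_ar1(y_t[high], y_lag[high])[2]
        if best is None or ssr < best[1]:
            best = (thr, ssr)
    thr = best[0]
    rho1 = fit_ar1(y_t[y_lag <= thr], y_lag[y_lag <= thr])[1]
    rho2 = fit_ar1(y_t[y_lag > thr], y_lag[y_lag > thr])[1]
    return thr, rho1, rho2   # |rho| < 1 suggests a stationary regime

# Synthetic series standing in for a quarterly debt-ratio series.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.02, 0.3, 72))
print(tar_two_regime(y))
```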

Keywords: nonlinearity, public debt, sustainability, threshold autoregressive model

Procedia PDF Downloads 441
3995 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load forecasting plays a key role in making today's and tomorrow's smart energy grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute Demand Response strategies more effectively, which enables higher sustainability, better quality of service, and affordable electricity tariffs. Load forecasting is comparatively easy yet effective to apply at larger geographic scales; in Smart Micro Grids, however, the lower available grid flexibility makes accurate prediction even more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a Smart Micro Grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of Gain is introduced in this paper, which serves as an indicator of short-term prediction capability over a consistent time span. Two models, 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare these three techniques.
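A minimal sketch of the 1-hour-ahead versus 24-hour-ahead comparison from lagged consumption is given below. The file and column names are assumptions, and an RBF-kernel model stands in for the radial basis function network of the paper.

```python
# Hedged sketch: 1-hour-ahead vs 24-hour-ahead load forecasting from lagged
# consumption features; an RBF-kernel regressor stands in for an RBF network.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

def make_supervised(load: pd.Series, horizon: int, n_lags: int = 48):
    """Build (X, y) where y is the load `horizon` hours ahead."""
    frame = pd.concat({f"lag_{k}": load.shift(k) for k in range(n_lags)}, axis=1)
    frame["target"] = load.shift(-horizon)
    frame = frame.dropna()
    return frame.drop(columns="target").to_numpy(), frame["target"].to_numpy()

# Assumed hourly household load series (file and column names are assumptions).
load = pd.read_csv("household_load.csv", parse_dates=["timestamp"],
                   index_col="timestamp")["load_kw"]

for horizon in (1, 24):
    X, y = make_supervised(load, horizon)
    split = int(0.8 * len(X))
    X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]
    for name, model in [("linear", LinearRegression()),
                        ("ann", MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)),
                        ("rbf", KernelRidge(kernel="rbf", alpha=1.0))]:
        model.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, model.predict(X_te))
        print(f"{horizon:2d}h ahead  {name:6s} MAE = {mae:.3f} kW")
```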

Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain

Procedia PDF Downloads 457
3994 Comparison of Peri- and Post-Operative Outcomes of Three Left Atrial Incisions: Conventional Direct, Transseptal and Superior Septal Left Atriotomy

Authors: Estelle Démoulin, Dionysios Adamopoulos, Tornike Sologashvili, Mathieu Van Steenberghe, Jalal Jolou, Haran Burri, Christoph Huber, Mustafa Cikirikcioglu

Abstract:

Background & objective: Mitral valve surgeries are mainly performed through a median sternotomy with conventional direct atriotomy. Good exposure of the mitral valve is challenging, especially for acute pathologies, where left atrial dilation has not occurred. Other atriotomies, such as transseptal or superior septal, are used because they allow better access and visualization. Peri- and postoperative outcomes of these three different left atriotomies were compared. Methods: Patients undergoing mitral valve surgery between January 2010 and December 2020 were included and divided into three groups: group 1 (conventional direct, n=115), group 2 (transseptal, n=33) and group 3 (superior septal, n=59). To increase the sample size, all patients underwent mitral valve surgery with or without associated procedures (CABG, aortic-tricuspid surgery, Maze procedure). The study protocol was approved by SwissEthics. Results: No difference was shown for the etiology of mitral valve disease, except endocarditis, which was more frequent in group 3 (p = 0.014). Elective surgeries and isolated mitral valve surgery were more frequent in group 1 (p = 0.008, p = 0.011), and aortic clamping and cardiopulmonary bypass times were shorter (p = 0.002, p < 0.001). Group 3 had more emergency procedures (p = 0.011) and longer intensive care unit and hospital stays (p < 0.001, p = 0.003). There was no difference in permanent pacemaker implantation, postoperative complications and mortality between the groups. Conclusion: Mitral valve surgeries can be safely performed using these three left atriotomies. Conventional direct atriotomy may lead to shorter aortic clamping and cardiopulmonary bypass times. Superior septal atriotomy is mostly used for acute pathologies, and it does not increase postoperative arrhythmias or permanent pacemaker implantation; however, intensive care unit and hospital lengths of stay were longer in this group. In our opinion, this outcome is related more to the pathology and type of surgery than to the incision itself.

Keywords: mitral valve surgery, cardiac surgery, atriotomy, operative outcomes

Procedia PDF Downloads 66
3993 Rehabilitation Team after Brain Damages as Complex System Integrating Consciousness

Authors: Olga Maksakova

Abstract:

Work with unconscious patients after acute brain damage requires, besides the special knowledge and practical skills of all the participants, a very specific organization. A lot has been said about the team approach in neurorehabilitation, usually for the outpatient mode, where rehabilitologists deal with fixed patient problems or deficits (motor, speech, cognitive or emotional disorders). Team-building in that sense reflects the superficial paradigm of management psychology, and a linear mode of teamwork fits the causal relationships there. Cases with deeply altered states of consciousness (vegetative states, coma, and confusion) require a non-linear mode of teamwork: recovery of consciousness might not be the goal, due to the uncertainty of the phenomenon. The rehabilitation team, as a semi-open complex system, includes the patient as a part. The patient's response pattern is formed not only by brain deficits but also by the questions-stimuli, the context, and the inquiring person. Teamwork is a source of phenomenological knowledge of the patient's processes, as the third-person approach is replaced with second- and then first-person approaches. Herein lies a chance for real-time change. The patient's contacts with his own body and with outward things create a basis for the restoration of consciousness. The most important condition is systematic feedback to any minimal movement or vegetative signal of the patient. Up to now, recovery work with the most severe contingent has been carried out in the mode of passive physical interventions, while an effective rehabilitation team should include specially trained psychologists and psychotherapists. It is they who are able to create a network of feedback with the patient and the inter-professional feedback that builds up the team. Characteristics of the 'Team-Patient' system (TPS) are energy, entropy, and complexity. Impairment of consciousness, as the absence of linear contact, appears together with a loss of essential functions (low energy), vegetative-visceral fits (excessive energy and low order), motor agitation (excessive energy and excessive order), etc. The techniques of teamwork differ in these cases, so as to optimize the condition of the system. Directed regulation of the system's complexity is one of the recovery tools. Different signs of awareness appear as a result of the system's self-organization. Joint meetings are an important part of teamwork: regular or event-related discussions form the language of inter-professional communication, as well as a shared mental model of the patient. Analysis of the complex communication process in the TPS may be useful for the creation of a general theory of consciousness.

Keywords: rehabilitation team, urgent rehabilitation, severe brain damage, consciousness disorders, complex system theory

Procedia PDF Downloads 135
3992 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study

Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji

Abstract:

Phase change materials (PCMs) present attractive features that make them a passive solution for improving thermal comfort in buildings during summer. They show a large storage capacity per unit volume in comparison with other structural materials like bricks or concrete. If their use is matched with the peak load periods, they can contribute to reducing the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks: commercial PCMs, such as paraffins, have a low thermal conductivity, which affects the overall performance of the system. In some cases the material can be enhanced by adding other elements that improve the conductivity, but in general a design of the unit that optimizes the thermal performance is sought. Material selection is the starting point of the design stage, and it does not leave much room for optimization: the PCM melting point depends highly on the atmospheric characteristics of the building location, and the selection must lie between the maximum and minimum temperatures reached during the day. The geometry of the PCM container and the geometrical distribution of these containers are design parameters as well; they significantly affect the heat transfer, and therefore the phenomena involved must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool the space during the daytime, while the PCM melts; at night the PCM must be regenerated to be ready for the next use; and when the system is not in service, a minimal amount of thermal exchange is desired. These functions involve both sensible and latent heat storage and release, so different mechanisms drive the heat transfer. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger, with an in-line arrangement selected as the geometrical distribution of the containers. For visual identification, the container material and a section of the test bench were transparent. Instruments were placed on the bench for measuring temperature and velocity, and the PCM properties were obtained through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was obtained. The results showed phenomena at a local level (tubes) and at an overall level (exchanger), with conduction and convection appearing as the main heat transfer mechanisms. From these results, two approaches to analyzing the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, assuming purely conduction-controlled heat transfer in the PCM. For the second approach, the temperature measurements were used to obtain significant dimensionless numbers and parameters such as the Stefan, Fourier and Rayleigh numbers, and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles; in particular, the presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained.
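For reference, the dimensionless groups mentioned above are evaluated in the short sketch below with illustrative paraffin-like properties; all property values, the temperature difference and the characteristic length are placeholders, not data from the tested PCM.

```python
# Hedged sketch: the dimensionless groups mentioned above, evaluated with
# illustrative paraffin-like properties (placeholders, not the tested PCM).
G = 9.81                                    # m/s^2

def stefan(cp, dT, latent_heat):            # Ste = cp*dT / L
    return cp * dT / latent_heat

def fourier(alpha, t, length):              # Fo = alpha*t / Lc^2
    return alpha * t / length**2

def rayleigh(beta, dT, length, nu, alpha):  # Ra = g*beta*dT*Lc^3 / (nu*alpha)
    return G * beta * dT * length**3 / (nu * alpha)

# Placeholder properties of a generic paraffin (order of magnitude only).
cp, L_heat = 2.0e3, 2.0e5          # J/(kg K), J/kg
k, rho = 0.2, 800.0                # W/(m K), kg/m^3
alpha = k / (rho * cp)             # thermal diffusivity, m^2/s
nu, beta = 5e-6, 8e-4              # m^2/s, 1/K
dT, Lc, t = 10.0, 0.02, 3600.0     # K, m (tube radius scale), s

print(f"Ste = {stefan(cp, dT, L_heat):.2f}")
print(f"Fo  = {fourier(alpha, t, Lc):.2f}")
print(f"Ra  = {rayleigh(beta, dT, Lc, nu, alpha):.2e}")
```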

Keywords: phase change materials, air-PCM exchangers, convection, conduction

Procedia PDF Downloads 167
3991 Assessing the Actual Status and Farmer’s Attitude towards Agroforestry in Chiniot, Pakistan

Authors: M. F. Nawaz, S. Gul, T. H. Farooq, M. T. Siddiqui, M. Asif, I. Ahmad, N. K. Niazi

Abstract:

In Pakistan, major demands for fuel wood and timber are met by agroforestry. However, information regarding the economic significance of agroforestry and its productivity in Pakistan is still insufficient and unreliable. Surveying field conditions to examine the status of agroforestry at the local level helps us to understand future trends and to formulate policies for a sustainable wood supply. The objectives of this research were to examine the actual status and potential of agroforestry and to point out the barriers that farmers face in adopting agroforestry. The research was carried out in Chiniot district, Pakistan, because it is famous for a furniture industry that depends largely on farm trees. A detailed survey of Chiniot district was carried out with 150 randomly selected farmer respondents using a pre-tested, multi-objective questionnaire. It was found that the linear tree planting method was the most adopted (45%) as compared to linear + interplanting (42%) and/or compact planting (12.6%). Chi-square values at P-value < 0.5 showed that age (11.35) and education (17.09) were more important factors in the quick adoption of agroforestry than land holdings (P-value of 0.7). The major reasons for adopting agroforestry were to obtain income, fodder and fuelwood. The most dominant species on farmlands was shisham (Dalbergia sissoo), but over the last five years most farmers have been growing sufeida (Eucalyptus camaldulensis), kikar (Acacia nilotica) and poplar (Populus deltoides) on their fields due to the "shisham die-back" problem. It was found that agroforestry can be increased by providing good-quality planting material to farmers and improving wood markets.

Keywords: agroforestry, trees, services, agriculture, farmers

Procedia PDF Downloads 442
3990 Assessment of Work-Related Stress and Its Predictors in Ethiopian Federal Bureau of Investigation in Addis Ababa

Authors: Zelalem Markos Borko

Abstract:

Work-related stress is a reaction that occurs when work pressure becomes excessive. Unless properly managed, stress leads to high employee turnover, decreased performance, illness and absenteeism. Yet little has been addressed regarding work-related stress and its predictors in the study area. Therefore, the objective of this study was to assess the prevalence of stress and its predictors in the study area. To that effect, a cross-sectional study was conducted on 281 employees of the Ethiopian Federal Bureau of Investigation selected by stratified random sampling. Survey questionnaire scales were employed to collect data. Data were analyzed with percentages, Pearson correlation coefficients, simple linear regression, multiple linear regression, independent t-tests and one-way ANOVA. In the present study, 13.9% of participants experienced high stress, 13.5% experienced low stress, and the remaining 72.6% of officers experienced moderate stress. There was no significant group difference among workers by age, gender, marital status, educational level, years of service or police rank. This study concludes that factors such as role conflict, performance over-utilization, role ambiguity, and qualitative and quantitative role overload together explain 39.6% of the variance in work-related stress. This indicates that 60.4% of the variation in stress is explained by other factors, so additional research should be done to identify further predictors of stress. To prevent occupational stress among police, the Ethiopian Federal Bureau of Investigation should develop strategies based on these contributing factors, which will help with stress reduction management.

Keywords: work-related stress, Ethiopian federal bureau of investigation, predictors, Addis Ababa

Procedia PDF Downloads 59
3989 A Geochemical Perspective on A-Type Granites of Khanak and Devsar Areas, Haryana, India: Implications for Petrogenesis

Authors: Naresh Kumar, Radhika Sharma, A. K. Singh

Abstract:

Granites from the Khanak and Devsar areas, part of the Malani Igneous Suite (MIS), were investigated for their geochemical characteristics in order to understand the petrogenesis of the study area. Neoproterozoic rocks of the MIS are well exposed in the Jhunjhunu, Jodhpur, Pali, Barmer, Jalor and Jaisalmer districts of Rajasthan and the Bhiwani district of Haryana, and also occur in the Kirana hills of Pakistan. The MIS predominantly consists of acidic volcanics with acidic plutonics (granites of various types), mafic volcanics, mafic intrusives and a minor amount of pyroclasts. Based on field and petrographic studies, 28 samples were selected and analyzed for major, trace and rare earth elements at the Wadia Institute of Himalayan Geology, Dehradun, by X-ray fluorescence (XRF) spectrometry and inductively coupled plasma mass spectrometry (ICP-MS). Granites from the studied areas are categorized as grey, green and pink. The Khanak granites consist of quartz, K-feldspar, plagioclase and biotite as essential minerals, with hematite, zircon, annite, monazite and rutile as accessory minerals. In the Devsar granites, plagioclase is replaced by perthite, which occurs dominantly. Geochemically, granites from the Khanak and Devsar areas exhibit typical A-type granite characteristics, with enrichment in SiO2, Na2O+K2O, Fe/Mg, Rb, Zr, Y, Th, U and REE (except Eu) and significant depletion in MgO, CaO, Sr, P, Ti, Ni, Cr, V and Eu, suggesting A-type affinities in Northwestern Peninsular India. The heat production (HP) of the green and grey granites of the Devsar area reaches 9.68 and 11.70 μW m⁻³, corresponding to total heat generation units (HGU) of 23.04 and 27.86, respectively. The pink granites of the Khanak area display higher HP (16.53 μW m⁻³) and HGU (39.37) than the granites from the Devsar area. Overall, they have much higher values of HP and HGU than the average continental crust (3.8 HGU), which implies a possible linear relationship between surface heat flow and crustal heat generation in the rocks of the MIS. Chondrite-normalized REE patterns show enriched LREE, moderate to strong negative Eu anomalies and more or less flat heavy REE. In primitive mantle-normalized multi-element variation diagrams, the granites show pronounced depletions in the high-field-strength elements (HFSE) Nb, Zr, Sr, P, and Ti. The geochemical characteristics (major, trace and REE), together with various discrimination schemes, reveal that the granites probably correspond to magmas derived from a crustal source by different degrees of partial melting.
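For reference, heat production in μW m⁻³ and heat generation units are related by a constant factor (1 HGU = 10⁻¹³ cal cm⁻³ s⁻¹ ≈ 0.418 μW m⁻³), which is consistent with the paired HP/HGU values quoted above. The sketch below performs that conversion and also includes a Rybach-type radiogenic heat production estimate from U, Th and K concentrations as a hedged aside; the density and concentrations used are placeholders, not measured values from this study.

```python
# Hedged sketch: HGU <-> microW/m^3 conversion and a Rybach-type heat
# production estimate from radioelement concentrations.
HGU_IN_MICROWATT_PER_M3 = 0.4184   # 1 HGU = 1e-13 cal cm^-3 s^-1

def hp_to_hgu(hp_microwatt_m3: float) -> float:
    return hp_microwatt_m3 / HGU_IN_MICROWATT_PER_M3

def radiogenic_heat(rho_kg_m3, u_ppm, th_ppm, k_wt_pct):
    """Rybach (1988)-type estimate in microW/m^3 (U, Th in ppm; K in wt%)."""
    return 1e-5 * rho_kg_m3 * (9.52 * u_ppm + 2.56 * th_ppm + 3.48 * k_wt_pct)

# Conversion check against the HP values quoted in the abstract.
for hp in (9.68, 11.70, 16.53):
    print(f"HP = {hp:5.2f} microW/m^3  ->  HGU = {hp_to_hgu(hp):5.2f}")

# Placeholder radioelement concentrations for an illustrative A-type granite.
print(f"A ~ {radiogenic_heat(2650.0, u_ppm=8.0, th_ppm=40.0, k_wt_pct=4.0):.2f} microW/m^3")
```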

Keywords: A-type granite, neoproterozoic, Malani igneous suite, Khanak, Devsar

Procedia PDF Downloads 267
3988 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences in post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%; mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to posttest. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
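To make the two score-transformation steps concrete, a minimal sketch of linear equating (matching the mean and standard deviation of observed scores) and of IRT mean-sigma scaling (a slope/intercept transformation derived from anchor-item difficulty parameters) is shown below; the small arrays are made-up illustrations, not the study data.

```python
# Hedged sketch: linear equating of observed scores and mean-sigma scaling of
# IRT difficulty parameters (illustrative arrays, not the study data).
import numpy as np

def linear_equate(scores_new, scores_ref):
    """Map new-form scores onto the reference-form scale: match mean and SD."""
    slope = np.std(scores_ref, ddof=1) / np.std(scores_new, ddof=1)
    intercept = np.mean(scores_ref) - slope * np.mean(scores_new)
    return slope * np.asarray(scores_new) + intercept

def mean_sigma(b_anchor_new, b_anchor_ref):
    """Slope/intercept (A, B) placing new-form item difficulties on the
    reference metric: theta_ref = A * theta_new + B."""
    A = np.std(b_anchor_ref, ddof=1) / np.std(b_anchor_new, ddof=1)
    B = np.mean(b_anchor_ref) - A * np.mean(b_anchor_new)
    return A, B

form_r = np.array([12, 15, 18, 20, 22, 25])     # made-up Form R raw scores
form_q = np.array([14, 16, 19, 21, 24, 27])     # made-up Form Q raw scores
print(linear_equate(form_r, form_q))

b_new = np.array([-1.2, -0.3, 0.4, 1.1])        # anchor difficulties, new form
b_ref = np.array([-1.0, -0.1, 0.6, 1.4])        # anchor difficulties, reference form
print(mean_sigma(b_new, b_ref))
```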

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 187
3987 Identification and Antibiotic Resistance Rates of Acinetobacter baumannii Strains Isolated from the Respiratory Tract Samples, Obtained from the Different Intensive Care Units

Authors: Recep Kesli, Gulşah Asik, Cengiz Demir, Onur Turkyilmaz

Abstract:

Objective: Acinetobacter baumannii (A. baumannii) can cause healthcare-associated infections such as bacteremia, urinary tract and wound infections, endocarditis, meningitis, and pneumonia, particularly in intensive care unit patients. In this study, we aimed to evaluate A. baumannii isolates recovered from sputum and bronchoalveolar lavage specimens and their antibiotic susceptibilities over a 24-month period. Methods: A. baumannii strains isolated from respiratory tract specimens between October 2013 and September 2015 were evaluated retrospectively. The strains were isolated from patients in different intensive care units. A. baumannii strains were identified both by conventional methods and by an automated identification system, VITEK 2 (bioMérieux, Marcy l'Étoile, France). Antibiotic resistance testing was performed by the Kirby-Bauer disc diffusion method according to CLSI criteria. Results: All ninety isolates included in the study were from respiratory tract specimens. All 90 isolated A. baumannii strains (100%) were resistant to ceftriaxone, ceftazidime, ciprofloxacin and piperacillin/tazobactam, while resistance rates against the other tested antibiotics were as follows: meropenem 77 (86%), imipenem 75 (83%), trimethoprim-sulfamethoxazole (TMP-SMX) 69 (76.6%), gentamicin 51 (56.6%) and amikacin 48 (53.3%). Colistin was the most effective antibiotic against A. baumannii, with no resistant strains (0%) found. Conclusion: This study demonstrated that no resistance to colistin was found in A. baumannii, while high rates of resistance to carbapenems (imipenem and meropenem) and remarkable resistance rates to the other tested antibiotics (ceftriaxone, ceftazidime, ciprofloxacin, piperacillin-tazobactam, TMP-SMX, gentamicin and amikacin) were observed. There was a significant relationship between demographic features of the patients, such as age, undergoing mechanical ventilation and length of hospital stay, and the resistance rates. High resistance rates against antibiotics require the implementation of an infection control program and the rational use of antibiotics. In the present study, while no colistin resistance was found, pan-resistance was observed against ceftriaxone, ceftazidime, ciprofloxacin and piperacillin/tazobactam.

Keywords: acinetobacter baumannii, antibiotic resistance, multi drug resistance, intensive care unit

Procedia PDF Downloads 275
3986 A Clinical Study of Placenta Previa and Its Effect on Fetomaternal Outcome in Scarred and Unscarred Uterus at a Tertiary Care Hospital

Authors: Sharadha G., Suresh Kanakkanavar

Abstract:

Background: Placenta previa is a condition characterized by partial or complete implantation of the placenta in the lower uterine segment. It is one of the main causes of vaginal bleeding in the third trimester and a significant cause of maternal and perinatal morbidity and mortality. Materials and Methods: This is an observational study involving 130 patients diagnosed with placenta previa and satisfying the inclusion criteria. Demographic, clinical, surgical and treatment data, along with maternal and neonatal outcome parameters, were noted in a proforma. Results: The incidence of placenta previa was 1.32% in scarred uteri and 0.67% in unscarred uteri. The mean age of the study population was 27.12 ± 4.426 years. High parity, a high abortion rate, multigravida status, and lower gestational age at delivery were more common in the scarred uterus group than in the unscarred uterus group. Complete placenta previa, anterior placental position, and adherent placenta were significantly associated with a scarred uterus compared to an unscarred uterus. The rate of caesarean hysterectomy was higher in the scarred uterus group and was statistically associated with previous lower-segment caesarean sections. Intraoperative procedures like uterine artery ligation, Bakri balloon insertion, and iliac artery ligation were more frequent in the scarred group. The maternal intensive care unit admission rate was higher in the scarred group and also showed a statistical association with previous lower-segment caesarean section. Neonatal outcomes in terms of preterm birth, stillbirth, neonatal intensive care unit admission, and neonatal death, though higher in the scarred group, did not differ statistically between the groups. Conclusion: Advancing maternal age, multiparity, prior uterine surgeries, and abortions are independent risk factors for placenta previa. Maternal morbidity is higher in the scarred uterus group compared to the unscarred group, while neonatal outcomes did not differ statistically between the groups. This knowledge should help obstetricians take measures to reduce the incidence of placenta previa and scarred uteri, which would improve the fetomaternal outcome of placenta previa.

Keywords: placenta previa, scarred uterus, unscarred uterus, adherent placenta

Procedia PDF Downloads 46
3985 Factors That Determine International Competitiveness of Agricultural Products in Latin America 1990-2020

Authors: Oluwasefunmi Eunice Irewole, Enrique Armas Arévalos

Abstract:

Agriculture has played a crucial role in the economy and the development of many countries. Moreover, the basic needs for human survival (food, shelter, and clothing) are linked to agricultural production. Most developed countries find that agriculture provides them with food and raw materials for different goods (such as shelter, medicine, fuel and clothing), which has led to an increase in incomes, livelihoods and standards of living. This study aimed at analysing the relationship between the international competitiveness of agricultural products and area, fertilizer, labour force, economic growth, foreign direct investment, exchange rate and inflation rate in Latin America during the period 1991-2019. Panel data econometric methods were used, including tests for cross-section dependence (Pesaran test), unit roots (cross-sectionally augmented Dickey-Fuller and cross-sectional Im, Pesaran, and Shin tests), cointegration (Pedroni and Fisher-Johansen tests), and heterogeneous causality (Hurlin and Dumitrescu test). The results reveal that the model has cross-sectional dependency and that the series are integrated of order one, I(1). The fully modified OLS and dynamic OLS estimators were used to examine the existence of a long-term relationship, and a long-term relationship was found between the selected variables. The study revealed a positive significant relationship between the international competitiveness of agricultural raw materials and area, fertilizer, labour force, economic growth, and foreign direct investment, while international competitiveness has a negative relationship with the exchange rate and inflation. The economic policy recommendation deduced from this investigation is that foreign direct investment and the labour force contribute positively to increasing the international competitiveness of agricultural products.

Keywords: revealed comparative advantage, agricultural products, area, fertilizer, economic growth, granger causality, panel unit root

Procedia PDF Downloads 92
3984 Trajectory Optimization of Re-Entry Vehicle Using Evolutionary Algorithm

Authors: Muhammad Umar Kiani, Muhammad Shahbaz

Abstract:

The performance of any vehicle can be predicted through its design/modeling and optimization, and design optimization leads to efficient performance. Following a horizontal launch, the air-launched re-entry vehicle undergoes a launch maneuver by introducing a carefully selected angle of attack profile; this angle of attack profile is the basic element for completing a specified mission. The flight program of the vehicle is optimized under constraints on the maximum allowed angle of attack and on the lateral and axial loads, with the objective of reaching maximum altitude. The main focus of this study is the endo-atmospheric phase of the ascent trajectory. A three-degrees-of-freedom trajectory model is simulated in MATLAB. The optimization process uses an evolutionary algorithm because of its robustness and its capacity to explore the design space efficiently in search of the global optimum. Evolutionary-algorithm-based trajectory optimization also offers the added benefit of being a generalized method that works with continuous, discontinuous, linear, and non-linear performance metrics, and it eliminates the requirement of a starting solution. Optimization is particularly beneficial for achieving maximum advantage without increasing the computational cost or affecting the output of the system. In the case of launch vehicles, we are especially keen to achieve maximum performance and efficiency under different constraints. In a launch vehicle, the flight program is the prescribed variation of the vehicle pitch angle during the flight, which has a substantial influence on the reachable altitude, the accuracy of orbit insertion, and the aerodynamic loading. The results reveal that the angle of attack profile significantly affects the performance of the vehicle.
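A toy version of this optimization loop is sketched below: a pitch profile parameterized at a few nodes is optimized with differential evolution (an evolutionary method) to maximize the apogee of a crude 2-D point-mass ascent. All vehicle parameters are hypothetical placeholders, the dynamics are far simpler than the paper's 3-DOF MATLAB model, and with a pure altitude objective the optimum naturally drifts toward the pitch bound; real missions add downrange or insertion terms.

```python
# Hedged sketch: evolutionary optimization of a node-parameterized pitch profile
# for a toy 2-D point-mass ascent (illustrative physics and parameters only).
import numpy as np
from scipy.optimize import differential_evolution

G0, DT, T_BURN = 9.81, 0.5, 120.0          # gravity, time step, burn time [s]
THRUST, MASS0, MDOT = 60e3, 3000.0, 15.0   # placeholder vehicle parameters
CD_A, RHO0, HSCALE = 1.2, 1.225, 8500.0    # drag area, sea-level density, scale height
PITCH_MAX = np.radians(80.0)               # constraint on commanded pitch

def final_altitude(pitch_nodes):
    """Integrate a simple point mass with pitch interpolated between nodes."""
    times = np.linspace(0.0, T_BURN, len(pitch_nodes))
    x = y = vx = vy = 0.0
    m = MASS0
    for t in np.arange(0.0, T_BURN, DT):
        pitch = np.interp(t, times, pitch_nodes)
        v = np.hypot(vx, vy)
        rho = RHO0 * np.exp(-max(y, 0.0) / HSCALE)
        drag = 0.5 * rho * CD_A * v
        ax = (THRUST * np.cos(pitch) - drag * vx) / m
        ay = (THRUST * np.sin(pitch) - drag * vy) / m - G0
        vx, vy = vx + ax * DT, vy + ay * DT
        x, y = x + vx * DT, y + vy * DT
        m -= MDOT * DT
    # Coast upward after burnout (vacuum approximation for the apogee estimate).
    return y + max(vy, 0.0) ** 2 / (2.0 * G0)

def objective(pitch_nodes):
    return -final_altitude(pitch_nodes)     # maximize altitude

bounds = [(0.0, PITCH_MAX)] * 6             # 6 pitch nodes over the burn
result = differential_evolution(objective, bounds, seed=0, maxiter=60, tol=1e-6)
print(np.degrees(result.x), -result.fun)
```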

Keywords: endo-atmospheric, evolutionary algorithm, efficient performance, optimization process

Procedia PDF Downloads 399
3983 Foreign Literature at the Lessons of Individual Reading: Contemporary Methods of Phraseological Units Teaching

Authors: Diana Davletbaeva, Elena Pankratova

Abstract:

This article addresses some current questions about the use of foreign literature in the process of teaching phraseological units in schools. It establishes the various advantages of literary reading in lessons of individual reading and gives some core points for arranging and organizing such work. The article also touches upon essential points concerning the successful mastering of phraseological units and the improvement of students' knowledge in the sphere of phraseology.

Keywords: foreign languages teaching, literary read, individual reading, phraseological unit, complex of exercises

Procedia PDF Downloads 370
3982 Electricity Consumption and Economic Growth: The Case of Mexico

Authors: Mario Gómez, José Carlos Rodríguez

Abstract:

The causal relationship between energy consumption and economic growth has been an important issue in the economic literature. This paper studies the causal relationship between electricity consumption and economic growth in Mexico for the period 1971-2011. In doing so, unit root tests and causality tests are applied. The results show that the series are stationary in levels and that there is causality running from economic growth to energy consumption. Energy conservation policies thus have little or no impact on economic growth in Mexico.
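A minimal sketch of this test sequence with statsmodels is shown below; the CSV file and its "electricity" and "gdp" columns are assumed placeholders, not the authors' dataset.

```python
# Hedged sketch: unit root (ADF) and Granger causality tests with statsmodels,
# using assumed column names for annual electricity-use and GDP series.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

df = pd.read_csv("mexico_1971_2011.csv")     # assumed file with 'electricity', 'gdp'

for col in ("electricity", "gdp"):
    stat, pval, *_ = adfuller(df[col], autolag="AIC")
    print(f"ADF {col:12s} stat = {stat:.2f}, p = {pval:.3f}")

# H0: the series in the SECOND column does not Granger-cause the first column.
print("\nDoes GDP Granger-cause electricity consumption?")
grangercausalitytests(df[["electricity", "gdp"]], maxlag=2)

print("\nDoes electricity consumption Granger-cause GDP?")
grangercausalitytests(df[["gdp", "electricity"]], maxlag=2)
```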

Keywords: causality, economic growth, energy consumption, Mexico

Procedia PDF Downloads 838
3981 The Aesthetics of Time in Thus Spoke Zarathustra: A Reappraisal of the Eternal Recurrence of the Same

Authors: Melanie Tang

Abstract:

According to Nietzsche, the eternal recurrence is his most important idea. However, it is perhaps his most cryptic and difficult to interpret. Early readings considered it as a cosmological hypothesis about the cyclic nature of time. However, following Nehamas’s ‘Life as Literature’ (1985), it has become a widespread interpretation that the eternal recurrence never really had any theoretical dimensions, and is not actually a philosophy of time, but a practical thought experiment intended to measure the extent to which we have mastered and perfected our lives. This paper endeavours to challenge this line of thought becoming scholarly consensus, and to carry out a more complex analysis of the eternal recurrence as it is presented in Thus Spoke Zarathustra. In its wider scope, this research proposes that Thus Spoke Zarathustra — as opposed to The Birth of Tragedy — be taken as the primary source for a study of Nietzsche’s Aesthetics, due to its more intrinsic aesthetic qualities and expressive devices. The eternal recurrence is the central philosophy of a work that communicates its ideas in unprecedentedly experimental and aesthetic terms, and a more in-depth understanding of why Nietzsche chooses to present his conception of time in aesthetic terms is warranted. Through hermeneutical analysis of Thus Spoke Zarathustra and engagement with secondary sources such as those by Nehamas, Karl Löwith, and Jill Marsden, the present analysis challenges the ethics of self-perfection upon which current interpretations of the recurrence are based, as well as their reliance upon a linear conception of time. Instead, it finds the recurrence to be a cyclic interplay between the self and the world, rather than a metric pertaining solely to the self. In this interpretation, time is found to be composed of an intertemporal rather than linear multitude of will to power, which structures itself through tensional cycles into an experience of circular time that can be seen to have aesthetic dimensions. In putting forth this understanding of the eternal recurrence, this research hopes to reopen debate on this key concept in the field of Nietzsche studies.

Keywords: Nietzsche, eternal recurrence, Zarathustra, aesthetics, time

Procedia PDF Downloads 141
3980 Oestrogen Replacement In Post-Oophorectomy Women

Authors: Joana Gato, Ahmed Abotabekh, Panayoti Bachkangi

Abstract:

Introduction: Oestrogen is an essential gonadal hormone that plays a vital role in the reproductive system of women [1]. The average age of menopause in the UK is 51 [2]. Women who go through premature menopause should be offered hormone replacement therapy (HRT). Similarly, women who undergo surgical menopause should be offered HRT, unless contraindicated, depending on the indication for their surgery [2,3]. Aim: To assess whether patients in our department are counselled regarding HRT after surgical treatment and whether HRT is prescribed. Methodology: A retrospective audit in a busy district hospital, examining all patients who had a hysterectomy. The audit examined whether HRT was discussed pre-operatively, whether it was prescribed on discharge and whether a follow-up was arranged. For women with a contraindication to HRT, the audit assessed whether the reasons were discussed pre-operatively and communicated. Inclusion criteria: women having a total or subtotal hysterectomy, with or without bilateral salpingo-oophorectomy (BSO), between April and September 2022. Exclusion criteria: women having a vaginal hysterectomy. Results: 40 patients in total had a hysterectomy; 27 (68%) were under the age of 51. 15 of the 27 patients had a BSO. 9 women were prescribed HRT; 8 of them were offered HRT immediately, and 1 was offered a follow-up. Of the women who underwent surgical menopause, 7 were not given any HRT. The HRT choices were diverse, but the majority were prescribed oral HRT. 40% of women undergoing surgical menopause did not have a discussion about HRT prior to their surgery. Among postmenopausal women (n=13; 33%), two were given HRT for pre-existing menopausal symptoms. Discussion: Only 59% of the pre-menopausal patients had an oophorectomy, therefore undergoing surgical menopause. Of these, 44% were not given any HRT, and 40% had no discussion about HRT prior to surgery; interestingly, the majority of these women had no obvious contraindication to HRT. The choice of HRT was diverse, but the majority were commenced on oral HRT. Our unit is still working towards meeting all the NICE guidance standards of offering HRT and information prior to surgery to women planning to undergo surgical menopause. Conclusion: Starting HRT at the onset of menopause has been shown to improve quality of life and reduce the risk of cardiovascular disease and osteoporotic fractures [4]. Our unit still has scope for improvement to comply with the current NICE guidance. All pre-menopausal women undergoing surgical menopause should have a discussion regarding HRT prior to surgery and be offered it if there are no contraindications, and this discussion should be clearly documented in the notes. At the time of this report, some of the patients had not yet had a follow-up, which we recognize as a limitation of our audit.

Keywords: hormone replacement therapy, menopause, premature ovarian insufficiency, surgical management

Procedia PDF Downloads 94
3979 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as a statistical method in ground motion prediction, such as Artificial Neural Network, Random Forest, and Support Vector Machine. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing this variability as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The main reason for selecting this database is the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics such as magnitude scaling and distance dependency without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and particularly, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available.
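
As a rough illustration of the comparison described above, the following Python sketch fits a linear regression and a random forest to synthetic ground-motion data and compares their test-set accuracy. The feature set (magnitude, log hypocentral distance, Vs30), the synthetic attenuation relation, and all constants are assumptions for illustration only; this is not the authors' database, their random-effects treatment, or their tuned models.

```python
# Minimal sketch comparing a linear model and a random forest for ground-motion
# intensity prediction. Synthetic data stand in for the real database; the
# feature names and the "true" attenuation relation below are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
mag = rng.uniform(3.0, 5.8, n)       # moment magnitude
r_hyp = rng.uniform(4.0, 500.0, n)   # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)  # site condition proxy, m/s

# Synthetic "true" model: intensity grows with magnitude, decays with distance,
# with random scatter standing in for aleatory variability.
ln_pga = (1.2 * mag - 1.6 * np.log(r_hyp)
          - 0.3 * np.log(vs30 / 760.0) + rng.normal(0.0, 0.5, n))

X = np.column_stack([mag, np.log(r_hyp), np.log(vs30)])
X_train, X_test, y_train, y_test = train_test_split(X, ln_pga, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    model.fit(X_train, y_train)
    print(f"{name}: R^2 = {r2_score(y_test, model.predict(X_test)):.3f}")
```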

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 118
3978 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure

Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu

Abstract:

In considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the setting of the initial damping model influences the response, and in the nonlinear region, the combination of the initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and hysteretic model settings on the dynamic characteristics of a structure. For the initial damping model, initial stiffness proportional, tangent stiffness proportional, and Rayleigh-type damping were used. For the hysteretic model, the Takeda model and the normal-trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model, and the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with a 30×30 m plan on each floor was assumed. The story height was 3 m and the maximum height was 18 m. The unit weight of each floor was 1.0 t/m². The building natural period was set to 0.36 sec, and the initial stiffness of each floor was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated the difference in the dynamic characteristics depending on the initial damping model setting. With the increase in the maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased, and the 1st damping ratio increased. The difference in natural frequency due to the initial damping model setting was small, but a significant difference was observed in the damping ratio (initial stiffness proportional ≒ Rayleigh type > tangent stiffness proportional). The acceleration and displacement of the earthquake response were largest for the tangent stiffness proportional model. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated the difference in the dynamic characteristics depending on the hysteretic model setting. With the increase in the maximum acceleration of the input earthquake motions, the natural frequency decreased in the Takeda model, but in the normal-trilinear model the natural frequency did not change. The damping ratio increased in both models, although it was higher in the Takeda model than in the normal-trilinear model. In conclusion, for the initial damping model setting, the tangent stiffness proportional model was evaluated most favourably, and for the hysteretic model setting, the Takeda model was considered more appropriate than the normal-trilinear model in the nonlinear region. Our results would provide a useful indicator for dynamic design.
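
For readers unfamiliar with how these damping models differ, the short Python sketch below evaluates the modal damping ratios implied by stiffness-proportional and Rayleigh damping, using the standard relations ζ(ω) = βω/2 and ζ(ω) = α/(2ω) + βω/2. The 0.36 sec fundamental period is taken from the abstract; the 5% target ratio and the assumed second-mode frequency are illustrative values, and the sketch does not reproduce the nonlinear time-history analysis or the ARX identification used in the study.

```python
# Sketch of how stiffness-proportional and Rayleigh damping distribute modal
# damping across frequencies. Target ratio and 2nd-mode frequency are assumed.
import numpy as np

zeta_target = 0.05            # assumed target damping ratio (5%)
w1 = 2.0 * np.pi / 0.36       # 1st circular frequency from T1 = 0.36 sec
w2 = 3.0 * w1                 # assumed 2nd circular frequency

# Stiffness-proportional damping (C = beta*K): zeta grows linearly with frequency.
# For the tangent-stiffness-proportional variant, the same beta multiplies the
# tangent stiffness, so the damping force drops once the structure yields.
beta_k = 2.0 * zeta_target / w1

# Rayleigh damping (C = alpha*M + beta*K) matched to zeta_target at w1 and w2.
alpha = 2.0 * zeta_target * w1 * w2 / (w1 + w2)
beta = 2.0 * zeta_target / (w1 + w2)

for w in (w1, w2, 5.0 * w1):
    zeta_stiff = beta_k * w / 2.0
    zeta_rayleigh = alpha / (2.0 * w) + beta * w / 2.0
    print(f"w/w1 = {w / w1:3.1f}: stiffness-prop. {zeta_stiff:.3f}, "
          f"Rayleigh {zeta_rayleigh:.3f}")
```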

Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency

Procedia PDF Downloads 171
3977 Free Vibration Analysis of Timoshenko Beams at Higher Modes with Central Concentrated Mass Using Coupled Displacement Field Method

Authors: K. Meera Saheb, K. Krishna Bhaskar

Abstract:

Complex structures used in many fields of engineering are made up of simple structural elements like beams, plates, etc. These structural elements sometimes carry concentrated masses at discrete points, and when subjected to severe dynamic environments they tend to vibrate with large amplitudes. The frequency-amplitude relationship is essential in determining the response of these structural elements subjected to dynamic loads. For Timoshenko beams, the effects of shear deformation and rotary inertia are to be considered to evaluate the fundamental linear and nonlinear frequencies. A commonly used method for solving vibration problems is the energy method, or a finite element analogue of it. In the present Coupled Displacement Field method, the number of undetermined coefficients is reduced to half of that in the well-known Rayleigh-Ritz method, which significantly simplifies the procedure for solving the vibration problem. This is accomplished by using a coupling equation derived from the static equilibrium of the shear flexible structural element. The prime objective of the present paper is to study, in detail, the effect of a central concentrated mass on the large amplitude free vibrations of uniform shear flexible beams. Accurate closed-form expressions for the linear frequency parameter of uniform shear flexible beams with a central concentrated mass were developed, and the results are presented in digital form.
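
To give a feel for how a central concentrated mass lowers the fundamental frequency, the sketch below uses the classical Euler-Bernoulli (shear-rigid) effective-mass approximation for a simply supported beam, with midspan stiffness 48EI/L³ and an effective beam mass of 17/35 of the total. This is only a rough benchmark under assumed section and mass values; it is not the paper's coupled displacement field formulation, which additionally accounts for shear deformation and rotary inertia.

```python
# Hedged illustration only: Rayleigh effective-mass estimate of the fundamental
# frequency of a simply supported Euler-Bernoulli beam with a central mass.
# All numerical values are assumptions chosen for the example.
import math

E = 210e9          # Young's modulus, Pa (assumed steel)
I = 8.0e-6         # second moment of area, m^4 (assumed)
L = 4.0            # span, m (assumed)
m_per_len = 60.0   # beam mass per unit length, kg/m (assumed)
M_central = 200.0  # central concentrated mass, kg (assumed)

k_mid = 48.0 * E * I / L**3               # midspan stiffness of the beam
m_eff = (17.0 / 35.0) * m_per_len * L     # effective beam mass at midspan
omega = math.sqrt(k_mid / (M_central + m_eff))   # fundamental frequency, rad/s

print(f"fundamental frequency ~ {omega / (2.0 * math.pi):.1f} Hz")
```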

Keywords: coupled displacement field, coupling equation, large amplitude vibrations, moderately thick plates

Procedia PDF Downloads 220
3976 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach

Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton

Abstract:

Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore, tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation will focus on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted by considering the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for competition from neighbouring trees in different ways. The study also investigated how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, Pearson correlation coefficients were examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected for E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effect modelling approaches. Individual tree growth models were developed for thinned and unthinned stands separately. The developed models were validated using two approaches. In the first approach, models were validated using a subset of data that was not used in model fitting. In the second approach, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth. This prediction was improved when competition was included in the model. The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices were validated as more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number), or competitors within ~10 m (by distance) of the base of the subject tree, are included in the model, which can reduce the time and cost of collecting information about the competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies in larger plots and with a larger number of plots than those used in the present study are recommended.
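
As an illustration of the kind of spatial competition index and growth regression described above, the sketch below computes Hegyi's distance-weighted index on synthetic stem-mapped data and regresses dbh growth on initial size and competition. Hegyi's index, the 10 m search radius, and all data values are assumptions; the abstract does not state which spatial index performed best, and the actual study used mixed-effects models fitted to measured plots.

```python
# Sketch: a distance-dependent competition index (Hegyi) plus a simple growth
# regression, on synthetic stem-mapped data. Illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)    # stem coordinates, m
dbh = rng.uniform(10, 60, n)                              # diameter, cm
height = 1.3 + 2.0 * np.sqrt(dbh) + rng.normal(0, 1, n)   # rough height, m

def hegyi(i, radius=10.0):
    """Hegyi CI: sum of (dbh_j / dbh_i) / distance_ij over neighbours in radius."""
    d = np.hypot(x - x[i], y - y[i])
    mask = (d > 0) & (d <= radius)
    return np.sum((dbh[mask] / dbh[i]) / d[mask])

ci = np.array([hegyi(i) for i in range(n)])

# Synthetic dbh growth: larger trees grow more, competition reduces growth.
growth = 0.08 * dbh - 1.5 * ci + rng.normal(0, 0.5, n)

X = np.column_stack([dbh, height, ci])
model = LinearRegression().fit(X, growth)
print("R^2 =", round(model.score(X, growth), 3))
print("effect of CI on growth (cm per unit CI):", round(model.coef_[2], 3))
```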

Keywords: competition, growth, model, thinning

Procedia PDF Downloads 116