Search results for: equivalent circuit models
6017 Basic Modal Displacements (BMD) for Optimizing the Buildings Subjected to Earthquakes
Authors: Seyed Sadegh Naseralavi, Mohsen Khatibinia
Abstract:
In structural optimization through meta-heuristic algorithms, structural analyses are performed many times. For this reason, performing the analyses in a time-saving way is valuable. This point is even more important for time-history analyses, which are time-consuming. To this aim, peak-picking methods, also known as spectrum analyses, are generally utilized. However, such methods do not have the required accuracy, whether performed with the square root of sum of squares (SRSS) or the complete quadratic combination (CQC) rule. This paper presents an efficient technique for evaluating the dynamic responses during the optimization process with high speed and accuracy. In the method, an initial design is first obtained using a static equivalent of the earthquake. Then, the displacements in the modal coordinates are computed; these displacements are herein called basic modal displacements (BMD). For each new design of the structure, the responses can be derived by suitably scaling each BMD in time and amplitude and superposing them using the corresponding modal matrices. To illustrate the efficiency of the method, an optimization problem is studied. The results show that the proposed approach is a suitable replacement for conventional time-history and spectrum analyses in such problems.
Keywords: basic modal displacements, earthquake, optimization, spectrum
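The scaling-and-superposition step the abstract describes can be sketched roughly as follows (a minimal illustration, not the authors' code; the function and parameter names, and the choice of linear interpolation for the time stretch, are our assumptions):

```python
import numpy as np

def superpose_bmd(Phi, bmd, amp_scale, time_scale, t_query):
    """Approximate the nodal displacement history of a redesigned structure
    by scaling stored basic modal displacements (BMD) in amplitude and
    time, then superposing them through the mode-shape matrix Phi.

    Phi        : (n_dof, n_modes) mode shapes of the new design
    bmd        : (n_modes, n_steps) stored modal displacement histories
    amp_scale  : (n_modes,) amplitude scale factors between designs
    time_scale : (n_modes,) time-axis stretch factors (e.g. period ratios)
    t_query    : (n_q,) times at which the response is required
    """
    n_modes, n_steps = bmd.shape
    t_base = np.arange(n_steps, dtype=float)   # stored time axis
    q = np.empty((n_modes, len(t_query)))
    for i in range(n_modes):
        # stretch the stored history along time, then scale its amplitude
        q[i] = amp_scale[i] * np.interp(t_query / time_scale[i],
                                        t_base, bmd[i])
    return Phi @ q                             # (n_dof, n_q)

# toy check: one mode with a unit shape reproduces a scaled copy
Phi = np.array([[1.0]])
bmd = np.sin(np.linspace(0, 2 * np.pi, 101))[None, :]
u = superpose_bmd(Phi, bmd, amp_scale=np.array([2.0]),
                  time_scale=np.array([1.0]),
                  t_query=np.arange(101, dtype=float))
```

The point of the method is that `superpose_bmd` replaces a full time-history re-analysis for every candidate design inside the optimization loop.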
Procedia PDF Downloads 361
6016 DISGAN: Efficient Generative Adversarial Network-Based Method for Cyber-Intrusion Detection
Authors: Hongyu Chen, Li Jiang
Abstract:
Ubiquitous anomalies endanger the security of our systems constantly. They may bring irreversible damage to the system and cause leakage of privacy. Thus, it is of vital importance to detect these anomalies promptly. Traditional supervised methods such as Decision Trees and Support Vector Machines (SVM) are used to classify normality and abnormality. However, in some cases, abnormal instances are far rarer than normal ones, which leads to decision bias in these methods. The generative adversarial network (GAN) has been proposed to handle this case. With its strong generative ability, it only needs to learn the distribution of normal instances, and it identifies abnormal ones through the gap between them and the learned distribution. Nevertheless, existing GAN-based models are not suitable for processing data with discrete values, leading to immense degradation of detection performance. To cope with discrete features, in this paper, we propose an efficient GAN-based model with a specifically designed loss function. Experimental results show that our model outperforms state-of-the-art models on discrete datasets and remarkably reduces the overhead.
Keywords: GAN, discrete feature, Wasserstein distance, multiple intermediate layers
Procedia PDF Downloads 129
6015 Micromechanical Modelling of Ductile Damage with a Cohesive-Volumetric Approach
Authors: Noe Brice Nkoumbou Kaptchouang, Pierre-Guy Vincent, Yann Monerie
Abstract:
The present work addresses the modelling and simulation of crack initiation and propagation in ductile materials that fail by void nucleation, growth, and coalescence. One current research framework for crack propagation is the cohesive-volumetric approach, where crack growth is modelled as the decohesion of two surfaces in a continuum material. In this framework, the material behavior is characterized by two constitutive relations: a volumetric constitutive law relating stress and strain, and a traction-separation law across a two-dimensional surface embedded in the three-dimensional continuum. Several cohesive models have been proposed for the simulation of crack growth in brittle materials. On the other hand, the application of cohesive models to crack growth in ductile materials is still a relatively open field. One idea developed in the literature is to identify the traction-separation law for ductile materials based on the behavior of a continuously deforming unit cell failing by void growth and coalescence. Following this method, the present study proposes a semi-analytical cohesive model for ductile materials based on a micromechanical approach. The strain localization band prior to ductile failure is modelled as a cohesive band, and the Gurson-Tvergaard-Needleman (GTN) plasticity model is used to describe the behavior of the cohesive band and to derive a corresponding traction-separation law. The numerical implementation of the model uses the non-smooth contact dynamics method (NSCD), where cohesive models are introduced as mixed boundary conditions between volumetric finite elements. The present approach is applied to the simulation of crack growth in nuclear ferritic steel.
The model provides an alternative way to simulate crack propagation, combining the numerical efficiency of a cohesive model with a traction-separation law directly derived from a porous continuum model.
Keywords: ductile failure, cohesive model, GTN model, numerical simulation
Procedia PDF Downloads 149
6014 High-Quality Flavor of Black Belly Pork under Lightning Corona Discharge Using Tesla Coil for High Voltage Education
Authors: Kyung-Hoon Jang, Jae-Hyo Park, Kwang-Yeop Jang, Dongjin Kim
Abstract:
The Tesla coil is an electrical resonant transformer circuit designed by the inventor Nikola Tesla in 1891. It is used to produce high-voltage, low-current, high-frequency alternating-current electricity. Tesla experimented with a number of different configurations consisting of two or sometimes three coupled resonant electric circuits. This paper focuses on applying a Tesla coil to cuisine for high-quality flavor and taste conditioning under 50 kV corona discharge, as well as on high-voltage education. The results revealed that black belly pork is roasted faster by the Tesla coil than by conventional methods such as a hot grill or steel plate, depending on the applied voltage level and application time. Moreover, carbohydrate and crude protein contents increased, whereas sodium and saccharides significantly decreased after the lightning surge from the Tesla coil. This idea will be useful in high-voltage education and high-voltage applications.
Keywords: corona discharge, Tesla coil, high voltage application, high voltage education
Procedia PDF Downloads 328
6013 Natural Radioactivity in Foods Consumed in Turkey
Authors: E. Kam, G. Karahan, H. Aslıyuksek, A. Bozkurt
Abstract:
This study aims to determine the natural radioactivity levels in some foodstuffs produced in Turkey. For this purpose, 48 different food samples were collected from different land parcels throughout the country. All samples were analyzed to determine both gross alpha and gross beta radioactivities and the radionuclide concentrations. The gross alpha radioactivity was measured as below 1 Bq kg-1 in most of the samples, some falling below the detection limit of the counting system. The gross beta radioactivity levels ranged from 1.8 Bq kg-1 to 453 Bq kg-1, with larger levels observed in leguminous seeds and the highest level in haricot beans. The concentrations of natural radionuclides in the foodstuffs were investigated by gamma spectroscopy. High levels of 40K were measured in all the samples, the highest activities again being in leguminous seeds. Low concentrations of 238U and 226Ra were found in some of the samples, comparable to results reported in the literature. Based on the activity concentrations obtained in this study, the average annual effective dose equivalents for the radionuclides 226Ra, 238U, and 40K were calculated as 77.416 µSv y-1, 0.978 µSv y-1, and 140.55 µSv y-1, respectively.
Keywords: foods, radioactivity, gross alpha, gross beta, annual equivalent dose, Turkey
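The annual effective dose figures quoted above come from a standard ingestion-dose relation: activity concentration times annual intake times a per-nuclide dose coefficient. A minimal sketch (the dose coefficients are standard ICRP adult ingestion values, but the intake and concentrations below are illustrative, not the study's data):

```python
# Annual committed effective dose from ingested radionuclides:
#   E = sum_i C_i * I * DCF_i
# C_i  : activity concentration in the food (Bq/kg)
# I    : annual food intake (kg/y)
# DCF_i: ingestion dose coefficient (Sv/Bq)
DCF = {"Ra-226": 2.8e-7, "U-238": 4.5e-8, "K-40": 6.2e-9}  # Sv/Bq, adults

def annual_dose_uSv(concentrations_Bq_per_kg, intake_kg_per_y):
    """Return the committed effective dose in microsieverts per year."""
    return sum(c * intake_kg_per_y * DCF[nuc] * 1e6        # Sv -> µSv
               for nuc, c in concentrations_Bq_per_kg.items())

# illustrative concentrations and a 100 kg/y intake
dose = annual_dose_uSv({"Ra-226": 1.0, "U-238": 0.5, "K-40": 300.0},
                       intake_kg_per_y=100.0)
```

With these inputs the contributions are 28, 2.25, and 186 µSv/y respectively, which shows why 40K dominates despite its small dose coefficient: its activity concentration in food is orders of magnitude higher.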
Procedia PDF Downloads 454
6012 Considering Climate Change in Food Security: A Sociological Study Investigating the Modern Agricultural Practices and Food Security in Bangladesh
Authors: Hosen Tilat Mahal, Monir Hossain
Abstract:
Despite being a food-sufficient country after revolutionary changes in agricultural inputs, Bangladesh still has food insecurity and undernutrition. This study examines the association between agricultural practices (as social practices) and food security, concentrating on the potential impact of sociodemographic factors and climate change. Using data from the 2012 Bangladesh Integrated Household Survey (BIHS), this study shows how modified agricultural practices are strongly associated with climate change, and how different sociodemographic factors (land ownership, religion, gender, education, and occupation) subsequently affect the status of food security in Bangladesh. We used linear and logistic regression models to analyze the association between modified agricultural practices and food security. The findings indicate that socioeconomic statuses are significant predictors of agricultural practices in a society like Bangladesh and control food security at the household level. Moreover, climate change is adversely impacting even the association between modified agricultural practices and food security. We conclude that agricultural practices must consider climate change while boosting food security. Therefore, future research should integrate climate change into agriculture- and food-related mitigation and resiliency models.
Keywords: food security, agricultural productivity, climate change, Bangladesh
Procedia PDF Downloads 123
6011 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers
Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha
Abstract:
Elastomers are polymeric materials with varied backbone architectures, ranging from linear to dendrimeric structures, and a wide variety of monomeric repeat units. These elastomers are strongly viscous and only weakly elastic when not cross-linked. When crosslinked, depending on the extent of crosslinking, their properties can range from highly flexible to highly stiff. Lightly cross-linked systems are well studied and reported. Understanding the nature of highly cross-linked rubbers in terms of chemical structure and architecture is critical for a variety of applications. One of the critical parameters is the cross-link density. In the current work, we have studied the highly cross-linked state of linear, lightly branched, and star-shaped branched elastomers and determined the cross-link density using different models. Changes in hardness, shift in Tg, change in modulus, and swelling behavior were measured experimentally as a function of the extent of curing. These properties were analyzed using various models to determine the cross-link density. We used hardness measurements to examine cure time, and the relationship of hardness to the extent of curing was determined. It is well known that micromechanical transitions like Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different crosslinked states was determined by DMA, and based on the plateau modulus the crosslink density was estimated using Nielsen's model. Usually, for lightly crosslinked systems, the cross-link density is estimated from the equilibrium swelling ratio in a solvent using the Flory-Rehner model. For highly crosslinked systems, the Flory-Rehner model is not valid because of the smaller chain length. Therefore, models based on the assumption of the polymer as a non-Gaussian chain, such as 1) the Helmis-Heinrich-Straube (HHS) model, 2) the model of Gusler and Cohen, and 3) the model of Barr-Howell and Peppas, are used for estimating the crosslink density.
In this work, correction factors to the existing models are determined, and based on these the structure-property relationship of highly crosslinked elastomers is studied.
Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer
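For the lightly crosslinked baseline mentioned above, the Flory-Rehner estimate from swelling data can be sketched as follows (a minimal illustration; the solvent parameters and the test value of the polymer volume fraction are our assumptions, not the study's data):

```python
import math

def crosslink_density_flory_rehner(v2, chi, V1):
    """Crosslink density (mol of network chains per cm^3) from the
    equilibrium polymer volume fraction v2 in a swollen gel, using the
    classical Flory-Rehner relation (valid for lightly crosslinked,
    Gaussian networks -- the regime the abstract contrasts with):

        n = -[ln(1 - v2) + v2 + chi * v2**2]
            / [V1 * (v2**(1/3) - v2 / 2)]

    chi : polymer-solvent interaction parameter
    V1  : molar volume of the solvent (cm^3/mol)
    """
    num = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    den = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return num / den

# illustrative numbers: toluene-like solvent (V1 ~ 106 cm^3/mol), chi ~ 0.39
n = crosslink_density_flory_rehner(v2=0.25, chi=0.39, V1=106.4)
```

The non-Gaussian models named in the abstract replace this relation when the network chains become too short for Gaussian statistics; the correction factors the work derives quantify that departure.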
Procedia PDF Downloads 165
6010 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has been an effective tool for the seismic vulnerability assessment of civil structures in the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm, and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time-history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g are taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the earlier PGA ranges for the first three damage states; they then show a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis
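A fragility curve of the kind described above is commonly modelled as a lognormal distribution of capacity in PGA. A minimal sketch (the median and dispersion values are illustrative placeholders; the study derives its curves from the nonlinear time-history demand-to-capacity ratios):

```python
import math

def fragility(pga, theta, beta):
    """Lognormal fragility curve: probability of reaching a damage state
    given peak ground acceleration `pga` (in g). `theta` is the median
    capacity (g) and `beta` the lognormal standard deviation."""
    z = math.log(pga / theta) / beta
    # standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# evaluate at three PGA levels with illustrative parameters
curve = [fragility(pga, theta=0.45, beta=0.5) for pga in (0.1, 0.45, 1.0)]
```

By construction the curve passes through probability 0.5 at the median capacity and rises monotonically with PGA, which is the shape being compared across the five bridge models in the abstract.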
Procedia PDF Downloads 435
6009 Predictive Maintenance of Electrical Induction Motors Using Machine Learning
Authors: Muhammad Bilal, Adil Ahmed
Abstract:
This study proposes an approach to the predictive maintenance of electrical induction motors utilizing machine learning algorithms. The goal is to predict motor failures on the basis of temperature data obtained from sensors placed on the motor. The proposed models are trained to identify whether a motor is defective by utilizing machine learning algorithms such as Support Vector Machines (SVM) and K-Nearest Neighbors (KNN). According to a thorough study of the literature, earlier research has used motor current signature analysis (MCSA) and vibration data to forecast motor failures. The temperature-signal methodology, which has clear advantages over the conventional MCSA and vibration analysis methods in terms of cost-effectiveness, is the main subject of this research. The acquired results emphasize the applicability and effectiveness of the temperature-based predictive maintenance strategy by demonstrating the successful classification of defective motors with the suggested machine learning models.
Keywords: predictive maintenance, electrical induction motors, machine learning, temperature signal methodology, motor failures
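The KNN side of such a classifier can be sketched in a few lines (a toy illustration only: the synthetic data, the two temperature features, and the cluster means are our assumptions, not the study's dataset):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Minimal k-nearest-neighbours classifier: label a motor faulty (1)
    or healthy (0) from temperature-derived features by majority vote
    among the k closest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]      # labels of k closest
    return int(np.round(nearest.mean()))      # majority vote (binary)

rng = np.random.default_rng(0)
# hypothetical features: [mean temperature (C), temperature rise rate]
healthy = rng.normal([60.0, 0.5], 2.0, size=(20, 2))
faulty  = rng.normal([85.0, 3.0], 2.0, size=(20, 2))
X = np.vstack([healthy, faulty])
y = np.array([0] * 20 + [1] * 20)

pred = knn_predict(X, y, np.array([84.0, 2.8]))
```

In practice one would use a library implementation (e.g. scikit-learn's `KNeighborsClassifier`) with feature scaling and cross-validation rather than this bare sketch.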
Procedia PDF Downloads 118
6008 Corrosion Characterization of Al6061, Quartz Metal Matrix Composites in Alkali Medium
Authors: Radha H. R., Krupakara P. V.
Abstract:
Metal matrix composites are attracting today's manufacturers of many automobile parts because the parts last longer and their properties can be tailored to requirements. In this paper, an attempt has been made to study the corrosion characteristics of Aluminium 6061/quartz metal matrix composites in an alkali medium, namely sodium hydroxide solutions. Metal matrix composites are heterogeneous mixtures of a matrix and a reinforcement. In this work, the matrix selected is the commercially available Aluminium 6061 alloy, and the reinforcement is quartz particulates of 50-80 micron size, available in plenty in and around Bangalore district, India. Composites containing Aluminium 6061 with 2, 4, and 6 weight percent of quartz were manufactured by the liquid melt metallurgy technique using the vortex method. Corrosion tests such as static weight loss and open circuit potential tests were conducted in sodium hydroxide solutions of different concentrations. For comparison, the matrix Aluminium 6061 was also cast in the same way. Specimens for the tests were prepared according to ASTM standards. In all the tests, the metal matrix composites showed better corrosion resistance than the matrix alloy.
Keywords: aluminium 6061, corrosion, quartz, vortex
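The static weight-loss test mentioned above reduces to a standard rate formula. A minimal sketch (the numerical inputs are illustrative, not the study's measurements):

```python
def corrosion_rate_mpy(weight_loss_g, area_cm2, hours, density_g_cm3):
    """Corrosion rate in mils per year from a static weight-loss test,
    using the ASTM G31-style relation
        rate = K * W / (A * T * D),
    with K = 3.45e6 for mils per year, W the mass loss (g), A the
    exposed area (cm^2), T the exposure time (h), D the density (g/cm^3)."""
    K = 3.45e6
    return K * weight_loss_g / (area_cm2 * hours * density_g_cm3)

# illustrative: 5.4 mg lost over 72 h on a 10 cm^2 aluminium coupon
rate = corrosion_rate_mpy(weight_loss_g=0.0054, area_cm2=10.0,
                          hours=72.0, density_g_cm3=2.70)
```

Comparing such rates across the 2, 4, and 6 wt% quartz composites and the bare alloy, at each NaOH concentration, is what supports the conclusion that the composites corrode more slowly.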
Procedia PDF Downloads 409
6007 Optimal Economic Restructuring Aimed at an Optimal Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions
Authors: Alexander Vaninsky
Abstract:
The objective of this paper is to find a way of economic restructuring, that is, a change in the shares of sectoral gross outputs, that results in the maximum possible increase in the gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for the GDP and factorial models for the energy consumption and CO2 emissions to determine the projections of the gradient of GDP and of the antigradients of the energy consumption and CO2 emissions, respectively, on a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of the steepest increase (decrease) of the objective function, and the projections retain this property for the function's restriction to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximal possible positive correlation with each of the three. This convex combination provides the desired direction of the structural change. The national economy of the United States is used as an application example.
Keywords: economic restructuring, input-output analysis, Divisia index, factorial decomposition, E3 models
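The projection step described above is ordinary orthogonal projection onto the column space of a basis matrix. A minimal sketch (the vectors and basis are toy values; in the paper the basis columns span the structure-related variables):

```python
import numpy as np

def project_onto_subspace(g, A):
    """Orthogonal projection of a gradient vector g onto the subspace
    spanned by the columns of A:
        proj = A (A^T A)^{-1} A^T g
    Solving the normal equations avoids forming the inverse explicitly."""
    return A @ np.linalg.solve(A.T @ A, A.T @ g)

g = np.array([1.0, 2.0, 3.0])      # toy gradient
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])         # subspace: the first two coordinates
p = project_onto_subspace(g, A)
```

Applied to the GDP gradient and the two antigradients in turn, this yields the three directional vectors whose convex combination the paper then optimizes for correlation.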
Procedia PDF Downloads 314
6006 Adaptive Backstepping Control of Uncertain Nonlinear Systems with Input Backlash
Authors: Ali Anwar, Hu Qinglei, Li Bo, Muhammad Taha Ali
Abstract:
In this paper, a generic model of perturbed nonlinear systems is considered, which is affected by a hard backlash nonlinearity at the input. The nonlinearity is modelled by a dynamic differential equation, which gives a more precise shape than the existing linear models and is compatible with nonlinear design techniques such as backstepping. Moreover, a novel backstepping-based nonlinear control law is designed which explicitly incorporates a continuous-time adaptive backlash inverse model. It provides significant flexibility to control engineers, who can use the estimated backlash spacing value specified for actuators such as gears in the adaptive backlash inverse model during the control design. It ensures not only global stability but also stringent transient performance with the desired precision. It is also robust to external disturbances, whose bounds are taken as unknown, and it traverses the backlash spacing efficiently even with underestimated information about the actual value. The continuous-time backlash inverse model is distinguished in the sense that other models are either discrete-time or involve complex computations. Furthermore, numerical simulations are presented which not only illustrate the effectiveness of the proposed control law but also compare it with PID and other backstepping controllers.
Keywords: adaptive control, hysteresis, backlash inverse, nonlinear system, robust control, backstepping
Procedia PDF Downloads 461
6005 Establishing a Surrogate Approach to Assess the Exposure Concentrations during Coating Process
Authors: Shan-Hong Ying, Ying-Fang Wang
Abstract:
A surrogate approach was deployed for assessing exposures to multiple chemicals in a selected working area of coating processes and applied to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios. In the selected area, 6 to 12 portable photoionization detectors (PIDs) were placed uniformly in the workplace to measure total VOC concentrations (CT-VOCs) for 6 randomly selected work shifts. Simultaneously, a sampling train was placed beside one of these portable PIDs, and the collected air sample was analyzed for the individual concentrations (CVOCi) of 5 VOCs (xylene, butanone, toluene, butyl acetate, and dimethylformamide). Predictive models were established by relating the CT-VOCs to the CVOCi of each individual compound via simple regression analysis. The established predictive models were employed to predict each CVOCi based on the CT-VOC measured with the same portable PID for each similar working area. Results show that the predictive models obtained from simple linear regression analyses had R2 = 0.83~0.99, indicating that CT-VOCs were adequate for predicting CVOCi. In order to verify the validity of the exposure prediction models, sampling analysis of the above chemical substances was further carried out, and the correlation between the measured value (Cm) and the predicted value (Cp) was analyzed. A good correlation was found between the predicted and measured values of each measured chemical substance (R2 = 0.83~0.98). Therefore, the surrogate approach can be used to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios.
However, it is recommended to establish the prediction model between the chemical substances of each coater and the direct-reading PID, which is more representative of the real exposure situation and estimates the long-term exposure concentrations of operators more accurately.
Keywords: exposure assessment, exposure prediction model, surrogate approach, TVOC
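The predictive-model step described above is a simple least-squares fit of one compound's concentration against the total-VOC reading. A minimal sketch (all numbers are synthetic, for illustration only; the compound name is one of the five listed):

```python
import numpy as np

# Fit CVOCi = a + b * CT-VOC by ordinary least squares, then predict the
# individual-compound concentration at a similar workplace from its
# total-VOC reading on the same type of portable PID.
ct_voc   = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])  # ppm, portable PID
c_xylene = np.array([0.5, 0.9, 1.6, 2.1, 2.4, 3.1])    # ppm, lab analysis

b, a = np.polyfit(ct_voc, c_xylene, 1)                 # slope, intercept
r2 = np.corrcoef(ct_voc, c_xylene)[0, 1] ** 2          # goodness of fit

predicted = a + b * 7.0    # predicted xylene at a CT-VOC reading of 7 ppm
```

An R2 near 1 here is exactly the criterion the abstract uses (0.83~0.99) to decide the total-VOC signal is an adequate surrogate for the individual compounds.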
Procedia PDF Downloads 150
6004 Discrimination in Insurance Pricing: A Textual-Analysis Perspective
Authors: Ruijuan Bi
Abstract:
Discrimination in insurance pricing is a topic of increasing concern, particularly in the context of the rapid development of big data and artificial intelligence. There is a need to explore the various forms of discrimination, such as direct and indirect discrimination, proxy discrimination, algorithmic discrimination, and unfair discrimination, and to understand their implications in insurance pricing models. This paper aims to analyze and interpret the definitions of discrimination in insurance pricing and to explore measures to reduce discrimination. It utilizes a textual-analysis methodology, which involves gathering qualitative data from the relevant literature on definitions of discrimination. The research methodology focuses on exploring the various forms of discrimination and their implications in insurance pricing models. Through textual analysis, this paper identifies the specific characteristics and implications of each form of discrimination in the general insurance industry. This research contributes to the theoretical understanding of discrimination in insurance pricing. By analyzing and interpreting the relevant literature, this paper provides insights into the definitions of discrimination and the laws and regulations surrounding it. This theoretical foundation can inform future empirical research on discrimination in insurance pricing using relevant probability theory.
Keywords: algorithmic discrimination, direct and indirect discrimination, proxy discrimination, unfair discrimination, insurance pricing
Procedia PDF Downloads 73
6003 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers
Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver
Abstract:
Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces as well as cholesterol molecules have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. 
The study's dataset includes six distinct phospholipids and cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN
Procedia PDF Downloads 73
6002 Optimization-Based Design Improvement of Synchronizer in Transmission System for Efficient Vehicle Performance
Authors: Sanyka Banerjee, Saikat Nandi, P. K. Dan
Abstract:
The synchronizer, an integral part of the gearbox, is a key element of the automotive transmission system. The performance of the synchronizer affects transmission efficiency and driving comfort. As a major component of the transmission system, the synchronizing mechanism must be capable of preventing vibration and noise in the gears. Improving gear-shifting efficiency, with the aim of achieving smooth, quick, and energy-efficient power transmission, remains a challenge for the automotive industry. The performance of the synchronizer depends on the features and characteristics of its sub-components, and an analysis of the contribution of such characteristics is therefore necessary. An important exercise is to identify all the characteristics or factors associated with the modeling and analysis; for this purpose, the literature was reviewed rather extensively to study the mathematical models formulated with such factors in mind. It has been observed that certain factors are common across models; however, a few factors have been selected specifically for individual models, as reported. In order to obtain a more realistic model, an attempt has been made here to identify and assimilate practically all the factors which may be considered in formulating the model more comprehensively. A simulation study, formulated as a block model, has been carried out for this analysis in a reliable environment like MATLAB. Lower synchronization time is desirable and has hence been considered here as the output factor in the simulation modeling for evaluating transmission efficiency. An improved synchronizer model requires optimized values of the sub-component design parameters. A parametric optimization utilizing Taguchi's design-of-experiments-based response data and their analysis has been carried out for this purpose.
The effectiveness of the optimized parameters for improved synchronizer performance has been validated by a simulation study of the synchronizer block model, with the improved parameter values as inputs, for better transmission efficiency and driver comfort.
Keywords: design of experiments, modeling, parametric optimization, simulation, synchronizer
Procedia PDF Downloads 312
6001 Virtual Routing Function Allocation Method for Minimizing Total Network Power Consumption
Authors: Kenichiro Hida, Shin-Ichi Kuribayashi
Abstract:
In a conventional network, most network devices, such as routers, are dedicated devices that do not have much variation in capacity. In recent years, a new concept of network functions virtualisation (NFV) has come into use. The intention is to implement a variety of network functions in software on general-purpose servers, which allows the network operator to select their capacities and locations without constraints. This paper focuses on the allocation of NFV-based routing functions, which are among the critical network functions, and presents a virtual routing function allocation algorithm that minimizes the total power consumption. In addition, this study presents a useful allocation policy for virtual routing functions, based on an evaluation with a ladder-shaped network model. This policy takes into consideration the ratio of the power consumption of a routing function to that of a circuit, and the traffic distribution between areas. Furthermore, the present paper shows that there are cases where the use of NFV-based routing functions makes it possible to reduce the total power consumption dramatically in comparison to a conventional network, in which it is not economically viable to distribute small-capacity routing functions.
Keywords: NFV, resource allocation, virtual routing function, minimum power consumption
Procedia PDF Downloads 342
6000 Characterization of Vegetable Wastes and Its Potential Use for Hydrogen and Methane Production via Dark Anaerobic Fermentation
Authors: Ajay Dwivedi, M. Suresh Kumar, A. N. Vaidya
Abstract:
The problem of fruit and vegetable waste management is a grave one. With the ever-increasing need to feed an exponentially growing population, more and more solid waste in the form of fruit and vegetable waste is generated, and its management has become one of the key issues in environmental protection. Energy generation from fruit and vegetable waste by dark anaerobic fermentation is a recent and interesting avenue for the effective management of solid waste as well as for generating free and cheap energy. In the present study, 17 vegetables were characterized for their physical and chemical properties. These characteristics were used to determine the hydrogen and methane potentials of the vegetables from various models, and lab-scale batch experiments were also performed to determine their actual hydrogen and methane production capacity. The lab-scale batch experiments proved that vegetable waste can be used as an effective substrate for bio-hydrogen and methane production; however, the yields of bio-hydrogen and methane were much lower than predicted by the models because other vital experimental parameters, such as pH, total solids content, and food-to-microorganism ratio, were not optimized.
Keywords: vegetable waste, physico-chemical characteristics, hydrogen, methane
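One classical example of the kind of predictive model the abstract compares against experiment is the Buswell-Mueller stoichiometry, which gives a theoretical methane potential from a substrate's empirical formula (the abstract does not name its models; this is offered as an illustrative stand-in):

```python
def buswell_ch4_potential(c, h, o, n=0):
    """Theoretical methane potential (mL CH4 at STP per g substrate) of a
    substrate with empirical formula C_c H_h O_o N_n, from the classical
    Buswell-Mueller stoichiometry:

        CH4 moles per mole substrate = c/2 + h/8 - o/4 - 3n/8
    """
    ch4_mol = c / 2.0 + h / 8.0 - o / 4.0 - 3.0 * n / 8.0
    molar_mass = 12.011 * c + 1.008 * h + 15.999 * o + 14.007 * n
    return ch4_mol * 22415.0 / molar_mass   # 22415 mL/mol ideal gas at STP

# glucose C6H12O6 as a stand-in for carbohydrate-rich vegetable waste
bmp_glucose = buswell_ch4_potential(6, 12, 6)
```

Such stoichiometric potentials are upper bounds on complete conversion, which is consistent with the abstract's finding that measured yields fall well below model predictions when pH, solids content, and food-to-microorganism ratio are not optimized.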
Procedia PDF Downloads 4285999 A Study on the Iterative Scheme for Stratified Shields Gamma Ray Buildup Factors Using Layer-Splitting Technique in Double-Layer Shields
Authors: Sari F. Alkhatib, Chang Je Park, Gyuhong Roh
Abstract:
The iterative scheme used to treat buildup factors for stratified shields is investigated here using the layer-splitting technique. A simple formalism for the scheme based on Kalos' formula is introduced, on which the implementation of the testing technique is based. The second layer in a double-layer shield was split into two equivalent layers, and the scheme (with the suggested formalism) was implemented on the new "three-layer" shield configuration. The results of this manipulation on water-lead and water-iron shield combinations are presented here for 1 MeV photons. It was found that splitting the second layer introduces some deviation in the overall buildup factor value. This expected deviation appeared to be higher in the case of a low-Z layer followed by a high-Z one. However, the overall performance of the iterative scheme showed great consistency and strong coherence even with the introduced changes. The introduced layer-splitting testing technique shows the capability to be used to test the iterative scheme with a wide range of formalisms.Keywords: buildup factor, iterative scheme, stratified shields, layer-splitting technique
Procedia PDF Downloads 4165998 Climate Change Effects in a Mediterranean Island and Streamflow Changes for a Small Basin Using Euro-Cordex Regional Climate Simulations Combined with the SWAT Model
Authors: Pier Andrea Marras, Daniela Lima, Pedro Matos Soares, Rita Maria Cardoso, Daniela Medas, Elisabetta Dore, Giovanni De Giudici
Abstract:
Climate change effects on the hydrologic cycle are the main concern in the evaluation of water management strategies. Climate models project scenarios of precipitation changes in the future, considering greenhouse emissions. In this study, the EURO-CORDEX (European Coordinated Regional Downscaling Experiment) climate models were first evaluated on a Mediterranean island (Sardinia) against observed precipitation for a historical reference period (1976-2005). A weighted multi-model ensemble (ENS) was built, weighting the single models based on their ability to reproduce observed rainfall. Future projections (2071-2100) were carried out using the RCP 8.5 emissions scenario to evaluate changes in precipitation. ENS was then used as climate forcing for the SWAT model (Soil and Water Assessment Tool), with the aim of assessing the consequences of such projected changes on the streamflow and runoff of two small catchments located in south-west Sardinia. Results showed that a decrease in mean rainfall values, up to -25% at the yearly scale, is expected for the future, along with an increase in extreme precipitation events. Particularly in the eastern and southern areas, extreme events are projected to increase by 30%. Such changes are reflected in the hydrologic cycle as a decrease in mean streamflow and runoff, except in spring, when runoff is projected to increase by 20-30%. These results stress that the Mediterranean is a hotspot for climate change, and the use of model tools can provide very useful information for adopting water and land management strategies to deal with such changes.Keywords: EURO-CORDEX, climate change, hydrology, SWAT model, Sardinia, multi-model ensemble
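The weighted multi-model ensemble step can be sketched in a few lines. The abstract does not state the exact skill metric used for the weights, so this illustrative Python sketch assumes inverse-RMSE weighting against observed rainfall; the function name and data are hypothetical:

```python
import numpy as np

def weighted_ensemble(model_precip, observed):
    """Weight each climate model by the inverse of its RMSE against
    observations (normalised so the weights sum to 1), then combine
    the model series into a single ensemble series.

    model_precip: array of shape (n_models, n_times); observed: (n_times,).
    """
    errors = model_precip - observed            # per-model residuals
    rmse = np.sqrt((errors ** 2).mean(axis=1))  # one RMSE per model
    weights = 1.0 / rmse                        # better fit -> larger weight
    weights /= weights.sum()
    ensemble = weights @ model_precip           # weighted average over models
    return weights, ensemble
```

A model that tracks the observations closely receives a larger weight, so the ensemble leans toward the better-performing simulations, which is the stated rationale for weighting the EURO-CORDEX members.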
Procedia PDF Downloads 2145997 Knowledge Management Best Practice Model in Higher Learning Institution: A Systematic Literature Review
Authors: Ismail Halijah, Abdullah Rusli
Abstract:
Introduction: This systematic literature review aims to identify the Knowledge Management Best Practice components in the Knowledge Management Model for the Higher Learning Institution environment. Study design: Systematic literature review. Methods: A systematic literature review of Knowledge Management Best Practice was recently conducted to identify and define the components of Best Practice from the Knowledge Management models. Results: This review of published conference papers and journal articles shows that the components of Best Practice in Knowledge Management are basically divided into two aspects: the soft aspect and the hard aspect. The lack of combination of these two aspects into an integrated model prevents Knowledge Management Best Practice from operating at full throttle. Evidence from the literature shows that the lack of integration of these two aspects leads to the immaturity of Higher Learning Institutions (HLI) in the implementation of Knowledge Management Systems. Conclusion: The first step of identifying the attributes for measuring the Knowledge Management Best Practice components from the models in the literature will lead to the definition of the Knowledge Management Best Practice components for the higher learning environment.Keywords: knowledge management, knowledge management system, knowledge management best practice, knowledge management higher learning institution
Procedia PDF Downloads 5925996 Modeling the Effects of Temperature on Air Pollutant Concentration
Authors: Mustapha Babatunde, Bassam Tawabini, Ole John Nielson
Abstract:
Air dispersion (AD) models such as AERMOD are important tools for estimating the environmental impacts of air pollutant emissions into the atmosphere from anthropogenic sources. The outcome of these models is significantly linked to climate conditions such as air temperature, which is expected to differ in the future due to the global warming phenomenon. With projections from scientific sources of impending changes to the future climate of Saudi Arabia, especially an anticipated temperature rise, there is a potential direct impact on the dispersion patterns of air pollutants produced by AD models. To our knowledge, no similar studies have been carried out in Saudi Arabia to investigate such an impact. Therefore, this research investigates the effects of climate temperature change on air quality in the Dammam Metropolitan area, Saudi Arabia, using AERMOD coupled with station data, with sulphur dioxide (SO2) as a model air pollutant. The research uses the AERMOD model to predict SO2 dispersion trends over the surrounding area. Emissions from five (5) industrial stacks at twenty-eight (28) receptors in the study area were considered for the climate period (2010-2019) and the future mid-century period (2040-2060) under different scenarios of elevated temperature profiles (+1°C, +3°C and +5°C) across averaging time periods of 1 hr, 4 hr and 8 hr. Results showed that levels of SO2 at the receiving sites under current and simulated future climatic conditions fall within the allowable limits of WHO and KSA air quality standards. Results also revealed that the projected rise in temperature would cause only a mild increment in SO2 concentration levels. The average increases in SO2 levels were 0.04%, 0.14%, and 0.23% for temperature increases of 1, 3, and 5 degrees respectively.
In conclusion, the outcome of this work elucidates the degree of the effects of the global warming and climate change phenomena on air quality and can help policymakers in their decision-making, given the significant health challenges associated with ambient air pollution in Saudi Arabia.Keywords: air quality, sulphur dioxide, global warming, air dispersion model
Procedia PDF Downloads 1315995 Molecular Dynamics Simulation of Realistic Biochar Models with Controlled Microporosity
Authors: Audrey Ngambia, Ondrej Masek, Valentina Erastova
Abstract:
Biochar is an amorphous carbon-rich material generated from the pyrolysis of biomass, with multifarious properties and functionality. Biochar has shown proven applications in the treatment of flue gas and of organic and inorganic pollutants in soil and water/wastewater, as a result of its multiple surface functional groups and porous structures. These properties have also shown potential in energy storage and carbon capture. The availability of diverse sources of biomass to produce biochar has increased interest in it as a sustainable and environmentally friendly material. The properties and porous structures of biochar vary depending on the type of biomass and the high heat treatment temperature (HHT). Biochars produced at HHT between 400°C and 800°C generally have lower H/C and O/C ratios, and higher porosities, larger pore sizes and higher surface areas with increasing temperature. While all this is known experimentally, there is little knowledge of the role porous structure and functional groups play in processes occurring at the atomistic scale, which are extremely important for the optimization of biochar for applications, especially the adsorption of gases. Atomistic simulation methods have shown the potential to generate such amorphous materials; however, most of the models available are composed only of carbon atoms or graphitic sheets, which are very dense or contain only simple slit pores, all of which ignores the important role of heteroatoms such as O, N and S and of pore morphologies. Hence, developing realistic models that integrate these parameters is important in order to understand their role in governing adsorption mechanisms, which will aid in guiding the design and optimization of biochar materials for target applications. In this work, molecular dynamics simulations in the isobaric ensemble are used to generate realistic biochar models, taking into account experimentally determined H/C, O/C and N/C ratios, aromaticity, micropore size range, micropore volumes and true densities of biochars.
A pore-generation approach was developed using virtual atoms: a virtual atom is a Lennard-Jones sphere of varying van der Waals radius and softness. Its interaction with the biochar matrix via a soft-core potential allows the creation of pores with rough surfaces, while varying the van der Waals radius gives control over the pore-size distribution. We focused on microporosity, creating average pore sizes of 0.5 - 2 nm in diameter and pore volumes in the range of 0.05 – 1 cm3/g, which corresponds to experimental gas-adsorption micropore sizes of amorphous porous biochars. Realistic biochar models with surface functionalities, micropore size distribution and pore morphologies were developed, and they could aid in the study of adsorption processes in confined micropores.Keywords: biochar, heteroatoms, micropore size, molecular dynamics simulations, surface functional groups, virtual atoms
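The soft-core interaction of a pore-carving virtual atom can be illustrated with a minimal sketch. The abstract does not give the exact functional form, so the purely repulsive potential below (finite at r = 0, decaying as r⁻⁶, with an assumed softness parameter delta) is a hypothetical stand-in for the softened Lennard-Jones sphere described above:

```python
def soft_core_repulsion(r, sigma=1.0, epsilon=1.0, delta=0.5):
    """Purely repulsive soft-core potential for a pore-carving virtual atom.

    Unlike a standard Lennard-Jones sphere, the potential stays finite at
    r = 0 (value epsilon / delta), so the carbon matrix is pushed out
    gently rather than ejected; sigma sets the pore radius and delta the
    softness. The functional form and all parameter values here are
    illustrative assumptions, not taken from the paper."""
    return epsilon / ((r / sigma) ** 6 + delta)
```

Growing sigma enlarges the excluded region (and hence the pore), while shrinking delta stiffens the pore wall, mirroring the control over pore-size distribution described above.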
Procedia PDF Downloads 715994 Prospectivity Mapping of Orogenic Lode Gold Deposits Using Fuzzy Models: A Case Study of Saqqez Area, Northwestern Iran
Authors: Fanous Mohammadi, Majid H. Tangestani, Mohammad H. Tayebi
Abstract:
This research aims to evaluate and compare Geographical Information Systems (GIS)-based fuzzy models for producing orogenic gold prospectivity maps in the Saqqez area, NW Iran. Gold occurrences in this area are hosted in sericite schist and mafic to felsic meta-volcanic rocks and are associated with hydrothermal alterations that extend over ductile to brittle shear zones. The predictor maps, which represent the pre-mineralization (source/trigger/pathway), syn-mineralization (deposition/physical/chemical traps) and post-mineralization (preservation/distribution of indicator minerals) subsystems for gold mineralization, were generated using empirical understanding of the characteristics of known orogenic gold deposits and gold mineral systems, and were then pre-processed and integrated to produce mineral prospectivity maps. Five fuzzy logic operators, including AND, OR, Fuzzy Algebraic Product (FAP), Fuzzy Algebraic Sum (FAS), and GAMMA, were applied to the predictor maps in order to find the most efficient prediction model. Prediction-Area (P-A) plots and field observations were used to assess the accuracy of the prediction models. Mineral prospectivity maps generated by the AND, OR, FAP, and FAS operators were inaccurate and, therefore, unable to pinpoint the exact locations of discovered gold occurrences. The GAMMA operator, on the other hand, produced acceptable results and identified potentially economic target sites. The P-A plot revealed that 68 percent of known orogenic gold deposits are found in high and very high potential regions. The GAMMA operator was thus shown to be useful in predicting and defining cost-effective target sites for orogenic gold deposits, as well as in optimizing mineral deposit exploitation.Keywords: mineral prospectivity mapping, fuzzy logic, GIS, orogenic gold deposit, Saqqez, Iran
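The five fuzzy operators compared above have standard definitions, which can be sketched as follows. The membership values in the example are made up for illustration, not taken from the Saqqez predictor maps:

```python
import numpy as np

def fuzzy_combine(memberships, gamma=0.9):
    """Combine fuzzy membership maps (one row per predictor layer) with
    the five classical operators. GAMMA blends FAS and FAP:
    mu_gamma = FAS**gamma * FAP**(1 - gamma)."""
    mu = np.asarray(memberships, dtype=float)
    fap = mu.prod(axis=0)                      # Fuzzy Algebraic Product
    fas = 1.0 - (1.0 - mu).prod(axis=0)        # Fuzzy Algebraic Sum
    return {
        "AND": mu.min(axis=0),
        "OR": mu.max(axis=0),
        "FAP": fap,
        "FAS": fas,
        "GAMMA": fas ** gamma * fap ** (1.0 - gamma),
    }
```

FAP is "decreasive" (never above the smallest input) and FAS "increasive"; GAMMA trades off between the two extremes, which is consistent with it producing the most balanced prospectivity map in the study.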
Procedia PDF Downloads 1215993 Regulated Output Voltage Double Switch Buck-Boost Converter for Photovoltaic Energy Application
Authors: M. Kaouane, A. Boukhelifa, A. Cheriti
Abstract:
In this paper, a new Buck-Boost DC-DC converter is designed and simulated for a photovoltaic energy system. The presented Buck-Boost converter has a double switch. Moreover, its output voltage is regulated to a constant value whatever its input is. In the presented work, the Buck-Boost converter transfers the produced energy from the photovoltaic generator to an R-L load. The converter is controlled by the pulse width modulation technique so as to maintain a suitable output voltage while carrying the generator's power and keeping it close to the maximum power that can be generated, by introducing the right duty cycle for the pulse width modulation signals that control the switches of the converter; each component and each parameter of the proposed circuit is carefully calculated using the equations that describe each operating mode of the converter. The proposed Buck-Boost converter configuration has been simulated in the Matlab/Simulink environment; the simulation results show that it is a good choice for maintaining a constant output voltage while ensuring good energy transfer.Keywords: Buck-Boost converter, switch, photovoltaic, PWM, power, energy transfer
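The duty-cycle calculation behind this regulation can be sketched from the ideal buck-boost relation. Assuming a lossless converter in continuous conduction mode (the paper's component values and loss model are not reproduced here), |Vout| = D/(1-D)·Vin, which inverts to D = Vout/(Vin + Vout):

```python
def buck_boost_duty(v_in, v_out_target):
    """Ideal buck-boost (CCM, lossless): |Vout| = D/(1-D) * Vin.
    Solving for the duty cycle gives D = Vout / (Vin + Vout)."""
    return v_out_target / (v_in + v_out_target)

def regulated_output(v_in, duty):
    """Output-voltage magnitude for a given duty cycle (ideal relation)."""
    return duty / (1.0 - duty) * v_in

# As the PV input voltage varies, the controller recomputes D so the
# output stays at the target (illustrative values, not from the paper):
target = 48.0
duties = {v: buck_boost_duty(v, target) for v in (20.0, 30.0, 40.0)}
```

D > 0.5 boosts and D < 0.5 bucks; in a real implementation the PWM controller would also perturb D to track the photovoltaic maximum power point.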
Procedia PDF Downloads 9055992 Mutation of GalP Improved Fermentation of Mixed Sugars to Succinate Using Engineered Escherichia coli AS1600a
Authors: Apichai Sawisit, Sirima Suvarnakuta Jantama, Sunthorn Kanchanatawee, Lonnie O. Ingram, Kaemwich Jantama
Abstract:
Escherichia coli KJ122 was engineered to produce succinate from glucose using the wild-type GalP for glucose uptake instead of the native phosphotransferase system (ptsI mutation). This strain ferments 10% (w/v) xylose poorly. Mutants were selected by serial transfers in AM1 mineral salts medium with 10% (w/v) xylose. Evolved mutants exhibited a similar improvement: co-fermentation of an equal mixture of xylose and glucose. One of these, AS1600a, produced 84.26±1.37 g/L succinate, equivalent to that produced by the parent (KJ122) strain from 10% glucose (85.46±1.78 g/L). AS1600a was sequenced and found to contain a mutation in galactose permease (GalP, G236D). Expressing the mutant galP* gene in KJ122ΔgalP reproduced the xylose-utilization phenotype of the mutant AS1600a. The strain AS1600a and KJ122ΔgalP (pLOI5746; galP*) also co-fermented a mixture of glucose, xylose, arabinose, and galactose in sugarcane bagasse hydrolysate for succinate production.Keywords: xylose, furfural, succinate, sugarcane bagasse, E. coli
Procedia PDF Downloads 4505991 Quantum Dots Incorporated in Biomembrane Models for Cancer Marker
Authors: Thiago E. Goto, Carla C. Lopes, Helena B. Nader, Anielle C. A. Silva, Noelio O. Dantas, José R. Siqueira Jr., Luciano Caseli
Abstract:
Quantum dots (QD) are semiconductor nanocrystals that can be employed in biological research as a tool for fluorescence imaging, with the potential to expand in vivo and in vitro analysis as cancerous cell biomarkers. Particularly, cadmium selenide (CdSe) magic-sized quantum dots (MSQDs) exhibit stable luminescence that is feasible for biological applications, especially for imaging of tumor cells. For these reasons, it is interesting to know the mechanisms by which such QDs mark biological cells. For that, simplified models are a suitable strategy. Among these models, Langmuir films of lipids formed at the air-water interface seem to be adequate since they can mimic half a membrane. They are monomolecular films that form spontaneously when organic solutions of amphiphilic compounds are spread on a liquid-gas interface. After solvent evaporation, the monomolecular film is formed, and a variety of techniques, including tensiometric, spectroscopic and optical ones, can be applied. When the monolayer is formed by membrane lipids at the air-water interface, a model for half a membrane can be inferred, where the aqueous subphase serves as a model for the external or internal compartment of the cell. These films can be transferred to solid supports, forming the so-called Langmuir-Blodgett (LB) films, and an even wider variety of techniques can additionally be used to characterize the film, allowing for the formation of devices and sensors. With these ideas in mind, the objective of this work was to investigate the specific interactions of CdSe MSQDs with tumorigenic and non-tumorigenic cells using Langmuir monolayers and LB films of lipids and specific cell extracts as membrane models for the diagnosis of cancerous cells.
Surface pressure-area isotherms and polarization modulation infrared reflection-absorption spectroscopy (PM-IRRAS) showed an intrinsic interaction between the quantum dots, inserted in the aqueous subphase, and Langmuir monolayers constructed either of selected lipids or of non-tumorigenic and tumorigenic cell extracts. The quantum dots expanded the monolayers and changed the PM-IRRAS spectra of the lipid monolayers. The mixed films were then compressed to high surface pressures and transferred from the floating monolayer to solid supports by using the LB technique. Images of the films were then obtained with atomic force microscopy (AFM) and confocal microscopy, which provided information about the morphology of the films. Similarities and differences between films of different compositions representing cell membranes, with or without CdSe MSQDs, were analyzed. The results indicated that the interaction of quantum dots with the bioinspired films is modulated by the lipid composition. The properties of the normal cell monolayer were not significantly altered, whereas the films of the tumorigenic cell monolayer models presented significant alteration. The images therefore exhibited a stronger effect of CdSe MSQDs on the models representing cancerous cells. As an important implication of these findings, one may envisage new bioinspired surfaces based on molecular recognition for biomedical applications.Keywords: biomembrane, langmuir monolayers, quantum dots, surfaces
Procedia PDF Downloads 1965990 Analysis of Photic Zone’s Summer Period-Dissolved Oxygen and Temperature as an Early Warning System of Fish Mass Mortality in Sampaloc Lake in San Pablo, Laguna
Authors: Al Romano, Jeryl C. Hije, Mechaela Marie O. Tabiolo
Abstract:
The decline in water quality is a major factor in aquatic disease outbreaks and can lead to significant mortality among aquatic organisms. Understanding the relationship between dissolved oxygen (DO) and water temperature is crucial, as these variables directly impact the health, behavior, and survival of fish populations. This study investigated how DO levels, water temperature, and atmospheric temperature interact in Sampaloc Lake to assess the risk of fish mortality. By employing a combination of linear regression models and machine learning techniques, researchers developed predictive models to forecast DO concentrations at various depths. The results indicate that while DO levels generally decrease with depth, the predicted concentrations are sufficient to support the survival of common fish species in Sampaloc Lake during March, April, and May 2025.Keywords: aquaculture, dissolved oxygen, water temperature, regression analysis, machine learning, fish mass mortality, early warning system
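The depth-wise DO prediction can be sketched with a simple linear regression, the simplest of the model families the study combines. The profile values below are illustrative, not the Sampaloc Lake measurements, and the 5 mg/L warning threshold is an assumed hypoxia limit:

```python
import numpy as np

def fit_do_profile(depth_m, do_mg_l):
    """Least-squares linear fit of dissolved oxygen against depth."""
    slope, intercept = np.polyfit(depth_m, do_mg_l, 1)
    return slope, intercept

def warning_depth(slope, intercept, threshold_mg_l=5.0):
    """Depth at which the fitted DO line crosses the warning threshold."""
    return (threshold_mg_l - intercept) / slope

# Illustrative photic-zone profile: DO declines with depth.
depth = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
do = np.array([8.1, 7.8, 7.0, 6.3, 5.6, 4.9])
slope, intercept = fit_do_profile(depth, do)
```

An early-warning system along these lines would re-fit the profile as new readings arrive and flag cage depths below the crossing point, before DO falls low enough to cause mass mortality.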
Procedia PDF Downloads 365989 Analysis of the Predictive Performance of Value at Risk Estimations in Times of Financial Crisis
Authors: Alexander Marx
Abstract:
Measuring and mitigating market risk is essential for the stability of enterprises, especially for major banking corporations and investment banking firms. For these risk measurement and mitigation processes, Value at Risk (VaR) is the risk metric most commonly used by practitioners. In past years, we have seen significant weaknesses in the predictive performance of the VaR in times of financial market crisis. To address this issue, the purpose of this study is to investigate Value at Risk (VaR) estimation models and their predictive performance by applying a series of backtesting methods to the stock market indices of the G7 countries (Canada, France, Germany, Italy, Japan, UK, US, Europe). The study employs parametric, non-parametric, and semi-parametric VaR estimation models and is conducted over three different periods which cover the most recent financial market crises: the overall period (2006–2022), the global financial crisis period (2008–2009), and the COVID-19 period (2020–2022). Since the regulatory authorities have introduced and mandated the Conditional Value at Risk (Expected Shortfall) as an additional regulatory risk management metric, the study will also analyze and compare both risk metrics on their predictive performance.Keywords: value at risk, financial market risk, banking, quantitative risk management
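The backtesting logic can be sketched for the non-parametric case. This hedged example implements rolling historical-simulation VaR and counts threshold violations; the 250-day window and 99% confidence level are common regulatory choices, not necessarily those of the study:

```python
import numpy as np

def historical_var(returns, alpha=0.99, window=250):
    """Rolling historical-simulation VaR: at each day t, the (1 - alpha)
    empirical quantile of the previous `window` returns, reported as a
    positive loss figure. Days before the first full window are NaN."""
    var = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        var[t] = -np.quantile(returns[t - window:t], 1.0 - alpha)
    return var

def violation_rate(returns, var):
    """Fraction of days on which the realised loss exceeded the VaR;
    for a well-calibrated 99% VaR this should be close to 1%."""
    mask = ~np.isnan(var)
    return float((returns[mask] < -var[mask]).mean())
```

Formal backtests such as Kupiec's proportion-of-failures test then check whether the observed rate differs significantly from 1 - alpha; crisis periods typically show clustered violations well above it, which is the weakness the study examines.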
Procedia PDF Downloads 955988 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)
Authors: Azimollah Aleshzadeh, Enver Vural Yavuz
Abstract:
The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, while the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUCs for the success rates are 0.7055, 0.7221, and 0.7368, while the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively.
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences falling in the high and very high landslide susceptibility classes in each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping
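Of the three bivariate models, the entropy weight method is the simplest to sketch. This illustrative implementation (the input matrix is hypothetical, not the Uzundere data) derives a weight for each predisposing-parameter layer from the Shannon entropy of its normalised values:

```python
import numpy as np

def entropy_weights(X):
    """Entropy Weight Method: columns are criteria (parameter layers),
    rows are samples (map units). A column whose values are nearly
    uniform carries little information (normalised entropy near 1) and
    gets a small weight; a discriminating column gets a large one."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                       # column-wise proportions
    n = X.shape[0]
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)     # normalised Shannon entropy
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()                          # weights summing to 1
```

The resulting weights are then used to combine the parameter layers into a single susceptibility score per map unit, which is subsequently binned into the five susceptibility classes described above.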
Procedia PDF Downloads 132