Search results for: equivalent circuit models
4639 Modelling and Simulation of Hysteresis Current Controlled Single-Phase Grid-Connected Inverter
Authors: Evren Isen
Abstract:
In grid-connected renewable energy systems, the input power is controlled by an AC/DC converter and/or a DC/DC converter, depending on the output voltage of the input source. The power is injected into the DC-link, and the DC-link voltage is regulated by the inverter, which controls the grid current. Inverter performance is critical in grid-connected renewable energy systems for meeting utility standards. In this paper, the modelling and simulation of a hysteresis current controlled single-phase grid-connected inverter, as utilized in renewable energy systems such as wind and solar systems, are presented. A 2 kW single-phase grid-connected inverter is simulated in Simulink and modeled in a Matlab m-file. Grid current synchronization is obtained by the phase locked loop (PLL) technique in the dq synchronous rotating frame. Although the dq-PLL can be easily implemented in three-phase systems, generating the β component of the grid voltage is difficult in a single-phase system, because only a single grid voltage is available. An inverse-Park PLL with a low-pass filter is used to generate the β component for grid angle determination. As the grid current is controlled by the constant-bandwidth hysteresis current control (HCC) technique, the average switching frequency and the variation of the switching frequency over a fundamental period are considered. A total harmonic distortion of 3.56% in the grid current is achieved with a 0.5 A bandwidth. Curves of average switching frequency and total harmonic distortion for different hysteresis bandwidths are obtained from the m-file model. The average switching frequency is 25.6 kHz, while the switching frequency varies between 14 kHz and 38 kHz over a fundamental period. Both the average frequency and the maximum frequency deviation should be considered when selecting the solid-state switching device and designing the driver circuit. The steady-state and dynamic response performances of the inverter, depending on the input power, are presented with waveforms. The control algorithm regulates the DC-link voltage by adjusting the output power.
Keywords: grid-connected inverter, hysteresis current control, inverter modelling, single-phase inverter
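A minimal Python sketch of the constant-bandwidth hysteresis switching logic described above, applied to a two-level inverter with a simple L filter. All parameter values (DC-link voltage, inductance, current amplitude) are illustrative assumptions, not the ratings of the 2 kW prototype, and the grid model is idealized.

```python
import numpy as np

# Constant-bandwidth HCC: toggle the inverter leg whenever the measured
# current leaves the band [i_ref - h, i_ref + h].
def hcc_state(i_meas, i_ref, prev, h=0.5):
    if i_meas > i_ref + h:
        return -1          # current too high -> apply -Vdc
    if i_meas < i_ref - h:
        return +1          # current too low  -> apply +Vdc
    return prev            # inside the band  -> hold the previous state

f, L, Vdc, Vg, Iamp = 50.0, 5e-3, 400.0, 325.0, 12.8   # assumed values
dt = 1e-6
t = np.arange(0.0, 1.0 / f, dt)
i, state, toggles = 0.0, 1, 0
for tk in t:
    i_ref = Iamp * np.sin(2 * np.pi * f * tk)
    new = hcc_state(i, i_ref, state)
    toggles += (new != state)
    state = new
    # di/dt = (v_inverter - v_grid) / L for a simple L-filtered bridge
    i += (state * Vdc - Vg * np.sin(2 * np.pi * f * tk)) / L * dt

print(f"average switching frequency ~ {toggles / 2 * f / 1e3:.1f} kHz")
```

Counting band crossings over one fundamental period gives the average switching frequency directly, which is how the bandwidth-frequency curves mentioned above can be generated by sweeping h.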
Procedia PDF Downloads 479
4638 H2/He and H2O/He Separation Experiments with Zeolite Membranes for Nuclear Fusion Applications
Authors: Rodrigo Antunes, Olga Borisevich, David Demange
Abstract:
In future nuclear fusion reactors, tritium self-sufficiency will be ensured by tritium (³H) production via reactions between the fusion neutrons and lithium. To favor tritium breeding, a neutron multiplier must also be used. Both the tritium breeder and the neutron multiplier will be placed in the so-called Breeding Blanket (BB). For the European Helium-Cooled Pebble Bed (HCPB) BB concept, tritium production and neutron multiplication will be ensured by neutron bombardment of Li4SiO4 and Be pebbles, respectively. The produced tritium is extracted from the pebbles by purging them with large flows of He (~10⁴ Nm³h⁻¹), doped with small amounts of H2 (~0.1 vol%) to promote tritium extraction via isotopic exchange (producing HT). Due to the presence of oxygen in the pebbles, the production of tritiated water is unavoidable. Therefore, the purge gas downstream of the BB will be composed of Q2/Q2O/He (Q = ¹H, ²H, ³H), with Q2/Q2O down to ppm levels, and must be further processed for tritium recovery. A two-stage continuous approach, in which zeolite membranes (ZMs) are followed by a catalytic membrane reactor (CMR), has recently been proposed to fulfil this task. Tritium recovery from Q2/Q2O/He is ensured by the CMR, which requires a reduction of the gas flow coming from the BB and a pre-concentration of Q2 and Q2O to be efficient. For this reason, and to keep this stage at reasonable dimensions, ZMs are required upstream to reduce the He flows as much as possible and concentrate the Q2/Q2O species. Therefore, experimental activities have been carried out at the Tritium Laboratory Karlsruhe (TLK) to test the separation performance of different zeolite membranes for H2/H2O/He. First experiments have been performed with binary mixtures of H2/He and H2O/He with commercial MFI-ZSM5 and NaA zeolite-type membranes. Only the MFI-ZSM5 demonstrated selectivity towards H2, with a separation factor around 1.5 and H2 permeances around 0.72 µmol·m⁻²·s⁻¹·Pa⁻¹, largely independent of feed concentration in the range 0.1 vol%-10 vol% H2/He. The experiments with H2O/He have demonstrated that the separation factor towards H2O depends strongly on feed concentration and temperature. For instance, at 30 °C the separation factor with NaA is below 2 at 0.2 vol% H2O/He but around 1000 at 5 vol% H2O/He. Overall, both membranes demonstrated complementary results at equivalent temperatures. In fact, at low feed concentrations (≤ 1 vol% H2O/He) MFI-ZSM5 separates better than NaA, whereas the latter has higher separation factors at higher inlet water content (≥ 5 vol% H2O/He). In this contribution, the results obtained with both MFI-ZSM5 and NaA membranes for H2/He and H2O/He mixtures at different concentrations and temperatures are compared and discussed.
Keywords: nuclear fusion, gas separation, tritium processes, zeolite membranes
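For reference, a short Python sketch of the binary separation factor used to quantify these results, α(A/B) = (y_A/y_B)/(x_A/x_B), with x the feed and y the permeate mole fractions. The numbers below are illustrative, not measured values from these experiments.

```python
# Binary separation factor alpha(A/B) = (y_A / y_B) / (x_A / x_B),
# where x are feed mole fractions and y are permeate mole fractions.
def separation_factor(x_A, y_A):
    x_B, y_B = 1.0 - x_A, 1.0 - y_A
    return (y_A / y_B) / (x_A / x_B)

# Illustrative: 0.1 vol% H2 in He feed, slightly H2-enriched permeate
print(round(separation_factor(x_A=0.001, y_A=0.0015), 2))   # ~1.5
```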
Procedia PDF Downloads 288
4637 Coexistence of Two Different Types of Intermittency near the Boundary of Phase Synchronization in the Presence of Noise
Authors: Olga I. Moskalenko, Maksim O. Zhuravlev, Alexey A. Koronovskii, Alexander E. Hramov
Abstract:
Intermittent behavior near the boundary of phase synchronization in the presence of noise is studied. In a certain range of the coupling parameter and noise intensity, the intermittency of eyelet and ring intermittencies is shown to take place. The main results are illustrated using the example of two unidirectionally coupled Rössler systems. Similar behavior is shown to take place in two unidirectionally coupled hydrodynamic models of the Pierce diode.
Keywords: chaotic oscillators, phase synchronization, noise, intermittency of intermittencies
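A minimal Python sketch of the illustrative setup, two unidirectionally coupled Rössler systems with a small frequency mismatch; the parameter values are typical of phase-synchronization studies rather than taken from the paper, and the noise term is omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, p, c = 0.15, 0.2, 10.0          # standard Rossler parameters (assumed)
w1, w2, eps = 0.93, 0.95, 0.04     # frequency mismatch and coupling strength

def rhs(t, s):
    x1, y1, z1, x2, y2, z2 = s
    return [-w1 * y1 - z1, w1 * x1 + a * y1, p + z1 * (x1 - c),
            # coupling eps*(x1 - x2) drives the response system
            -w2 * y2 - z2 + eps * (x1 - x2), w2 * x2 + a * y2, p + z2 * (x2 - c)]

sol = solve_ivp(rhs, (0.0, 2000.0), [1, 0, 0, 0.5, 0.5, 0], max_step=0.05)
phi1 = np.unwrap(np.arctan2(sol.y[1], sol.y[0]))   # oscillator phases
phi2 = np.unwrap(np.arctan2(sol.y[4], sol.y[3]))
# 2*pi jumps of the phase difference mark the phase slips whose statistics
# distinguish eyelet from ring intermittency
print("phase slips:", int(abs(phi1[-1] - phi2[-1]) // (2 * np.pi)))
```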
Procedia PDF Downloads 642
4636 Design of Evaluation for Ehealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden
Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson
Abstract:
Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, the evaluation design does not address the feasibility of achieving clinical outcomes. Evaluations are designed to reflect the clinical goals of the intervention without utilizing the opportunity to illuminate effects on organizations and cost. A comprehensive evaluation design can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive evaluation design for an eHealth intervention, including the design process in different contexts. Methodology: The situation of limited feasibility of clinical outcomes was foreseen in the European Union funded project called “DECI” (“Digital Environment for Cognitive Inclusion”), run under the “Horizon 2020” program with the aim of defining and testing a digital environment platform within corresponding care models that help elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew’s lens of change and transformations, covering context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of the scientific literature, a comprehensive evaluation design was created. Findings: The findings indicate that in order to obtain evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative methods for data gathering that illuminates non-medical aspects. Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving the data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of the current state in different contexts, including measurement systems, expectations and profiles of stakeholders, organizational ambitions to change due to eHealth integration, and the organizational capacity to collect data for evaluation; (2) a workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires, and interview questions; (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented on how challenges can be handled during the evaluation design process in four different countries. The evaluation design needs to consider contextual factors such as project limitations and differences between pilot sites in terms of eHealth solutions, patient groups, care models, and national and organizational cultures and settings. This implies a need for a flexible approach to evaluation design that enables judgment of the effectiveness and the potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.
Keywords: ehealth, elderly, evaluation, intervention, multi-cultural
Procedia PDF Downloads 324
4635 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake
Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou
Abstract:
Lake water quality monitoring combined with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece), acquired on 30/10/2013 and 30/08/2014, were used in order to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths, chlorophyll-a, and nutrient concentrations, in this oligotrophic freshwater body, characterized by minimal quantitative, temporal, and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands, in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October 2013, and the results were validated through comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. As shown by the validation process, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate, and total nitrogen concentrations of 2014 were below the detection limit of the instrument used; hence no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations. Our results concur with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles that are detectable by satellite sensors. Nevertheless, the predictive models developed and applied to the oligotrophic Trichonis Lake, although less accurate, may still be useful indicators of its water quality deterioration.
Keywords: landsat 8, oligotrophic lake, remote sensing, water quality
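A Python sketch of the kind of band-combination regression described above, fitting an in-situ parameter against a Landsat 8 band ratio; all reflectance and concentration values are hypothetical stand-ins for the 22-station dataset.

```python
import numpy as np
from scipy.stats import linregress

band2 = np.array([0.081, 0.074, 0.069, 0.090, 0.077])   # hypothetical reflectance
band3 = np.array([0.064, 0.055, 0.049, 0.075, 0.058])
chl_a = np.array([1.9, 1.4, 1.1, 2.6, 1.5])             # hypothetical ug/L

# Regress chl-a on the green/blue band ratio, one of many candidate combinations
fit = linregress(band3 / band2, chl_a)
print(f"chl-a = {fit.slope:.2f}*(B3/B2) + {fit.intercept:.2f}, R = {fit.rvalue:.2f}")
```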
Procedia PDF Downloads 396
4634 Impact of the Non-Energy Sectors Diversification on the Energy Dependency Mitigation: Visualization by the “IntelSymb” Software Application
Authors: Ilaha Rzayeva, Emin Alasgarov, Orkhan Karim-Zada
Abstract:
This study attempts to link management and computer science by developing the software “IntelSymb” as a demo application to support data analysis of non-energy fields’ diversification, which will positively influence the energy dependency mitigation of countries. We then analyzed 18 years of development across five economic sectors in 13 countries, identifying which patterns prevailed and which may become dominant in the near future. To make our analysis solid and plausible, as future work we suggest developing a gateway or interface connected to all available online databases (WB, UN, OECD, U.S. EIA) for the analysis of countries by field. The sample data consist of energy statistics (TPES and energy import indicators) and non-energy industry statistics (Main Science and Technology Indicator, Internet user index, and sales and production indicators) from 13 OECD countries over 18 years (1995-2012). Our results show that the diversification of non-energy industries can help decelerate energy-sector dependency (energy consumption and import dependence on crude oil). These results can provide empirical and practical support for energy and non-energy industry diversification policies, such as promoting the efficiency and management of Information and Communication Technologies (ICTs), services, and innovative technologies, in other OECD and non-OECD member states with similar energy utilization patterns and policies. Industries, including the ICT sector, generate around 4 percent of total GHG emissions, but the figure is much higher, around 14 percent, if indirect energy use is included. The ICT sector itself (excluding the broadcasting sector) contributes approximately 2 percent of global GHG emissions, at just under 1 gigatonne of carbon dioxide equivalent (GtCO2eq). This can serve as an example and a lesson for both energy-dependent and energy-independent countries, and mainly for emerging oil-based economies, motivating non-energy industry diversification in order to be prepared for an energy crisis and to be able to face any economic crisis as well.
Keywords: energy policy, energy diversification, “IntelSymb” software, renewable energy
Procedia PDF Downloads 224
4633 Reactive Transport Modeling in Carbonate Rocks: A Single Pore Model
Authors: Priyanka Agrawal, Janou Koskamp, Amir Raoof, Mariette Wolthers
Abstract:
Calcite is the main mineral found in carbonate rocks, which form significant hydrocarbon reservoirs and subsurface repositories for CO2 sequestration. The injected CO2 mixes with the reservoir fluid and disturbs the geochemical equilibrium, triggering calcite dissolution. Different combinations of fluid chemistry and injection rate may therefore result in different evolutions of porosity, permeability, and dissolution patterns. To model the changes in porosity and permeability, the Kozeny-Carman equation K ∝ φⁿ is used, where K is permeability and φ is porosity. The value of n is mostly based on experimental data or pore network models. In pore network models, this derivation depends on the accuracy of the relation used for conductivity and pore-volume change. In fact, at the single-pore scale, this relationship is the result of the pore shape development due to dissolution. We have prepared a new reactive transport model for a single pore, which simulates the complex chemical reactions of carbonic-acid-induced calcite dissolution and the subsequent pore-geometry evolution at the single-pore scale. We use the COMSOL Multiphysics package 5.3 for the simulation. COMSOL utilizes the arbitrary Lagrangian-Eulerian (ALE) method for the free-moving domain boundary. We examined the effect of flow rate on the evolution of single-pore shape profiles due to calcite dissolution. We used three flow rates to cover the diffusion-dominated and advection-dominated transport regimes. In diffusion-dominated flow (Pe numbers 0.037 and 0.37), the fluid becomes less reactive along the pore length and thus produces non-uniform pore shapes. However, for advection-dominated flow (Pe number 3.75), the fast fluid velocity keeps the fluid relatively more reactive towards the end of the pore length, thus yielding a uniform pore shape. Different pore shapes, in terms of inlet opening versus overall pore opening, will have an impact on the relation between changing volumes and conductivity. We have related the pore shape to the Pe number, which controls the transport regime. For every Pe number, we have derived the relation between conductivity and porosity. These relations will be used in pore network models to obtain the porosity and permeability variation.
Keywords: single pore, reactive transport, calcite system, moving boundary
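A short Python sketch of the two quantities that organize these results: the Péclet number separating the transport regimes, and a Kozeny-Carman-type permeability update. The pore length, diffusivity, and exponent n = 3 are common literature assumptions, not values from the paper, which derives its own conductivity-porosity relations.

```python
import numpy as np

def peclet(u, L, D):
    """Pe = u*L/D: diffusion-dominated for Pe << 1, advection for Pe >> 1."""
    return u * L / D

def kozeny_carman(K0, phi0, phi, n=3.0):
    """K/K0 = (phi/phi0)**n with an assumed exponent n."""
    return K0 * (phi / phi0) ** n

D = 7.5e-10    # assumed diffusivity of dissolved species in water, m^2/s
L = 1e-4       # assumed pore length, m
for u in (2.8e-7, 2.8e-6, 2.8e-5):          # three flow rates
    print(f"u = {u:.1e} m/s -> Pe = {peclet(u, L, D):.3f}")

# Permeability after dissolution raises porosity from 0.10 to 0.12
print(f"K = {kozeny_carman(K0=1e-14, phi0=0.10, phi=0.12):.2e} m^2")
```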
Procedia PDF Downloads 374
4632 A Game Theory Analysis of the Effectiveness of Passenger Profiling for Transportation Security
Authors: Yael Deutsch, Arieh Gavious
Abstract:
The threat of aviation terrorism and its potential damage became significant after the 9/11 terror attacks. These attacks have led authorities and leaders to suggest that security personnel should overcome politically correct scruples about profiling and use it openly. However, there is a lack of knowledge about the smart usage of profiling and its advantages. We analyze game models suited to specific real-world scenarios, focusing on profiling as a tool to detect potential violators, such as terrorists and smugglers. We provide clear analytical answers to difficult questions, thereby helping the fight against harmful violations.
Keywords: game theory, profiling, security, nash equilibrium
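As an illustration of the underlying machinery, a Python sketch computing the mixed-strategy Nash equilibrium of a 2×2 inspection game by the usual indifference conditions; the payoff values are invented for the example and are not the paper's models.

```python
import numpy as np

# Row player = inspector (inspect / ignore), column = traveler (violate / comply)
A = np.array([[ 3.0, -1.0],    # inspector payoffs (assumed)
              [-5.0,  0.0]])
B = np.array([[-4.0,  0.0],    # traveler payoffs (assumed)
              [ 2.0,  0.0]])

# Interior mixed equilibrium: each player mixes so the other is indifferent
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])  # Pr(violate)
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])  # Pr(inspect)
print(f"inspect with p = {p:.2f}, violate with q = {q:.2f}")
```

With these payoffs no pure equilibrium exists, so deterrence comes entirely from randomized inspection, which is the qualitative point that profiling models refine.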
Procedia PDF Downloads 110
4631 Experimental Study and Numerical Modelling of Failure of Rocks Typical for Kuzbass Coal Basin
Authors: Mikhail O. Eremin
Abstract:
The present work is devoted to the experimental study and numerical modelling of the failure of rocks typical of the Kuzbass coal basin (Russia). The main goal was to determine the strength and deformation characteristics of the rocks on the basis of uniaxial compression and three-point bending tests, and then to build a mathematical model of the failure process for both types of loading. Depending on their particular physical-mechanical characteristics, typical rocks of the Kuzbass coal basin (sandstones, siltstones, mudstones, etc., of the Kolchuginsk, Tarbagansk, and Balohonsk series) manifest brittle and quasi-brittle failure. The strength characteristics for both tension and compression are found. Other characteristics are also obtained from the experiments or taken from literature reviews. On the basis of the obtained characteristics and the structure (obtained from microscopy), mathematical and structural models are built, and numerical modelling of failure under the different types of loading is carried out. The effective characteristics obtained from the modelling and the character of failure correspond to the experiments; thus, the mathematical model is verified. An Instron 1185 machine was used to carry out the experiments. The mathematical model includes the fundamental conservation laws of solid mechanics: mass, momentum, and energy. Each rock has a significantly anisotropic structure; however, each crystallite might be considered isotropic, and then the whole-rock model has a quasi-isotropic structure. This idea gives the opportunity to use Hooke’s law inside each crystallite, thus explicitly accounting for the anisotropy of the rocks and the stress-strain state under loading. Inelastic behavior is described in the frameworks of two different models: the von Mises yield criterion and a modified Drucker-Prager yield criterion. Damage accumulation theory is also implemented in order to describe the failure process. The obtained effective characteristics of the rocks are then used for modelling the evolution of the rock mass when mining is carried out by either open-pit or underground openings.
Keywords: damage accumulation, Drucker-Prager yield criterion, failure, mathematical modelling, three-point bending, uniaxial compression
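A small Python sketch of the Drucker-Prager yield check referred to above, evaluated for a uniaxial compression state; the material constants alpha and k are placeholders that would be fitted to the measured strengths, and the paper's modified criterion may differ in detail.

```python
import numpy as np

def drucker_prager(sigma, alpha, k):
    """f = alpha*I1 + sqrt(J2) - k; f >= 0 signals inelastic behavior."""
    I1 = np.trace(sigma)
    s = sigma - I1 / 3.0 * np.eye(3)        # deviatoric stress
    J2 = 0.5 * np.sum(s * s)
    return alpha * I1 + np.sqrt(J2) - k

# Uniaxial compression of 80 MPa (tension-positive convention)
sigma = np.diag([-80.0, 0.0, 0.0])
print(f"f = {drucker_prager(sigma, alpha=0.25, k=30.0):.1f} MPa")
```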
Procedia PDF Downloads 175
4630 Learning the Dynamics of Articulated Tracked Vehicles
Authors: Mario Gianni, Manuel A. Ruiz Garcia, Fiora Pirri
Abstract:
In this work, we present a Bayesian non-parametric approach to modelling the motion control of ATVs. The motion control model is based on a Dirichlet Process-Gaussian Process (DP-GP) mixture model. The DP-GP mixture model provides a flexible representation of patterns of control manoeuvres along trajectories of different lengths and discretizations. The model also estimates the number of patterns sufficient for modeling the dynamics of the ATV.
Keywords: Dirichlet processes, gaussian mixture models, learning motion patterns, tracked robots for urban search and rescue
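A minimal numpy sketch of the Dirichlet-process side of such a model: the truncated stick-breaking construction, which yields mixture weights without fixing the number of components in advance. The full DP-GP model additionally attaches a Gaussian-process draw to each component; that part is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, K=50):
    """Truncated stick-breaking: pi_k = beta_k * prod_{j<k} (1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

weights = stick_breaking(alpha=2.0)
# Components with non-negligible weight ~ the inferred number of patterns
print("effective number of patterns:", int(np.sum(weights > 0.01)))
```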
Procedia PDF Downloads 449
4629 Open Fields' Dosimetric Verification for a Commercially-Used 3D Treatment Planning System
Authors: Nashaat A. Deiab, Aida Radwan, Mohamed Elnagdy, Mohamed S. Yahiya, Rasha Moustafa
Abstract:
This study evaluates and investigates the dosimetric performance of our institution's 3D treatment planning system, Elekta PrecisePLAN, for open 6 MV fields, including square, rectangular, varied-SSD, centrally blocked, missing-tissue, square MLC, and MLC-shaped fields, guided by the QA tests recommended in AAPM TG53, the NCS Report 15 test packages, IAEA TRS 430, and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6, and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Seven different tests were applied to a solid water-equivalent phantom together with a 2D-array dose detection system; the doses calculated by the 3D treatment planning system PrecisePLAN were compared with the measured doses to verify that the dose calculations are accurate for the open fields listed above. The QA results showed dosimetric accuracy of the TPS for open fields within the specified tolerance limits. However, for the large square (25 cm × 25 cm) and rectangular (20 cm × 5 cm) fields, some points in the penumbra region were out of tolerance (11.38% and 10.9%, respectively). For the SSD-variation test, the enlarged field resulting from an SSD of 125 cm for a 10 cm × 10 cm field showed an error of 0.2% at the central axis and 1.01% in the penumbra. The results yielded differences within the accepted tolerance levels as recommended, although large fields showed variations in the penumbra. These differences between the dose values predicted by the TPS and the measured values at the same point may result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
Keywords: quality assurance, dose calculation, 3D treatment planning system, photon beam
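The pass/fail decision in such tests reduces to the local percent difference between calculated and measured dose, compared against the tolerance tables of TG53/TRS 430. A short Python sketch with illustrative relative doses (not measured data from this study):

```python
import numpy as np

def percent_diff(d_calc, d_meas):
    """Local percent dose difference, TPS-calculated vs measured."""
    return 100.0 * (d_calc - d_meas) / d_meas

d_meas = np.array([1.000, 0.982, 0.550, 0.120])   # illustrative: CAX -> penumbra
d_calc = np.array([1.002, 0.990, 0.610, 0.118])
print(np.round(percent_diff(d_calc, d_meas), 2))  # penumbra point deviates most
```

In steep-gradient regions such as the penumbra, a distance-to-agreement criterion is usually evaluated alongside the simple percent difference.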
Procedia PDF Downloads 517
4628 Cell-Cell Interactions in Diseased Conditions Revealed by Three Dimensional and Intravital Two Photon Microscope: From Visualization to Quantification
Authors: Satoshi Nishimura
Abstract:
Although much information has been garnered from the genomes of humans and mice, it remains difficult to extend that information to explain physiological and pathological phenomena. This is because the processes underlying life are by nature stochastic and fluctuate with time. Thus, we developed a novel "in vivo molecular imaging" method based on single- and two-photon microscopy. We visualized and analyzed many life phenomena, including common adult diseases. We integrated the knowledge obtained and established new models that will serve as the basis for new minimally invasive therapeutic approaches.
Keywords: two photon microscope, intravital visualization, thrombus, artery
Procedia PDF Downloads 373
4627 Drivers of Liking: Probiotic Petit Suisse Cheese
Authors: Helena Bolini, Erick Esmerino, Adriano Cruz, Juliana Paixao
Abstract:
The current concern for health has increased the demand for low-calorie ingredients and functional foods such as probiotics. Understanding the reasons behind food choice, besides being a challenging task, is an important step in the development and/or reformulation of existing food products. The use of appropriate multivariate statistical techniques, such as the External Preference Map (PrefMap) associated with regression by Partial Least Squares (PLS), can help in determining those factors. Thus, this study aimed to determine, through PLS regression analysis, the sensory attributes considered drivers of liking in strawberry-flavored probiotic petit suisse cheeses sweetened with different sweeteners. Five samples at equivalent sweetness, PROB1 (sucralose 0.0243%), PROB2 (stevia 0.1520%), PROB3 (aspartame 0.0877%), PROB4 (neotame 0.0025%), and PROB5 (sucrose 15.2%), determined by the just-about-right and magnitude estimation methods, and three commercial samples, COM1, COM2, and COM3, were studied. The analysis was performed on data from QDA, carried out by 12 experts (highly trained assessors) on 20 descriptor terms, correlated with overall liking data from an acceptance test carried out by 125 consumers on all samples. The results were then submitted to PLS regression using XLSTAT software from Byossistemes. As shown in the results, three sensory descriptor terms could be considered drivers of liking of the probiotic petit suisse cheese samples added with sweeteners (p<0.05). Milk flavor was identified as a sensory characteristic with a positive impact on acceptance, while the descriptors bitter taste and sweet aftertaste were perceived as terms with a negative impact on the acceptance of the probiotic petit suisse cheeses. It can be concluded that PLS regression analysis is a practical and useful tool for determining drivers of liking of probiotic petit suisse cheeses sweetened with artificial and natural sweeteners, allowing the food industry to understand and improve its formulations, maximizing the acceptability of its products.
Keywords: acceptance, consumer, quantitative descriptive analysis, sweetener
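A Python sketch of the PLS step described above, regressing mean liking on mean QDA descriptor intensities; the sample means below are hypothetical, and a real analysis would use all 20 descriptors and cross-validated component selection.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

descriptors = ["milk flavor", "bitter taste", "sweet aftertaste"]
X = np.array([[6.1, 1.2, 1.0],     # hypothetical QDA panel means per sample
              [5.0, 2.8, 2.5],
              [5.5, 2.0, 1.9],
              [6.4, 0.9, 0.8],
              [4.6, 3.1, 2.9]])
y = np.array([7.2, 5.1, 6.0, 7.6, 4.5])   # hypothetical mean overall liking

pls = PLSRegression(n_components=2).fit(X, y)
# Coefficient signs separate drivers of liking from drivers of rejection
for name, coef in zip(descriptors, pls.coef_.ravel()):
    print(f"{name:16s} {coef:+.2f}")
```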
Procedia PDF Downloads 446
4626 Berry Phase and Quantum Skyrmions: A Loop Tour in Physics
Authors: Sinuhé Perea Puente
Abstract:
In several physical systems, the whole can be obtained as an exact copy of each of its parts, which facilitates the study of a complex system by looking carefully at its elements separately. Reductionism offers simplified models that make problems easier, but "there's plenty of room...at the mesoscopic scale". Here we present a tour through two of its representatives: the Berry phase and skyrmions, studying some of their basic definitions and properties, and two cases in which both arise together, finishing by constraining the scale of our mesoscopic system in the quest for quantum skyrmions and discovering which properties are conserved and which may be destroyed.
Keywords: condensed matter, quantum physics, skyrmions, topological defects
Procedia PDF Downloads 146
4625 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections
Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang
Abstract:
Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as economic and architectural advantages. However, the use of this configuration in construction is limited by the difficulty of connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcoming this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between the concrete, the bolt, and the steel section. In recent years, research in structural performance has moved to a more sophisticated and efficient approach consisting of machine learning algorithms that generate metamodels. This method reduces the need for developing complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data-fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction through the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels generated by means of artificial neural network algorithms, which predict the connection stress and stiffness based on the design parameters when using Extended Hollo-Bolt blind bolts. It also considers the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.
Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling
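A minimal Python sketch of the metamodel idea: a small neural network regressor mapping design parameters to connection stiffness. The input features, the synthetic linear-plus-noise response, and all numeric ranges are assumptions standing in for the FE and test database.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Assumed design parameters: bolt diameter (mm), tube wall (mm), f_c (MPa)
X = rng.uniform([12.0, 5.0, 25.0], [20.0, 12.0, 60.0], size=(200, 3))
k = 3.1 * X[:, 0] + 8.4 * X[:, 1] + 0.9 * X[:, 2] + rng.normal(0.0, 2.0, 200)

meta = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(32, 32),
                                  max_iter=5000, random_state=0))
meta.fit(X, k)                                   # train on FE/test results
print(meta.predict([[16.0, 8.0, 40.0]]))         # stiffness for a new design
```

Once trained, evaluating the metamodel is orders of magnitude cheaper than a finite element run, which is what makes design-space searches practical.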
Procedia PDF Downloads 158
4624 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls
Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin
Abstract:
The objective of this paper is to investigate the effects of thickness and axial loading during a fire test on the load-bearing capacity of a fire-damaged normal-strength concrete wall. Both factors influence the temperature distributions in the concrete members, which are mainly obtained through numerous experiments. Toward this goal, three wall specimens of different thicknesses were heated for 2 h according to the ISO standard heating curve, and the temperature distributions through the thicknesses were measured using thermocouples. In addition, two wall specimens were heated for 2 h while simultaneously being subjected to a constant axial loading at their top sections. The test results show that the temperature distribution during a fire test depends on both the wall thickness and the axial load. After the fire tests, the specimens were cured for one month, followed by load testing. The heated specimens were compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls show a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference is evident with respect to the wall thickness. To validate the experimental results, finite element models were generated using the material properties obtained from the experiments at elevated temperatures, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, was then applied to model fire-damaged walls 2,800 mm high, a typical story height of residential buildings in Korea, considering the buckling effect. The models for structural analysis were generated from the deformed shape after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint of the pinned ends. The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load applied during the fire is within approximately 5%.
Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis
Procedia PDF Downloads 215
4623 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP
Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang
Abstract:
Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinder ecological modeling. ClimateAP, a software package developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture. However, its adoption remains limited. This study aims to confirm the validity of the biologically relevant variable data generated by ClimateAP for the normal climate period through comparison with the currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP in comparison with the commonly used gridded data from WorldClim 1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using the adjusted R-squared and the root mean squared error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and the Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata. A comparative analysis of the prediction accuracies of RF models constructed with the distinct climate data sources was conducted to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAP v3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim 1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases, with 3,745 unique points. ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of the variance in monthly maximum, minimum, and average temperature and precipitation, respectively. It outperforms WorldClim in 37 biologically relevant variables, with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yield lower adjusted R-squared values than the gridded data in high-elevation, rugged, and mountainous areas, but higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP are thus validated against observations from weather stations. The use of ClimateAP leads to an improvement in data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to using gridded data.
Keywords: climate data validation, data quality, Asia pacific climate, climatic niche modeling, random forest models, tree species
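A Python sketch of the climatic-niche step: a Random Forest classifier predicting species occurrence from climate variables. The data here are random stand-ins for the ClimateAP station extractions and the 3,745 occurrence points; the variable names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(3745, 4))     # e.g. scaled MAT, MAP, Tmin, Tmax (assumed)
# Synthetic occurrence (1) vs pseudo-absence (0) driven by two variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.5, 3745) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(f"occurrence prediction accuracy: {rf.score(X_te, y_te):.2%}")
```

Comparing this accuracy for models trained on ClimateAP inputs versus gridded inputs is exactly the comparison reported above (77.90% vs 82.77%).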
Procedia PDF Downloads 68
4622 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) draws on a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) to address different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the users' context models. The correct interpretation of these context models depends highly on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations capture the current state of a universe of discourse at a given point in time while neglecting the transitions between states. However, the AAL domain currently lacks approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. Furthermore, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best results with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent has an objective to achieve, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
Keywords: ambient intelligence, machine learning, semantic web, software agents
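A tiny Python sketch of the kind of probabilistic rule adaptation outlined above: a Bayesian update of a rule's applicability from repeated sensor evidence, starting from an expert-defined prior. The likelihood values are invented for illustration; the paper's actual estimation procedure is richer than this.

```python
def update_rule_confidence(prior, p_e_rule, p_e_not_rule):
    """Bayes rule: P(rule | evidence) from prior and evidence likelihoods."""
    p_evidence = p_e_rule * prior + p_e_not_rule * (1.0 - prior)
    return p_e_rule * prior / p_evidence

p = 0.7                      # expert-defined prior confidence in the rule
for step in range(3):        # three confirming sensor observations
    p = update_rule_confidence(p, p_e_rule=0.8, p_e_not_rule=0.3)
    print(f"after observation {step + 1}: P(rule) = {p:.3f}")
```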
Procedia PDF Downloads 281
4621 Comparison of High Speed Railway Bridge Foundation Design
Authors: Hussein Yousif Aziz
Abstract:
This paper discusses the design and analysis of a bridge foundation subjected to train loads according to three codes, namely the AASHTO code, British Standard BS 8004 (1986), and the Chinese code (TB10002.5-2005). The study focused on manually designing and analyzing the bridge foundation with the three codes, determining which code is better for design and best controls the problem of high settlement due to the applied loads. The results showed that the Chinese code is costly, in that the number of reinforcement bars in the pile cap and piles is greater than with the AASHTO code and the BS code for the same dimensions. The settlement of the bridge was calculated based on data collected from the project site. The vertical ultimate bearing capacity of a single pile for the three codes is also discussed. Further analyses using the two-dimensional Plaxis program and other programs such as SAP2000 14 and PROKON were performed to calculate many parameters. The maximum values of the vertical displacement are close to the calculated ones. The results indicate that the AASHTO code is economical and safer with respect to the bearing capacity of a single pile. The purpose of this project is to study the pier on the basis of the pile foundation design. A 32 m simply supported box-section beam sits on top of the structure, and the bridge pier is round. The main component of the design is the calculation of the pile foundation and its settlement. According to the related data, bored piles 1.0 m in diameter and 48 m long were chosen. The piles are laid out in a rectangular pile cap with dimensions of 12 m × 9 m. Because of the interaction factors of pile groups, the load-bearing capacity of the single pile must be checked; the punching resistance, shearing strength, and bending of the pile cap are all very important to the stability of the structure. Checking the bearing capacity of the soft subsoil under the pile foundation is also necessary. This project provides a deeper analysis and comparison of pile foundation design schemes. First, the construction situation of the bridge is briefly introduced. Using the actual geological features of the site and the load from the superstructure, this paper analyzes the bearing capacity and settlement of a single pile. The Equivalent Pier Method is used to calculate and analyze the settlements of the piles.
Keywords: pile foundation, settlement, bearing capacity, civil engineering
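A short Python sketch of the single-pile ultimate bearing capacity check mentioned above, Q_u = q_b·A_b + Σ f_s,i·A_s,i (end bearing plus shaft friction), using the stated pile geometry; the soil profile and unit resistances are hypothetical.

```python
import numpy as np

D, L = 1.0, 48.0                 # bored pile diameter and length (m)
A_b = np.pi * D**2 / 4.0         # base area (m^2)
q_b = 2500.0                     # assumed unit end bearing (kPa)
layers = [(20.0, 40.0),          # (layer thickness m, unit shaft friction kPa)
          (18.0, 60.0),          # hypothetical soil profile summing to 48 m
          (10.0, 80.0)]

Q_shaft = sum(t * np.pi * D * f for t, f in layers)   # kN
Q_base = q_b * A_b                                    # kN
print(f"Q_u = {Q_shaft + Q_base:.0f} kN "
      f"(shaft {Q_shaft:.0f} kN, base {Q_base:.0f} kN)")
```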
Procedia PDF Downloads 421
4620 Prenatal Care Can Reduce the Burden of Preterm Birth and Low Birthweight from Maternal Sexually Transmitted Infections: US National Data
Authors: Anthony J. Kondracki, Bonzo I. Reddick, Jennifer L. Barkin
Abstract:
We sought to examine the association of maternal Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Treponema pallidum (TP, syphilis) infections with preterm birth (PTB) (<37 weeks gestation) and low birth weight (LBW) (<2500 grams), and with prenatal care (PNC) attendance. This cross-sectional study was based on data drawn from the 2020 United States National Center for Health Statistics (NCHS) Natality File. We estimated the prevalence of all births, early/late PTBs, moderately/very LBW, and the distribution of sexually transmitted infections (STIs) according to maternal characteristics in the sample. In multivariable logistic regression models, we examined adjusted odds ratios (aORs) and their corresponding 95% confidence intervals (CIs) for PTB and LBW subcategories in association with maternal/infant characteristics, PNC status, and maternal CT, NG, and TP infections. In separate logistic regression models, we assessed the risk of these newborn outcomes stratified by PNC status. Adjustments were made for race/ethnicity, age, education, marital status, health insurance, live-born parity, previous preterm birth, gestational hypertension, gestational diabetes, PNC status, smoking, and infant sex. Additionally, in a sensitivity analysis, we assessed the association with early, full, and late term births and the potential impact of unmeasured confounding using the E-value. CT (1.8%) was the most prevalent STI in pregnancy, followed by NG (0.3%) and TP (0.1%). Non-Hispanic Black women, those 20-24 years old, those with a high school education, and those on Medicaid had the highest rates of STIs. Around 96.6% of women reported receiving PNC, and about 60.0% initiated PNC early in pregnancy. PTB and LBW were strongly associated with NG infection (12.2% and 12.1%, respectively), late initiation of or no PNC (8.5% and 7.6%, respectively), and ≤10 prenatal visits received (13.1% and 10.3%, respectively). The odds of PTB and LBW were 2.5- to 3-fold higher for each STI among women who received ≤10 prenatal visits than among those who received more. Adequate prenatal care utilization and timely screening and treatment of maternal STIs can substantially reduce the burden of adverse newborn outcomes.
Keywords: low birthweight, prenatal care, preterm birth, sexually transmitted infections
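A Python sketch of how such adjusted odds ratios come out of a multivariable logistic regression: fit the model, then exponentiate the coefficients. The data below are synthetic stand-ins for the natality records, with only two covariates for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
sti = rng.binomial(1, 0.02, n)           # maternal STI indicator (assumed rate)
few_visits = rng.binomial(1, 0.30, n)    # <=10 prenatal visits (assumed rate)
logit = -2.6 + 0.9 * sti + 0.5 * few_visits
ptb = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # preterm birth outcome

X = sm.add_constant(np.column_stack([sti, few_visits]))
fit = sm.Logit(ptb, X).fit(disp=False)
print("aOR (STI, <=10 visits):", np.round(np.exp(fit.params[1:]), 2))
print("95% CI:\n", np.round(np.exp(fit.conf_int()[1:]), 2))
```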
Procedia PDF Downloads 173
4619 Parameter Estimation via Metamodeling
Authors: Sergio Haram Sarmiento, Arcady Ponosov
Abstract:
Based on appropriate multivariate statistical methodology, we suggest a generic framework for efficient parameter estimation for ordinary differential equations and the corresponding nonlinear models. In this framework, classical linear regression strategies are refined into a nonlinear regression by a locally linear modelling technique (known as metamodelling). The approach identifies those latent variables of the given model that accumulate the most information about it among all approximations of the same dimension. The method is applied to several benchmark problems, in particular to the so-called "power-law systems", non-linear differential equations typically used in Biochemical System Theory.
Keywords: principal component analysis, generalized law of mass action, parameter estimation, metamodels
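A compact numpy sketch of the latent-variable idea behind such metamodels: principal components of simulated trajectories capture the most variance (information) per dimension and serve as inputs to a locally linear approximation. The trajectory matrix is random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(3)
trajectories = rng.normal(size=(100, 10))   # stand-in for sampled ODE solutions
trajectories[:, 0] *= 5.0                   # one artificially dominant direction

Xc = trajectories - trajectories.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
explained = S**2 / np.sum(S**2)
print(f"variance captured by first 2 latent variables: {explained[:2].sum():.2f}")
scores = Xc @ Vt[:2].T    # latent coordinates used by the local linear model
```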
Procedia PDF Downloads 517
4618 Behavior Loss Aversion Experimental Laboratory of Financial Investments
Authors: Jihene Jebeniani
Abstract:
We propose an approach combining the techniques of experimental economics with the flexibility of discrete choice models in order to test for loss aversion. Our main objective was to test the loss aversion postulated by Cumulative Prospect Theory (CPT). We developed a laboratory experiment in the context of financial investments that aimed to analyze the attitude of investors towards risk. The study uses lotteries and is based on econometric modeling. The estimated model was an ordered probit.
Keywords: risk aversion, behavioral finance, experimental economic, lotteries, cumulative prospect theory
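For context, a small Python sketch of the CPT value function whose loss-aversion parameter λ such experiments estimate; the parameter values are the Tversky-Kahneman (1992) median estimates, not results from this study.

```python
import numpy as np

def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """v(x) = x**alpha for gains, -lam*(-x)**beta for losses."""
    x = np.asarray(x, dtype=float)
    gains = np.clip(x, 0.0, None) ** alpha
    losses = -lam * np.clip(-x, 0.0, None) ** beta
    return np.where(x >= 0.0, gains, losses)

# Losses loom larger: |v(-100)| is ~2.25 times v(+100)
print(np.round(cpt_value([100.0, -100.0]), 1))
```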
Procedia PDF Downloads 471
4617 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars
Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic
Abstract:
Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom for remanufacturing-oriented design. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced power train wear of electric cars, combined with the high manufacturing costs of batteries, allows new business models and even second-life applications. Modular and intermountable battery pack designs enable the replacement of defective or outdated battery cells, allowing additional cost savings and a prolonged lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components, and how to address their potential with elaborate designs. Based on a brief overview of remanufacturing structures implemented in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis indicates that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database from different research projects focusing on the recycling, second use, and assembly of lithium ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in resource consumption and greenhouse warming potential. Exemplary design guidelines for future remanufacturable lithium ion batteries, covering modularity, interfaces, and disassembly, are used to illustrate the findings. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.
Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing
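A toy Python sketch of the per-pack cost comparison underlying such a cost-benefit model, splitting cost into material, labor, and allocated capital; every figure is hypothetical and serves only to show the structure of the calculation.

```python
def pack_cost(material, labor, capital_per_year, packs_per_year):
    """Unit cost = direct material + labor + allocated capital cost."""
    return material + labor + capital_per_year / packs_per_year

new = pack_cost(material=4500.0, labor=600.0,
                capital_per_year=2.0e6, packs_per_year=10_000)
reman = pack_cost(material=1200.0, labor=900.0,   # reused cells cut material,
                  capital_per_year=1.5e6,          # but disassembly adds labor
                  packs_per_year=8_000)
print(f"new: {new:.0f} EUR/pack, remanufactured: {reman:.0f} EUR/pack")
```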
Procedia PDF Downloads 358
4616 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions
Authors: Mahalakshmi Vivekanandan S.
Abstract:
Following the recent changes in global policies, climate-related changes and the impact they cause across every sector are viewed as green swan events: in essence, climate-related changes can happen often and lead to risk and considerable uncertainty, but they need to be mitigated rather than treated as black swan events. This raises the question of how this risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk, and others. The models required to compute this risk have to consider the different industrial needs of the counterparty as well as the factors contributing to it, be it in the form of different risk drivers, different transmission channels, different approaches, or the granularity of data availability. This suggests that climate-related change, though it affects Pillar I risks, will be a Pillar II risk. It has to be modeled specifically based on the financial institution's actual exposure to different industries instead of generalizing the risk charge, and it will have to be considered as additional capital to be held by the financial institution on top of its Pillar I risks and its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks, for both credit and market risks. This framework helps in assessing different scenarios and how the different transition risks affect the risk associated with the different parties. The paper delves into the increase in the concentration of greenhouse gases, which in turn causes global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is rapidly being disrupted: the automobile industry. The framework is used to show how climate changes and changes to the relevant policies have impacted an entire financial institution. Appropriate statistical models for forecasting, anomaly detection, and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also covers the climate risk contribution to the Pillar II capital calculations, and why it makes sense for the bank to maintain this in addition to its regular Pillar I and Pillar II capital.
Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling
Procedia PDF Downloads 140
4615 Supercritical Hydrothermal and Subcritical Glycolysis Conversion of Biomass Waste to Produce Biofuel and High-Value Products
Authors: Chiu-Hsuan Lee, Min-Hao Yuan, Kun-Cheng Lin, Qiao-Yin Tsai, Yun-Jie Lu, Yi-Jhen Wang, Hsin-Yi Lin, Chih-Hua Hsu, Jia-Rong Jhou, Si-Ying Li, Yi-Hung Chen, Je-Lueng Shie
Abstract:
Raw food waste has a high water content. If it is incinerated, it increases the cost of treatment; therefore, composting or energy recovery is usually used. Although there are mature technologies for composting food waste, odor, wastewater, and other problems are serious, and the output of compost products is limited. Bakelite, in turn, is mainly used in the manufacture of integrated circuit boards. It is hard to recycle and reuse directly due to its rigid structure, and it is also difficult to incinerate, producing air pollutants through incomplete incineration. In this study, supercritical hydrothermal and subcritical glycolysis thermal conversion technologies are used to convert the biomass wastes of bakelite and raw kitchen waste into carbon materials and biofuels. Batch carbonization tests are performed under high-temperature, high-pressure solvent conditions and different operating conditions, including wet- and dry-base mixed biomass. This study can be divided into two parts. In the first part, bakelite waste is treated as a dry-base industrial waste, and in the second part, raw kitchen wastes (lemon, banana, watermelon, and pineapple peel) are used as wet-base biomass. The parameters include the reaction temperature, reaction time, mass-to-solvent ratio, and volume filling rate. The yield, conversion, and recovery rates of the products (solid, gas, and liquid) are evaluated and discussed. The results explore the benefits of synergistic effects of thermal glycolysis dehydration and carbonization on the yield and recovery rate of the solid products. The purpose is to obtain the optimum operating conditions. This technology is a biomass-negative carbon technology (BNCT); if it is combined with carbon capture and storage (BECCS), it can provide a new direction for 2050 net zero carbon dioxide emissions (NZCDE).
Keywords: biochar, raw food waste, bakelite, supercritical hydrothermal, subcritical glycolysis, biofuels
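A minimal Python sketch of the mass-based yield and recovery figures used to compare such batch runs; the masses are illustrative, not measured results from this study.

```python
feed = 100.0                                             # g of dry feed
products = {"solid": 38.0, "liquid": 41.0, "gas": 12.0}  # hypothetical g out

for phase, mass in products.items():
    print(f"{phase} yield: {100.0 * mass / feed:.1f} wt%")
print(f"total recovery: {100.0 * sum(products.values()) / feed:.1f} wt%")
```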
Procedia PDF Downloads 179
4614 Droplet Entrainment and Deposition in Horizontal Stratified Two-Phase Flow
Authors: Joshua Kim Schimpf, Kyun Doo Kim, Jaseok Heo
Abstract:
In this study, the droplet behavior under the horizontal stratified flow regime is examined for air-water flow in horizontal pipe experiments with pipe diameters of 0.24 m, 0.095 m, and 0.0486 m. The effects of gravity, pipe diameter, and turbulent diffusion on droplet deposition are considered. Models for droplet entrainment and deposition are proposed that take the developing length into account. Validation against the experimental data from the REGARD (CEA) and Williams (University of Illinois) experiments was performed using SPACE (Safety and Performance Analysis Code for Nuclear Power Plants).
Keywords: droplet, entrainment, deposition, horizontal
Procedia PDF Downloads 377
4613 Numerical Modeling of the Depth-Averaged Flow over a Hill
Authors: Anna Avramenko, Heikki Haario
Abstract:
This paper reports the development and application of a 2D depth-averaged model. The main goal of this contribution is to apply the depth-averaged equations to a wind park model in which the geometry is introduced into the mathematical model through mass and momentum source terms. The depth-averaged model will be used in the future to find the optimal positions of wind turbines in the wind park. The k-ε and 2D LES turbulence models were considered in this article. 2D CFD simulations of a single hill were performed to check the depth-averaged model in practice.
Keywords: depth-averaged equations, numerical modeling, CFD, wind park model
Procedia PDF Downloads 603
4612 Interfacing and Replication of Electronic Machinery Using MATLAB/SIMULINK
Authors: Abdulatif Abdulsalam, Mohamed Shaban
Abstract:
This paper introduces the interfacing and simulation of electronic machinery based on the MATLAB/SIMULINK simulation package. The simulated components include dc-dc converters, power-factor rectifiers, induction machines, dc machines, synchronous machines, and more complete systems. The power-factor rectifier model includes solid-state device models. The tools provide a clear-cut structure for the simulation of complex dynamic systems connected with power electronic machines.
Keywords: power electronics, machine, MATLAB, simulink
Procedia PDF Downloads 358
4611 Seismic Isolation of Existing Masonry Buildings: Recent Case Studies in Italy
Authors: Stefano Barone
Abstract:
Seismic retrofit of buildings through base isolation represents a consolidated protection strategy against earthquakes. It consists of decoupling the motion of the structure from the ground motion by introducing anti-seismic devices at the base of the building, characterized by high horizontal flexibility and medium/high dissipative capacity. This makes it possible to protect the structural elements and to limit damage to the non-structural ones. For these reasons, full functionality is guaranteed after an earthquake event. Base isolation is applied extensively to both new and existing buildings. For the latter, it usually does not require any interruption of the use of the structure or evacuation of the occupants, a special advantage for strategic buildings such as schools, hospitals, and military buildings. This paper describes the application of seismic isolation to three existing masonry buildings in Italy: Villa “La Maddalena” in Macerata (Marche region) and the “Giacomo Matteotti” and “Plinio Il Giovane” school buildings in Perugia (Umbria region). The seismic hazard of the sites is characterized by a Peak Ground Acceleration (PGA) of 0.213g-0.287g for the Life Safety Limit State and 0.271g-0.359g for the Collapse Limit State. All the buildings are isolated with a combination of free sliders of type TETRON® CD with a confined elastomeric disk and anti-seismic rubber isolators of type ISOSISM® HDRB, arranged to reduce the eccentricity between the center of mass and the center of stiffness, thus limiting torsional effects during a seismic event. The isolation systems are designed to lengthen the original period of vibration (i.e., without isolators) by at least three times and to guarantee medium/high levels of energy dissipation capacity (equivalent viscous damping between 12.5% and 16%). This allows the structures to resist 100% of the seismic design action. This article shows the performance of the supplied anti-seismic devices, with particular attention to the experimental dynamic response. Finally, a special focus is given to the main site activities required to isolate a masonry building.
Keywords: retrofit, masonry buildings, seismic isolation, energy dissipation, anti-seismic devices
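A back-of-the-envelope Python sketch of the period-lengthening design target: idealizing the isolated building as a rigid mass on the isolation layer, T = 2π√(m/K) gives the total horizontal stiffness required for a threefold period increase. The mass and fixed-base period are hypothetical, not values from these three buildings.

```python
import numpy as np

m = 2.0e6                 # assumed seismic mass (kg)
T_fixed = 0.4             # assumed fixed-base fundamental period (s)
T_iso = 3.0 * T_fixed     # design target: period lengthened three times

# T = 2*pi*sqrt(m/K)  =>  K = 4*pi**2*m / T**2
K = 4.0 * np.pi**2 * m / T_iso**2
print(f"required total isolation stiffness: {K / 1e6:.1f} kN/mm")
```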
Procedia PDF Downloads 72
4610 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change
Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee
Abstract:
In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investments required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, the selection of consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which owing to the heat capacity of buildings feature relatively high flexibility, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investments required for the control systems enabling direct load control demand response programs, this paper presents a recent solution to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investments needed for implementing direct load control programs for residential HVAC systems and for estimating the demand response potential in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data, using the K-means algorithm. Then, by applying a recent algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on the equivalent desired temperature setpoint changes.
Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems
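A Python sketch of the first step, clustering consumers by the correlation between their hourly load and the ambient temperature; the synthetic profiles stand in for smart-meter data, and the three-cluster choice is an assumption for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
hours = 24 * 30
temp = 25.0 + 8.0 * np.sin(np.linspace(0.0, 60.0 * np.pi, hours)) \
       + rng.normal(0.0, 1.0, hours)                  # synthetic temperature

n = 200
ac_share = rng.uniform(0.0, 1.0, n)                   # AC-driven load fraction
load = (ac_share[:, None] * np.clip(temp - 24.0, 0.0, None)
        + rng.normal(1.0, 0.3, (n, hours)))           # base load + AC load

corr = np.array([np.corrcoef(load[i], temp)[0, 1] for i in range(n)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0) \
             .fit_predict(corr.reshape(-1, 1))
for c in range(3):
    print(f"cluster {c}: mean load-temperature correlation "
          f"{corr[labels == c].mean():+.2f}")
```

Consumers in the high-correlation cluster are the candidates whose setpoint-change potential is then quantified.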
Procedia PDF Downloads 68