Search results for: maximal prefix code

402 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in a handwritten image, whether of irregular, digit, or character shape. Objects and their internal objects are difficult to extract when the image structure contains a bulk of clusters. Estimation results are readily obtained by identifying the sub-regional objects with the SASK algorithm. The main focus is to recognize the number of internal objects present in a given image, in a shadow-free and error-free manner. Hard clustering and density clustering of the rough set obtained from the image are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally the hull detection system. Detecting sub-regional hulls can increase machine-learning capability in the detection of characters, and the method can be extended to recognize hulls even in irregularly shaped objects, such as black holes in space exploration together with their intensities. Layered hulls are those having structured layers inside, which is useful in military services and traffic monitoring to identify the number of vehicles or persons. The proposed SASK algorithm is helpful for this kind of region identification and can support decision processes (clearing traffic, or identifying the number of persons on the opponent's side in a war).

Keywords: chain code, Hull regions, Hough transform, Hull recognition, Layered Outline Extraction, SASK algorithm

Procedia PDF Downloads 318
401 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical in reducing time to market in CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumed by Networks-on-Chip. Researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we choose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce the cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types. The cache misses were reduced by 18.52%, 5.34% and 3.91% by employing these techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 174
400 The Influence of Immunity on the Behavior and Dignity of Judges

Authors: D. Avnieli

Abstract:

Immunity of judges from liability represents a departure from the principle that all are equal under the law and that victims may be granted compensation from their offenders. The purpose of the study is to determine whether judicial immunity coincides with the need to ensure the existence of a highly independent and incorruptible judiciary. Judges are immune from civil and criminal liability for their judicial acts. Judicial immunity is justified by the need to maintain complete independence and discretion of the judiciary. Scholars and judges believe that absolute immunity is needed to shield judges from pressures, threats, or outside interference. It is commonly accepted that judges should be free to perform their judicial role in accordance with their assessment of the facts and their understanding of the law, without any restrictions, influences, inducements or interferences. In most countries, immunity applies even when judges act in excess of jurisdiction; in some countries, it applies even when they act maliciously or corruptly. The only exception to absolute immunity applicable in all judicial systems is when judges act without jurisdiction over the subject matter. The Israeli Supreme Court recently decided to embrace absolute immunity and strike out the lawsuit of a refugee who had been unlawfully incarcerated: the Court ruled that the plaintiff could not sue the State or the judge for damages, and the questions of malice, dignity, and public scrutiny were not discussed. This paper, based on a comparative analysis of many cases, aims to determine whether immunity affects the dignity and behavior of judges. It demonstrates that most judges maintain their dignity and ethical code of behavior, but sometimes do not hesitate to act consciously in excess of jurisdiction, and in rare cases even corruptly. Therefore, in order to maintain an independent and incorruptible judiciary, immunity should not be applied where judges act consciously in excess of jurisdiction or with malicious incentives.

Keywords: incorruptible judiciary, immunity, independent, judicial, judges, jurisdiction

Procedia PDF Downloads 85
399 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is rising to support more complex processing and operation. The operating system, however, which provides the soul of the computer, stalled in its development for a time. Faced with the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System was too simple and left little room for innovation, so it was not a good choice, and MacOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems around a modular kernel, which makes its core architecture relatively powerful. The Linux system supports all Internet protocols, so it has very good network functionality. Linux supports multiple users, and users cannot interfere with one another's files. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is constantly upgraded and improved, and many different versions have been issued, suitable for both community and commercial use. Linux has good security, which relies on its file and partition system. However, as vulnerabilities and hazards are constantly discovered, the security of the operating system in use also needs continued attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 88
398 Herbal Medicinal Materials for Health/Functional Foods in Korea

Authors: Chang-Hwan Oh, Young-Jong Lee

Abstract:

In April 2015, the Ministry of Food and Drug Safety announced that, of the 207 products listing Cynanchum wilfordii Radix among their ingredients, only 10 were confirmed to be free of "iyeobupiso", the counterfeit version of "baeksuo". This caused confusion among consumers who had purchased health/functional foods supposedly containing the herbal medicinal material known in Korean as "baeksuo". Baeksuo is the main ingredient of the product "EstroG-100" (NaturalEndoTech, S. Korea), which also contains Phlomis umbrosa and Angelica gigas. The hot-water extract of these herbal medicinal materials (HMMs) was approved as a product-specific Health/Functional Food (HFF), with a helpful function for women reaching menopause, by the Korea Food & Drug Administration (now the Ministry of Food & Drug Safety). The origin of "baeksuo" in Korea is the root of Cynanchum wilfordii Hemsley (whereas in China the root of Cynanchum auriculatum Royle ex Wight is considered the origin of "baeksuo"). In Korea, about 116 HMMs are listed as food materials in the Korea Food Code, among a total of 187 HMMs that may be used for food and medicine purposes simultaneously. However, there is some chance that HMMs in shared food and medicine use could be misused, with respect to either the plant part used or substitution by HMMs not permitted for HFFs, as in the "baeksuo" case. In this study, some HMMs in shared food and medicine use are examined in order to reduce the chance of HMM misuse in HFFs in Korea. For this purpose, the origin, shape, edible parts, efficacy and side effects of similar HMMs liable to be misused in HFFs are investigated.

Keywords: herbal medicinal materials, healthy/functional foods, misuse, shared use

Procedia PDF Downloads 272
397 Pre and Post IFRS Loss Avoidance in France and the United Kingdom

Authors: T. Miková

Abstract:

This paper analyzes the effect of a single uniform accounting rule on reporting quality by investigating the influence of IFRS on earnings management. It examines whether earnings management is reduced after IFRS adoption through the use of "loss avoidance thresholds", a method that has been verified in earlier studies. The paper concentrates on two European countries: one that represents the continental code-law tradition with weak protection of investors (France) and one that represents the Anglo-American common-law tradition, which typically implies a strong enforcement system (the United Kingdom). The research investigates a sample of 526 companies (6822 firm-year observations) during the years 2000 - 2013. The results are different for the two jurisdictions. This study demonstrates that a single set of accounting standards contributes to better reporting quality and reduces the pervasiveness of earnings management in France. In contrast, there is no evidence that a reduction in earnings management followed the implementation of IFRS in the United Kingdom. Given that IFRS benefit France but not the United Kingdom, other political and economic factors, such as the legal system or capital market strength, must play a significant role in influencing the comparability and transparency of companies' financial statements across borders. Overall, the results suggest that IFRS contribute moderately to the accounting quality of reported financial statements and bring benefits for stakeholders, though the role played by other economic factors cannot be discounted.

Keywords: accounting standards, earnings management, international financial reporting standards, loss avoidance, reporting quality

Procedia PDF Downloads 185
396 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir

Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede

Abstract:

Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is increasing. This can impose harmful effects on the quality of the downstream water. Therefore, predicting the concentration of discharged pollutants at different times and distances in the area of interest is of high importance for carrying out preventative and controlling measures, as well as for avoiding consumption of the contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) is discretized over the study domain by the Backward Time-Backward Space (BTBS) scheme. Then, the discretized equations for each node are derived according to the initial condition, boundary conditions and point contaminant sources. Finally, taking into account appropriate time and space steps, a computational code was set up in MATLAB, and the contaminant concentration was obtained at different times and distances. Simulation results show that using the BTBS differencing scheme together with the FVM as the numerical method for solving the partial differential equation of transport is an appropriate approach for two-dimensional contaminant transfer in an advective-diffusive flow.
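
As an illustration of the numerical approach (not the authors' actual MATLAB code, which treats the full 2D reservoir with blocks and three sources), a minimal 1D sketch of a backward-time (implicit), backward-space upwind finite volume discretization of the advection-diffusion equation could look as follows; grid size, velocity, diffusivity and source strength are placeholder assumptions.

```python
import numpy as np

# 1D advection-diffusion dC/dt + u dC/dx = D d2C/dx2 + S, discretized with
# backward (implicit) time and backward (upwind) space -- a BTBS-type scheme.
L, N = 100.0, 200                 # domain length [m], number of cells (assumed)
dx, dt = L / N, 10.0              # cell size [m], time step [s] (assumed)
u, D = 0.05, 1e-3                 # velocity [m/s], diffusivity [m2/s] (assumed)
a, d = u * dt / dx, D * dt / dx**2

# Tridiagonal system (I + A) C^{n+1} = C^n + dt*S, solved at every time step.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = a + 2.0 * d
    if i > 0:
        A[i, i - 1] = -(a + d)    # upwind advection + diffusion from the left
    if i < N - 1:
        A[i, i + 1] = -d          # diffusion from the right
M = np.eye(N) + A

C = np.zeros(N)                   # initial concentration
S = np.zeros(N)
S[N // 4] = 1e-3                  # one point contaminant source (assumed strength)

for _ in range(500):
    C = np.linalg.solve(M, C + dt * S)

print("peak concentration %.3e at x = %.1f m" % (C.max(), C.argmax() * dx))
```

The implicit time discretization makes the scheme unconditionally stable, which is why comparatively large time steps can be used; the 2D version simply couples four neighbours per cell instead of two.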

Keywords: BTBS differentiating scheme, contaminant concentration, finite volume, mass transfer, water pollution

Procedia PDF Downloads 118
395 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, comprising thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer that is easy to integrate into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
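
The abstract does not give implementation details beyond the reliance on system signals (the tool itself lives in the C++/Qt ecosystem of the iFDAQ). As a minimal illustration of the underlying idea only, the sketch below registers signal handlers that dump the stacks of all running threads to a report file while letting the process continue; the report path, format and choice of signals are assumptions.

```python
import datetime
import signal
import sys
import traceback

def write_report(signum, frame):
    # Dump a stack trace of every thread to a timestamped report file.
    # The process is not stopped: the handler returns and execution resumes.
    name = f"daq_report_{datetime.datetime.now():%Y%m%d_%H%M%S}.txt"
    with open(name, "w") as f:
        f.write(f"received signal {signum}\n")
        for tid, stack in sys._current_frames().items():
            f.write(f"--- thread {tid} ---\n")
            f.write("".join(traceback.format_stack(stack)))

# Illustrative choice of signals; a real tool would also hook fatal ones.
for sig in (signal.SIGTERM, signal.SIGUSR1):
    signal.signal(sig, write_report)
```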

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 262
394 Large Eddy Simulation with Energy-Conserving Schemes: Understanding Wind Farm Aerodynamics

Authors: Dhruv Mehta, Alexander van Zuijlen, Hester Bijl

Abstract:

Large Eddy Simulation (LES) numerically resolves the large energy-containing eddies of a turbulent flow, while modelling the small dissipative eddies. On a wind farm, these large scales carry the energy that wind turbines extract and are also responsible for transporting the turbines' wakes, which may interact with downstream turbines and certainly with the atmospheric boundary layer (ABL). In this situation, it is important to conserve the energy that these wakes carry, which could otherwise be altered artificially through numerical dissipation brought about by the schemes used for spatial discretisation and temporal integration. Numerical dissipation has been reported to cause the premature recovery of turbine wakes, leading to an over-prediction of the power produced by wind farms. An energy-conserving scheme is free from numerical dissipation and ensures that the energy of the wakes is increased or decreased only by the action of molecular viscosity or the action of wind turbines (body forces). The aim is to create an LES package with energy-conserving schemes to simulate wind turbine wakes correctly and gain insight into power production, wake meandering, etc. Such knowledge will be useful in designing more efficient wind farms with minimal wake interaction, which, if unchecked, could lead to major losses in energy production per unit area of the wind farm. For their research, the authors intend to use the Energy-Conserving Navier-Stokes code developed by the Energy Research Centre of the Netherlands.
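
The defining property can be stated compactly: if the discrete convective operator C(u_h) is skew-symmetric, it contributes nothing to the kinetic-energy budget, so the resolved energy changes only through viscosity and body forces (a schematic statement of the principle, not the code's exact formulation):

\[
u_h^{T} C(u_h)\, u_h = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\left(\tfrac{1}{2}\,\lVert u_h \rVert^{2}\right)
= -\,\nu\,\lVert \nabla_h u_h \rVert^{2} + u_h^{T} f .
\]

Any change in wake energy is then attributable to molecular (or modelled subgrid) viscosity and to the turbine forcing f, never to the discretisation itself.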

Keywords: energy-conserving schemes, modelling turbulence, Large Eddy Simulation, atmospheric boundary layer

Procedia PDF Downloads 448
393 Environmental Effect on Corrosion Fatigue Behaviors of Steam Generator Forging in Simulated Pressurized Water Reactor Environment

Authors: Yakui Bai, Chen Sun, Ke Wang

Abstract:

An experimental investigation of the environmental effect on the fatigue behavior of SA508 Gr.3 Cl.2 steam generator forging for the CAP1400 nuclear power plant has been carried out. The current American Society of Mechanical Engineers (ASME) design fatigue code does not take full account of the interactions of environmental, loading, and material factors. In order to simulate the actual loading condition, a range of strain amplitudes was applied in different low cycle fatigue (LCF) tests at a strain rate of 0.01% s⁻¹. A design fatigue model was constructed by taking environmentally assisted fatigue effects into account, and the corresponding design curves were given for the convenience of engineering applications. The corrosion fatigue experiments were performed in strain-control mode in a 320 °C borated and lithiated water environment to evaluate the effects of the mixed environment on fatigue life. Stress corrosion cracking (SCC) in the steam generator large forging in the primary water of a pressurized water reactor was also observed. In addition, it is found that the corrosion fatigue (CF) life of SA508 Gr.3 Cl.2 decreases with increasing temperature in the water environment, and the relationship between the reciprocal of temperature and the logarithm of fatigue life was found to be linear. Through these experiments and the subsequent analysis, the mechanisms of reduced low cycle fatigue life have been investigated for the steam generator forging.
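
The reported linear relationship between the reciprocal of temperature and the logarithm of fatigue life is of the Arrhenius type; schematically, with A and B as fitted constants,

\[
\ln N_f = A + \frac{B}{T},
\]

so that corrosion fatigue life drops steadily as the water temperature rises over the tested range.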

Keywords: failure behavior, low alloy steel, steam generator forging, stress corrosion cracking

Procedia PDF Downloads 101
392 A Three-Dimensional (3D) Numerical Study of Roof Shape Impact on Air Quality in Urban Street Canyons with Tree Planting

Authors: Bouabdellah Abed, Mohamed Bouzit, Lakhdar Bouarbi

Abstract:

The objective of this study is to investigate numerically the effect of roof shape on wind flow and pollutant dispersion in a street canyon with one row of trees of pore volume Pvol = 96%. A three-dimensional computational fluid dynamics (CFD) model for evaluating air flow and pollutant dispersion within an urban street canyon is built using the Reynolds-averaged Navier-Stokes (RANS) equations with the k-Epsilon EARSM turbulence model as closure of the equation system. The numerical model is implemented in the ANSYS-CFX code. Vehicle emissions were simulated as double line sources along the street. The numerical model was validated against a wind tunnel experiment and agrees reasonably with the wind tunnel data. Having established this, the wind flow and pollutant dispersion in urban street canyons with six roof shapes are simulated. The results obtained in this work indicate that the flow in the 3D domain is more complicated; this complexity increases with the presence of trees and the variability of the roof shapes. The results also indicate that the largest pollutant concentration level on the two walls (leeward and windward) is observed with the upwind wedge-shaped roof, whereas the smallest pollutant concentration level is observed with the dome-shaped roof. Finally, the corner eddies provide additional ventilation and lead to lower traffic pollutant concentrations at the street canyon ends.

Keywords: street canyon, pollutant dispersion, trees, building configuration, numerical simulation, k-Epsilon EARSM

Procedia PDF Downloads 331
391 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for the numerical simulation. Polynomial and neural-network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements, and the performance of these surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. The objective functions are categorized by considering measured data with and without instrument uncertainty and are defined by the least-squares method, which estimates the norm between the predicted displacements and the measured values. Hydro Quebec provided the data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. The Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) are compared for the minimization problem; although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
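
To illustrate the optimization step alone, the sketch below runs a basic particle swarm on a least-squares misfit; the forward model here is a deliberately simple placeholder standing in for a Plaxis run or for the fitted response surfaces, and the swarm coefficients are common textbook values, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def predicted_displacements(params):
    # Placeholder forward model: in the study this role is played by a Plaxis
    # simulation or by a polynomial / neural-network response surface.
    x = np.linspace(0.0, 1.0, 20)
    return params[0] * x + params[1] * x ** 2

measured = predicted_displacements(np.array([1.5, -0.4])) + rng.normal(0, 0.01, 20)

def objective(params):
    # Least-squares norm between predicted and measured displacements.
    return np.sum((predicted_displacements(params) - measured) ** 2)

n_particles, n_dims, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration (assumed values)
pos = rng.uniform(-5, 5, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("identified parameters:", gbest)
```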

Keywords: Rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS

Procedia PDF Downloads 118
390 Behavior of the RC Slab Subjected to Impact Loading According to the DIF

Authors: Yong Jae Yu, Jae-Yeol Cho

Abstract:

In the design of structural concrete for impact loading, design or model codes often employ a dynamic increase factor (DIF) to impose the dynamic effect on the static response. Dynamic increase factors that are obtained from laboratory material test results, and that are commonly given as a function of strain rate only, differ considerably from one another depending on the design concept of codes such as ACI 349M-06, fib Model Code 2010 and ACI 370R-14. Because the dynamic increase factors currently adopted in the codes are too simple and too limited to cover a variety of material strengths, their application in practical design is questionable. In this study, the dynamic increase factors used in the three codes were validated through finite element analysis of reinforced concrete slab elements that were tested and reported by another researcher. The test was intended to simulate a wall element of the containment building in nuclear power plants under an impact scenario like the one the Pentagon experienced on September 11, 2001. The finite element analysis was performed using ABAQUS 6.10, and plasticity models were employed for the concrete and reinforcement. The dynamic increase factors given in the three codes were applied to the stress-strain curves of the materials, with strain rate adopted as the governing parameter. Comparison of the test and analysis was made with regard to perforation depth, maximum deflection, and surface crack area of the slab. Consequently, it was found that the DIF has so great an effect on the behavior of reinforced concrete structures that it must be selected very carefully. The results imply that DIFs should be provided in design codes in a more refined format that considers the various influencing factors.
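
Schematically, code-type DIFs scale the static stress-strain curve by a power law in the strain rate before the curve is passed to the finite element model; the sketch below shows this generic form only, with the reference strain rate and exponent as placeholders, since the actual constants (and transition strain rates) differ between ACI 349M-06, fib Model Code 2010 and ACI 370R-14.

```python
def dif(strain_rate, ref_rate=30e-6, exponent=0.014):
    """Generic power-law dynamic increase factor.

    ref_rate [1/s] and exponent are placeholders: each design code
    prescribes its own constants and its own transition strain rate.
    """
    return (strain_rate / ref_rate) ** exponent

# Scale an (illustrative) static stress-strain curve for an impact-range rate.
static_curve = [(0.001, 20.0), (0.002, 35.0), (0.0035, 40.0)]  # (strain, MPa)
rate = 1.0  # s^-1
dynamic_curve = [(eps, stress * dif(rate)) for eps, stress in static_curve]
print(dynamic_curve)
```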

Keywords: impact, strain rate, DIF, slab elements

Procedia PDF Downloads 277
389 A Three-Element Vector-Valued Structure's Ultimate Strength-Strong Motion-Intensity Measure

Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar

Abstract:

This article presents an alternative collapse capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined through which the effects of spectral ordinates up to the effective period (2T1) are taken into account in the intensity measure. The methodology makes it possible to meet the hazard-levelled target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees was developed to calculate the collapse capacities of the 8 archetype RC structures, having 2 to 20 stories, for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field record set presented by FEMA is used in the series of nonlinear analyses. Eight linear relationships were developed for the 8 structures, with correlation coefficients up to 0.93. A near-field collapse capacity prediction equation was then developed, taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation was validated against a set of actual near-field records, with good agreement. Application of the proposed equation to four of the archetype RC structures demonstrated collapse capacities at near-field sites different from those of FEMA; the differences are believed to be due to accounting for the spectral shape effects.
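
The abstract does not give the exact functional form of β, so the sketch below is only an illustration of the general idea of letting ordinates up to 2T1 influence the intensity measure: a geometric mean of spectral accelerations over [T1, 2T1], a common spectral-shape-sensitive choice, is computed from a response spectrum.

```python
import numpy as np

def averaged_sa(periods, sa, T1, factor=2.0, n=10):
    """Geometric mean of Sa over [T1, factor*T1]: an illustrative
    spectral-shape-sensitive IM, not the paper's exact beta definition."""
    Ts = np.linspace(T1, factor * T1, n)
    vals = np.interp(Ts, periods, sa)   # sample the response spectrum
    return np.exp(np.mean(np.log(vals)))

periods = np.linspace(0.05, 5.0, 100)   # example spectrum with an assumed shape
sa = 1.0 / (1.0 + periods ** 2)
print("IM =", averaged_sa(periods, sa, T1=1.2))
```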

Keywords: collapse capacity, fragility analysis, spectral shape effects, IDA method

Procedia PDF Downloads 212
388 Modeling Sediment Transport under Extreme Storm Situations along the Persian Gulf North Coast

Authors: Majid Samiee Zenoozian

Abstract:

The Persian Gulf is a marginal sea with an average depth of 35 m and a maximum depth of 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces, the stable Arabian Foreland and the unstable Iranian Fold Belt, which are reflected in the contrasting coastal and bathymetric morphologies of Arabia and Iran. The sediments were sampled from 72 offshore positions during an oceanographic cruise in the winter of 2018. Throughout the observation period, several storms and river discharge events occurred, including the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrological models to estimate and compare the wave height and inundation distance required to transport the rocks inland. Our results establish that no known or plausible storm occurring on the Makran coast is capable of detaching and transporting the boulders. The fluid mud is consequently conveyed seaward by gravitational forcing; the measured sediment concentration and velocity profiles on the shelf provide strong evidence supporting this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model, which provides data on estuarine circulation and salinity transport under normal temperature conditions. The 3D sediment transport simulations indicate dynamic sediment resuspension and transport near zones of highly productive oyster beds.

Keywords: sediment transport, storm, coast, fluid dynamics

Procedia PDF Downloads 89
387 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics

Authors: Michael Wendland

Abstract:

This paper presents a framework for teaching a methodical design approach with the help of a mixture of game-based learning, design challenges and competitions as forms of direct assessment. In today's world, developing products is more complex than ever. Conflicting goals of product cost and quality with limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by supporting users with a methodical framework. Due to the inherent complexity of these products, the number of involved resources and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. But for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience with them. Therefore, it is necessary to teach these design approaches in a real-world setting, keep motivation high and teach how to manage the problems that come up. This is achieved by using a game-based approach and a set of design challenges that are given to the students. In order to mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, with a constrained time frame and a capped maximum cost, the students are subjected to boundary conditions similar to those in the real world. They must follow the steps of the methodical approach by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams. The complete design process is accompanied by theoretical input via lectures, which the students immediately transfer to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen: designing the first concepts, for instance, is supported by using LEGO construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach and the winner of the robotic contest. Each robot design is evaluated with regard to the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students, and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.

Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment

Procedia PDF Downloads 52
386 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To extend the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) allows an optimal atmosphere composition to be maintained around the product and thus prevents its decay. This technology relies on the modification of the internal packaging atmosphere through the equilibrium between production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. But up to now, existing models have not permitted estimation of food quality or of the shelf life gain achieved by using MAP. Shelf life prediction, however, is an indispensable prerequisite for quantifying the effect of MAP on food losses. The objective of this work is to propose an innovative approach to predict the shelf life of MAP food products and then to link it to a reduction of food losses and wastes. For this purpose, a 'Virtual MAP modeling tool' was developed by coupling a new predictive deterioration model (based on prediction of the visually deteriorated surface, encompassing colour, texture and spoilage development) with literature models for respiration and permeation. A major input of this modelling tool is the maximal percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂ h⁻¹ kg⁻¹ produced at 20 °C), making them a good representative of challenging post-harvest storage. A value of 13% was determined as the consumers' limit of acceptability, permitting products' shelf life to be defined. The 'Virtual MAP modeling tool' was validated under isothermal conditions (5, 10 and 20 °C) and under dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, confirming the goodness of the model fit. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP condition), and shelf life gains of more than 1 day could be obtained for numerically investigated optimized post-harvest conditions. Such shelf life gains permit a significant reduction of food losses to be anticipated at the distribution and consumer steps. This reduction of food losses as a function of shelf life gain was quantified using a dedicated mathematical equation developed for this purpose.
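
The mass-balance core of such a tool can be sketched by balancing Michaelis-Menten respiration of the product against permeation through the film; all parameter values below are rough assumptions for illustration, not the validated models of the study, which additionally couples the deterioration model.

```python
# Headspace O2 balance of a pack: V dp/dt = permeation in - respiration out.
V = 1e-3            # headspace volume [m3] (assumed)
A, e = 0.05, 40e-6  # film area [m2] and thickness [m] (assumed)
P_O2 = 1e-16        # O2 permeability [mol.m/(m2.s.Pa)] (assumed)
W = 0.25            # product mass [kg] (assumed)
Vm, Km = 2e-7, 2.0  # max respiration [mol/(kg.s)], half-saturation [kPa] (assumed)
R, T = 8.314, 293.15

y = 20.9            # O2 partial pressure in the headspace [kPa], initially air
dt, t_end = 60.0, 3 * 24 * 3600.0
for _ in range(int(t_end / dt)):
    r_O2 = Vm * y / (Km + y)                    # Michaelis-Menten respiration
    perm = P_O2 * (A / e) * (20.9 - y) * 1e3    # permeation [mol/s] (kPa -> Pa)
    n_dot = perm - r_O2 * W                     # net O2 molar rate [mol/s]
    y += dt * n_dot * R * T / V / 1e3           # ideal gas law, back to kPa
print("headspace O2 after 3 days: %.1f kPa" % y)
```

The equilibrium atmosphere is reached when permeation exactly compensates respiration; shelf life then follows by feeding the resulting gas history into the deterioration model and finding the time at which the deteriorated surface reaches the acceptability limit.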

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 165
385 Non-Steroidal Microtubule Disrupting Analogues Induce Programmed Cell Death in Breast and Lung Cancer Cell Lines

Authors: Marcel Verwey, Anna M. Joubert, Elsie M. Nolte, Wolfgang Dohle, Barry V. L. Potter, Anne E. Theron

Abstract:

A tetrahydroisoquinolinone (THIQ) core can be used to mimic the A,B-ring of colchicine-site-binding microtubule disruptors such as 2-methoxyestradiol in the design of anti-cancer agents. Steroidomimetic microtubule disruptors were synthesized by introducing C'2 and C'3 of the steroidal A-ring to C'6 and C'7 of the THIQ core and by introducing a decorated hydrogen-bond acceptor motif projecting from the steroidal D-ring to N'2. In this in vitro study, four non-steroidal THIQ-based analogues were investigated, and comparative studies were done between the non-sulphamoylated compound STX 3450 and the sulphamoylated compounds STX 2895, STX 3329 and STX 3451. The objective was to investigate the modes of cell death induced by these four THIQ-based analogues in A549 lung carcinoma epithelial cells and metastatic breast adenocarcinoma MDA-MB-231 cells. Cytotoxicity studies to determine the half-maximal growth inhibitory concentrations were done using spectrophotometric quantification via crystal violet staining and lactate dehydrogenase (LDH) assays. Microtubule integrity and morphologic changes of exposed cells were investigated using polarization-optical transmitted-light differential interference contrast microscopy, transmission electron microscopy and confocal microscopy. Flow cytometric quantification was used to determine apoptosis induction and the effect that the THIQ-based analogues have on cell cycle progression. Signal transduction pathways were elucidated by quantification of mitochondrial membrane integrity, cytochrome c release and caspase 3, -6 and -8 activation. Induction of autophagic cell death by the THIQ-based analogues was investigated by morphological assessment of fluorescent monodansylcadaverine (MDC) staining of acidic vacuoles and by quantifying aggresome formation via flow cytometry. Results revealed that these non-steroidal microtubule-disrupting analogues inhibited 50% of cell growth at nanomolar concentrations. Immunofluorescence microscopy indicated microtubule depolymerization, and the resultant mitotic arrest was further confirmed through cell cycle analysis. Apoptosis induction via the intrinsic pathway was observed through depolarization of the mitochondrial membrane, induction of cytochrome c release and caspase 3 activation. Potential involvement of programmed cell death type II was observed through the presence of acidic vacuoles and aggresome formation, whereas necrotic cell death did not contribute significantly, as indicated by stable LDH levels. This in vitro study thus revealed induction of the intrinsic apoptotic pathway as well as possible involvement of autophagy after exposure to these THIQ-based analogues in both MDA-MB-231 and A549 cells. Further investigation of this series of anticancer drugs still needs to be conducted to elucidate the temporal, mechanistic and functional crosstalk mechanisms between the two observed programmed cell death pathways.

Keywords: apoptosis, autophagy, cancer, microtubule disruptor

Procedia PDF Downloads 230
384 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model as a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model express the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous types of hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side; the test was performed with ignition at the center of the wall opposite the vent, using hydrogen-air mixtures with hydrogen concentrations close to 18 vol%. The results from the numerical simulations were compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.
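
In such a two-scalar flamelet approach, the premixed flame surface is tracked as an iso-surface of a level-set scalar G; the G-equation has the standard form (with u the flow velocity and s_T the turbulent burning velocity)

\[
\frac{\partial G}{\partial t} + \mathbf{u}\cdot\nabla G = s_T \,\lvert \nabla G \rvert ,
\]

while the conserved scalar (mixture fraction) describes the diffusion combustion of the unburnt gas.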

Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure

Procedia PDF Downloads 223
383 Symmetric Key Encryption Algorithm Using Indian Traditional Musical Scale for Information Security

Authors: Aishwarya Talapuru, Sri Silpa Padmanabhuni, B. Jyoshna

Abstract:

Cryptography helps to prevent threats to information security by providing various algorithms. This study introduces a new symmetric key encryption algorithm for information security that is linked with "raagas", the traditional Indian scales and patterns of music notes. The algorithm takes the plain text as input and starts its encryption process. It then randomly selects a raaga from a list of raagas that is assumed to be shared by both the sender and the receiver. The plain text is associated with the selected raaga, and an intermediate cipher text is formed as the algorithm converts the plain text characters into other characters according to its rules. This intermediate cipher text is then arranged in various patterns over three different rounds of encryption. The total number of rounds in the algorithm is a multiple of 3: the output of each sequence of three rounds is passed recursively as the input to the next sequence, until the total number of rounds has been performed. The raaga selected by the algorithm and the number of rounds performed are specified at an arbitrary location in the key, in addition to other important information regarding the rounds of encryption embedded in the key, which is known to the sender and interpreted only by the receiver, thereby making the algorithm hack-proof. The key can be constructed with any number of bits, without any restriction on its size. A software application was also developed to demonstrate this encryption process, which dynamically takes the plain text as input and readily generates the cipher text as output. This algorithm therefore stands as one of the strongest tools for information security.
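
The abstract describes the scheme only loosely, so the following sketch is a hypothetical reading of it rather than the authors' algorithm: a raaga drawn from a shared list seeds a positional character substitution, and each round additionally permutes the intermediate text, with rounds undone in reverse order on decryption. The raaga list, alphabet and round patterns are all illustrative assumptions.

```python
import string

ALPHABET = string.ascii_letters + string.digits + " .,"
RAAGAS = {                          # hypothetical shared list; the note
    "mohanam": [0, 2, 4, 7, 9],     # patterns are illustrative, not
    "kalyani": [0, 2, 4, 6, 7, 9, 11],  # authoritative raaga definitions
    "bhairavi": [0, 1, 3, 5, 7, 8, 10],
}

def shift(text, pattern, sign):
    # Substitute each character by shifting it along ALPHABET by the raaga
    # note at that position (the pattern is cycled over the text).
    return "".join(
        ALPHABET[(ALPHABET.index(c) + sign * pattern[i % len(pattern)]) % len(ALPHABET)]
        for i, c in enumerate(text)
    )

def encrypt(plain, raaga, rounds=3):
    pattern, text = RAAGAS[raaga], plain
    for r in range(rounds):              # rounds come in multiples of three
        text = shift(text, pattern, +1)  # substitution from the raaga
        text = text[::-1]                # pattern 1: reversal
        k = r % len(text)
        text = text[k:] + text[:k]       # pattern 2: rotation
    return text

def decrypt(cipher, raaga, rounds=3):
    pattern, text = RAAGAS[raaga], cipher
    for r in reversed(range(rounds)):    # undo the rounds in reverse order
        k = r % len(text)
        text = (text[-k:] + text[:-k]) if k else text
        text = text[::-1]
        text = shift(text, pattern, -1)
    return text

ct = encrypt("attack at dawn", "kalyani", rounds=6)
assert decrypt(ct, "kalyani", rounds=6) == "attack at dawn"
print(ct)
```

In the scheme described by the abstract, the raaga index and round count would additionally be embedded at an agreed position inside the key rather than passed openly as they are here.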

Keywords: cipher text, cryptography, plaintext, raaga

Procedia PDF Downloads 265
382 The Effect of Post Spinal Hypotension on Cerebral Oxygenation Using Near-Infrared Spectroscopy and Neonatal Outcomes in Full Term Parturient Undergoing Lower Segment Caesarean Section: A Prospective Observational Study

Authors: Shailendra Kumar, Lokesh Kashyap, Puneet Khanna, Nishant Patel, Rakesh Kumar, Arshad Ayub, Kelika Prakash, Yudhyavir Singh, Krithikabrindha V.

Abstract:

Introduction: Spinal anesthesia is considered the standard anesthesia technique for caesarean delivery. The incidence of spinal hypotension during caesarean delivery is 70-80%. Spinal hypotension may cause cerebral hypoperfusion in the mother, although cerebral autoregulatory mechanisms physiologically act to prevent cerebral hypoxia: cerebral blood flow remains constant over the 50-150 mmHg range of Cerebral Perfusion Pressure (CPP). Near-infrared spectroscopy (NIRS) is a non-invasive technology that detects Cerebral Desaturation Events (CDEs) immediately, in contrast to other conventional intraoperative monitoring techniques. Objective: The primary aim of the study is to correlate the change in cerebral oxygen saturation measured by NIRS with the fall in mean blood pressure after spinal anaesthesia, and to determine the effects of spinal hypotension on the neonatal APGAR score, neonatal acid-base variations, and the presence of Postoperative Delirium (POD). Methodology: NIRS sensors were attached to the forehead of all patients, and baseline readings of cerebral oxygenation over the right and left frontal regions, together with mean blood pressure, were noted. The subarachnoid block was given with hyperbaric 0.5% bupivacaine plus fentanyl, the dose being determined by the individual anaesthesiologist, with co-loading of IV crystalloid solutions. Blood pressure and cerebral saturation were recorded every minute for 30 minutes. Hypotension was defined as a fall in MAP of more than 20% from the baseline value, and patients developing hypotension were treated with an IV bolus of phenylephrine/ephedrine. Umbilical cord blood samples were taken for blood gas analysis, and the neonatal APGAR score was noted by a neonatologist. Study design: A prospective observational study conducted on thirty ASA 2 and 3 parturients scheduled for lower segment caesarean section (LSCS). Results: The mean fall in regional cerebral saturation was 28.48 ± 14.7% against a mean fall in blood pressure of 38.92 ± 8.44 mmHg. The correlation coefficient between the fall in saturation and the fall in mean blood pressure after the subarachnoid block was 0.057 (p = 0.7). The fall in regional cerebral saturation occurred 2 ± 1 min before the fall in mean blood pressure. Twenty-nine out of thirty patients required vasopressors during hypotension, with the first vasopressor dose needed 6.02 ± 2 min after the block. The mean APGAR scores were 7.86 and 9.74 at 1 and 5 min of birth, respectively, with a mean umbilical arterial pH of 7.3 ± 0.1. According to the DRS-98 (Delirium Rating Scale), the mean delirium rating scores on postoperative days 1 and 2 were 0.1 and 0.7, respectively. Discussion: There was a fall in regional cerebral oxygen saturation that began before the fall in mean blood pressure, but the correlation was not statistically significant. The maximal fall in blood pressure requiring vasopressors occurred within 10 min of the subarachnoid block. Neonatal APGAR scores and acid-base variations remained in the normal range despite maternal hypotension, and there was no incidence of postoperative delirium in patients with post-spinal hypotension.

Keywords: cerebral oxygenation, LSCS, NIRS, spinal hypotension

Procedia PDF Downloads 48
381 A Mixed 3D Finite Element for Highly Deformable Thermoviscoplastic Materials Under Ductile Damage

Authors: João Paulo Pascon

Abstract:

In this work, a mixed 3D finite element formulation is proposed in order to analyze thermoviscoplastic materials under large strain levels and ductile damage. To this end, a tetrahedral element of linear order is employed, considering a thermoviscoplastic constitutive law together with the neo-Hookean hyperelastic relationship and a nonlocal Gurson porous plasticity theory. The material model is capable of reproducing finite deformations, elastoplastic behavior, void growth, nucleation and coalescence, thermal effects such as plastic work heating and conductivity, strain hardening and strain-rate dependence. The nonlocal character is introduced by means of a nonlocal parameter applied to the Laplacian of the porosity field. The element degrees of freedom are the nodal values of the deformed position, the temperature and the nonlocal porosity field. The internal variables are updated at the Gauss points according to the yield criterion and the evolution laws, including the yield stress of the matrix, the equivalent plastic strain, the local porosity and the plastic components of the Cauchy-Green stretch tensor. Two problems involving 3D specimens and ductile damage are numerically analyzed with the developed computational code: a necking problem and a notched sample. The effect of the nonlocal parameter and of mesh refinement is investigated in detail. Results indicate the need for a properly chosen nonlocal parameter. In addition, the numerical formulation can predict ductile fracture based on the evolution of the fully damaged zone.
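
For reference, a Gurson-type porous-plasticity yield surface in its widely used Gurson-Tvergaard-Needleman form couples the von Mises stress q, the hydrostatic stress σ_m, the matrix yield stress σ_y and the effective porosity f*; the nonlocal porosity described in the abstract can be written, in a common implicit-gradient form (shown here schematically, with ℓ the nonlocal length parameter), as a regularization of the local porosity f:

\[
\Phi = \left(\frac{q}{\sigma_y}\right)^{2}
+ 2\,q_1 f^{*}\cosh\!\left(\frac{3\,q_2\,\sigma_m}{2\,\sigma_y}\right)
- \left(1 + q_3 {f^{*}}^{2}\right) = 0,
\qquad
\bar{f} - \ell^{2}\,\nabla^{2}\bar{f} = f .
\]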

Keywords: mixed finite element, large strains, ductile damage, thermoviscoplasticity

Procedia PDF Downloads 67
380 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorraina, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. In recent years, however, significant advances in the fabrication process leading to grain size reduction have been made in order to improve the fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen and the performance of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented: crack detection, strain amplitude and number of cycles to crack initiation versus triaxial stress ratio are given for each loading case. Third, fractographic investigations by scanning electron microscopy show that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: cruciform specimen, multiaxial fatigue, nickel-based superalloy

Procedia PDF Downloads 272
379 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets

Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira

Abstract:

We have developed a new computer program in Fortran 90 in order to obtain numerical solutions of a system of Relativistic Magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to its ejection. Initially, we carried out a study on one-dimensional finite volume numerical methods, namely the Lax-Friedrichs, Lax-Wendroff and Nessyahu-Tadmor methods and Godunov-type methods dependent on Riemann problems, applied to the Euler equations, in order to verify their main features and make comparisons among them. We then implemented the central finite volume method of Nessyahu-Tadmor, a numerical scheme whose formulation is free of Riemann problem solvers and requires no dimensional splitting, even in two or more spatial dimensions, at this point applied to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions, without spurious oscillations or excessive dissipation, of a magnetized accretion disk in rotation around a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record for astrophysical simulations of this kind. In our simulations, we also obtained jet substructures. A great advantage is that, with our code, we can simulate the GRMHD equations on a simple personal computer.
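
To make the central scheme concrete, here is a minimal 1D Nessyahu-Tadmor sketch for a scalar conservation law (inviscid Burgers), written in Python rather than the paper's Fortran 90: a predictor at the half time step, minmod-limited slopes, and a staggered corrector, with no Riemann solver anywhere. Grid size, CFL number and initial data are assumptions for illustration.

```python
import numpy as np

def minmod(a, b):
    # Slope limiter: the smaller-magnitude difference when signs agree, else 0.
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def f(u):
    # Burgers flux; the paper applies the same construction to the GRMHD fluxes.
    return 0.5 * u * u

N, L, T = 400, 2 * np.pi, 1.0
dx = L / N
u = np.sin(np.arange(N) * dx) + 0.5   # periodic initial data (assumed)
t = 0.0
while t < T:
    dt = 0.4 * dx / np.max(np.abs(u))             # CFL condition
    lam = dt / dx
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))            # limited slopes
    df = minmod(np.roll(f(u), -1) - f(u), f(u) - np.roll(f(u), 1))
    u_half = u - 0.5 * lam * df                   # predictor at t + dt/2
    # Corrector: new cell averages live on the grid staggered by dx/2.
    u = (0.5 * (u + np.roll(u, -1))
         + 0.125 * (du - np.roll(du, -1))
         - lam * (f(np.roll(u_half, -1)) - f(u_half)))
    t += dt                                       # grid staggers each step
print("solution range after t = 1:", u.min(), u.max())
```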

Keywords: finite volume methods, central schemes, fortran 90, relativistic astrophysics, jet

Procedia PDF Downloads 424
378 Analysis and Performance of European Geostationary Navigation Overlay Service System in North of Algeria for GPS Single Point Positioning

Authors: Tabti Lahouaria, Kahlouche Salem, Benadda Belkacem, Beldjilali Bilal

Abstract:

The European Geostationary Navigation Overlay Service (EGNOS) provides an augmentation signal for GPS (Global Positioning System) single point positioning. Presently, EGNOS provides data corrections and integrity information using the GPS L1 (1575.42 MHz) frequency band; these are intended to be used with single-frequency code observations. The main objective of the system is to provide better real-time positioning precision than GPS alone. EGNOS offers open service (OS) navigation performance whose precision and availability gradually degrade as the user moves away from the service area. The improvement in the position solution is investigated using two collocated dual-frequency GPS receivers at a location where no EGNOS Ranging and Integrity Monitoring Station (RIMS) exists: one set of pseudo-ranges was kept as GPS stand-alone, and the other was corrected by EGNOS, to estimate the planimetric and altimetric precision on different dates. It is found that the positioning precision improved significantly in the second case thanks to the EGNOS corrections. The performance of the EGNOS system in the north of Algeria is also investigated in terms of integrity. The results show that the horizontal protection level (HPL) value stays below 18.25 meters (95%) and the vertical protection level (VPL) below 42.22 meters (95%). These results represent good integrity information transmitted by EGNOS for the APV-I service, which is thus compliant with the aviation requirements for Approaches with Vertical Guidance (APV-I), characterised by a 40 m HAL (horizontal alarm limit) and a 50 m VAL (vertical alarm limit).
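
For orientation, SBAS protection levels in precision-approach modes are obtained by scaling the error statistics of the position solution by fixed multipliers; schematically, with the values commonly specified for SBAS precision approach in RTCA DO-229,

\[
HPL_{SBAS} = K_{H}\, d_{major}, \qquad VPL_{SBAS} = K_{V}\, d_{U},
\]

with K_H = 6.0 and K_V = 5.33, d_major the semi-major axis of the horizontal error ellipse and d_U the standard deviation of the vertical position error. Integrity for a given operation is declared available as long as HPL < HAL and VPL < VAL, which is exactly the comparison made above for APV-I.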

Keywords: EGNOS, GPS, positioning, integrity, protection level

Procedia PDF Downloads 206
377 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images

Authors: Sophia Shi

Abstract:

Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI patient mortality rate in the ICU is high, and the condition is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database, together with de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health, were collected. Using data features including serum creatinine, among others, two numeric models were built on the MIMIC and Beijing Hospital data, and an image-only model was built with the hospital ultrasounds. Convolutional neural networks (CNNs) were used: VGG and ResNet for the numeric data and ResNet for the image data. They were combined into a hybrid model by concatenating the feature maps of the two types of models to create a new input, which enters another CNN block and then two fully connected layers, ending in a binary output after running through Softmax and additional code. The hybrid model successfully predicted AKI: the highest AUROC of the model was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient's admission, so that they can take preventative measures and diminish mortality risks and severe kidney damage.
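
A minimal sketch of the fusion mechanism is shown below, with tiny stand-in branches instead of the VGG/ResNet backbones and assumed feature sizes; only the concatenation of the two branches' features into a new input for the classification head reflects the described architecture.

```python
import torch
import torch.nn as nn

class HybridAKI(nn.Module):
    def __init__(self, num_record_feats=16):
        super().__init__()
        # Stand-in branches (the paper uses VGG/ResNet backbones instead).
        self.record_branch = nn.Sequential(
            nn.Linear(num_record_feats, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())        # -> 8 * 4 * 4 = 128
        self.head = nn.Sequential(
            nn.Linear(32 + 128, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, record, image):
        # Concatenate the feature maps of both branches into one new input.
        fused = torch.cat([self.record_branch(record), self.image_branch(image)], dim=1)
        return torch.softmax(self.head(fused), dim=1)     # binary output

model = HybridAKI()
records = torch.randn(4, 16)          # a batch of patient-record feature vectors
images = torch.randn(4, 1, 64, 64)    # a batch of grayscale ultrasound crops
print(model(records, images).shape)   # torch.Size([4, 2])
```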

Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG

Procedia PDF Downloads 112
376 Computational Fluid Dynamics Modeling of Liquefaction of Wood and It's Model Components Using a Modified Multistage Shrinking-Core Model

Authors: K. G. R. M. Jayathilake, S. Rudra

Abstract:

Wood degradation in hot compressed water is modeled with a Computational Fluid Dynamics (CFD) code using cellulose, xylan, and lignin as model compounds. The model compounds are reacted under catalyst-free conditions over a temperature range from 250 to 370 °C, using a simplified reaction scheme in which water-soluble products, methanol-soluble products, char-like compounds and gas are generated through intermediates for each model compound. A modified multistage shrinking-core model is developed to simulate particle degradation, in which each model compound is hydrolyzed in a separate stage: cellulose is decomposed to glucose/oligomers before producing degradation products; xylan is decomposed through xylose and then to degradation products; and lignin is decomposed into soluble products before producing the total guaiacol and organic carbon (TOC) and then char and gas. Hydrolysis of each model compound is treated as the main reaction of the process, and particular attention is paid to the diffusion of water monomers to the particle surface to initiate hydrolysis and to the dissolution of the products in water. In the developed model, the temperature dependence follows the Arrhenius relationship, with kinetic parameters taken from the literature; the limited kinetic data on the initial fast reactions, however, restrict the development of more accurate CFD models. The liquefaction results of the CFD model are analyzed and validated using the experimental data available in the literature, with which they show reasonable agreement.
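
At the particle scale, the idea can be sketched as a reaction-controlled shrinking core whose surface recedes at an Arrhenius-controlled velocity; the rate constants below are illustrative placeholders, not the study's literature-derived kinetics, and the full model additionally resolves diffusion and dissolution.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def k_arrhenius(T, A=1.0e6, Ea=120e3):
    # Arrhenius surface reaction velocity; A [m/s], Ea [J/mol] are placeholders.
    return A * np.exp(-Ea / (R_GAS * T))

def shrink_core(T, r0=50e-6, dt=0.01, t_end=60.0):
    """Reaction-controlled shrinking core: the unreacted core radius recedes
    at the surface reaction velocity k(T). Returns the time to full conversion
    (or None if the core survives past t_end)."""
    rc = r0
    for step in range(int(t_end / dt)):
        rc -= k_arrhenius(T) * dt
        if rc <= 0.0:
            return (step + 1) * dt
    return None

for T in (523.15, 643.15):  # 250 C and 370 C, the ends of the studied range
    print(f"T = {T - 273.15:.0f} C -> core consumed after {shrink_core(T)} s")
```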

Keywords: computational fluid dynamics, liquefaction, shrinking-core, wood

Procedia PDF Downloads 101
375 An Analysis of Legal and Ethical Implications of Sports Doping in India

Authors: Prathyusha Samvedam, Hiranmaya Nanda

Abstract:

Doping refers to the use of drugs or methods that enhance an athlete's performance. It is a worldwide problem that compromises the fairness of athletic competition. Rules have been created at both the national and international levels to prevent doping; however, these rules sometimes contradict one another and may be ineffective at deterring the use of performance-enhancing drugs (PEDs). This study contends that India's inability to comply with specific Code criteria, as well as its failure to satisfy "best practice" standards established by other countries, demonstrates a lack of uniformity in the implementation of anti-doping regulations and processes among nations. Such challenges have the potential to undermine the validity of the anti-doping system, particularly in developing nations like India. An examination of India's legislative framework governing doping in sports is therefore important. First, doping in sports is a significant problem that undermines the spirit of fair play and sportsmanship and has the potential to jeopardize the integrity of the sport itself. In addition, the research can inform policymakers, sports organizations, and other stakeholders about the current legal framework and how effectively it discourages doping in athletic competition. The article is divided into four sections. The first explains what doping is and traces its development over time. The second investigates the role of anti-doping authorities and the responsibilities they perform. The third presents the case studies and the research methodology employed, and the final section presents the results. In conclusion, doping is a serious problem that endangers honest competition in sports.

Keywords: sports law, doping, NADA, WADA, performance-enhancing drugs, anti-doping bill 2022

Procedia PDF Downloads 47
374 Evaluation of Heat Transfer and Entropy Generation by Al2O3-Water Nanofluid

Authors: Houda Jalali, Hassan Abbassi

Abstract:

In this numerical work, natural convection and entropy generation of an Al2O3–water nanofluid in a square cavity have been studied. Two-dimensional steady laminar natural convection in a differentially heated square cavity of side length L, filled with the nanofluid, is investigated numerically. The horizontal walls are considered adiabatic, and the vertical walls at x = 0 and x = L are maintained at the hot temperature Th and the cold temperature Tc, respectively. The solution is computed with the CFD code "FLUENT", with GAMBIT used as the mesh generator. The simulations cover Rayleigh numbers 10³ ≤ Ra ≤ 10⁶, solid volume fractions from 1% to 5%, and temperatures from 20 to 70 °C, with the particle size fixed at dp = 33 nm. Models of the nanofluid's thermophysical properties based on experimental measurements, namely thermal conductivity and dynamic viscosity models that depend on solid volume fraction, particle size, and temperature, are used to study the effect of adding solid particles to water on natural convection heat transfer and entropy generation. The average Nusselt number is calculated at the hot wall of the cavity for different solid volume fractions. The most important result is that at low temperatures (less than 40 °C), the addition of Al2O3 nanoparticles to water leads to a decrease in heat transfer and entropy generation instead of the expected increase, whereas at high temperatures, heat transfer and entropy generation increase with the addition of nanoparticles. This behavior is due to the contradictory effects of viscosity and thermal conductivity of the nanofluid, which are discussed in this work.
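The paper's property models are fitted to experimental measurements and depend on volume fraction, particle size, and temperature. As a simpler, classical baseline for comparison, the sketch below evaluates the Maxwell conductivity and Brinkman viscosity correlations, which depend on volume fraction alone; the property values used are rough illustrative numbers, not the measured data behind the paper's models.

# Classical baseline correlations for nanofluid effective properties,
# shown for comparison only; the paper itself uses measurement-based
# models that also depend on particle size and temperature.

def maxwell_conductivity(k_bf: float, k_p: float, phi: float) -> float:
    """Effective thermal conductivity of a dilute nanofluid (Maxwell)."""
    num = k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)
    den = k_p + 2 * k_bf - phi * (k_p - k_bf)
    return k_bf * num / den

def brinkman_viscosity(mu_bf: float, phi: float) -> float:
    """Effective dynamic viscosity of a dilute suspension (Brinkman)."""
    return mu_bf / (1 - phi) ** 2.5

# Rough values for water near 20 degC with Al2O3 particles, phi = 1-5%.
k_water, k_al2o3, mu_water = 0.6, 40.0, 1.0e-3   # W/(m*K), W/(m*K), Pa*s
for phi in (0.01, 0.03, 0.05):
    k_nf = maxwell_conductivity(k_water, k_al2o3, phi)
    mu_nf = brinkman_viscosity(mu_water, phi)
    print(f"phi={phi:.2f}: k_nf={k_nf:.4f} W/(m*K), mu_nf={mu_nf:.2e} Pa*s")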

Keywords: entropy generation, heat transfer, nanofluid, natural convection

Procedia PDF Downloads 251
373 Seismic Loss Assessment for Peruvian University Buildings with Simulated Fragility Functions

Authors: Jose Ruiz, Jose Velasquez, Holger Lovon

Abstract:

Peruvian university buildings are critical structures for which very little research on their seismic vulnerability is available. This paper develops a probabilistic methodology that predicts seismic loss for university buildings with simulated fragility functions. Two university buildings located in the city of Cusco were analyzed. Fragility functions were developed considering uncertainty in both seismic and structural parameters. The fragility functions were generated with the Latin Hypercube technique, an improved Monte Carlo-based method that optimizes the sampling of structural parameters and provides at least 100 reliable samples for every level of seismic demand. Concrete compressive strength, maximum concrete strain, and yield stress of the reinforcing steel were considered the key structural parameters. The seismic demand is defined by synthetic records compatible with the elastic Peruvian design spectrum. Acceleration records are scaled by the peak ground acceleration on rigid soil (PGA), ranging from 0.05g to 1.00g. A total of 2000 structural models were considered to account for both structural and seismic variability. These functions represent the overall building behavior because they provide rational estimates of damage ratios at defined levels of seismic demand. The two university buildings show expected mean damage factors of 8.80% and 19.05%, respectively, for the 0.22g-PGA scenario, which, amplified by the soil-type coefficient, corresponds to 0.26g PGA. These ratios were computed for a seismic demand with a 10% probability of exceedance in 50 years, as required by the Peruvian seismic code. These results show an acceptable seismic performance for both buildings.
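A minimal sketch of the Latin Hypercube sampling step is shown below, using SciPy's qmc module; the parameter bounds are illustrative assumptions, not the distributions used in the study.

# Hedged sketch of the Latin Hypercube sampling of structural parameters
# (concrete strength, maximum concrete strain, steel yield stress).
# The bounds below are illustrative, not the values used in the paper.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=100)             # 100 stratified points in [0, 1)^3

# Illustrative ranges: f'c (MPa), eps_cu (-), fy (MPa).
lower = [17.0, 0.003, 380.0]
upper = [28.0, 0.006, 460.0]
samples = qmc.scale(unit, lower, upper)  # shape (100, 3)

# Each row defines one structural model; e.g. 100 samples at each of 20
# PGA levels (0.05g to 1.00g in 0.05g steps) would give the 2000 models
# reported in the abstract.
print(samples[:3])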

Keywords: fragility functions, university buildings, loss assessment, Monte Carlo simulation, Latin Hypercube

Procedia PDF Downloads 117