Search results for: Dynamic Memory Leak Detection (DMLD).
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3758

218 Molecular Dynamics Simulation of the Effect of the Solid Gas Interface Nanolayer on Enhanced Thermal Conductivity of Copper-CO2 Nanofluid

Authors: Zeeshan Ahmed, Ajinkya Sarode, Pratik Basarkar, Atul Bhargav, Debjyoti Banerjee

Abstract:

The use of CO2 in oil recovery and in CO2 capture and storage has been gaining traction in recent years. These applications involve heat transfer between CO2 and the base fluid, and hence there arises a need to improve the thermal conductivity of CO2 to increase process efficiency and reduce cost. One way to improve thermal conductivity is through nanoparticle addition to the base fluid. The nanofluid model in this study consisted of copper (Cu) nanoparticles in varying concentrations with CO2 as the base fluid; no experimental data are available on the thermal conductivity of CO2-based nanofluids. Molecular dynamics (MD) simulations are an increasingly adopted tool for preliminary assessments of nanoparticle (NP)-fluid interactions. In this study, the effect of the formation of a nanolayer (or molecular layering) at the gas-solid interface on thermal conductivity is investigated using equilibrium MD simulations, varying the NP diameter while keeping the nanofluid volume fraction constant (1.413%) to isolate the effect of NP diameter on the nanolayer and on thermal conductivity. A dense, semi-solid fluid layer was found to form at the NP-gas interface; its thickness increases with particle diameter, and the layer moves with the NP Brownian motion. Density distributions were computed to examine the nanolayer and its thickness around the NP. These findings are particularly beneficial to industries engaged in oil recovery, as increased thermal conductivity of CO2 will lead to enhanced oil recovery and thermal energy storage.
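
As an illustration of the post-processing step behind an equilibrium-MD thermal-conductivity estimate, the short numpy sketch below evaluates the Green-Kubo relation from a heat-flux time series. It is not the authors' code; the temperature, box volume, sampling interval and file name are placeholder assumptions.

```python
import numpy as np

# Illustrative Green-Kubo post-processing of an equilibrium MD run.
# Assumes J is an (N, 3) array of instantaneous heat-flux components
# exported from the MD code, sampled every dt seconds (all values assumed).
kB = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                  # system temperature, K (assumed)
V = 1.0e-25                # simulation-box volume, m^3 (assumed)
dt = 1.0e-15               # sampling interval, s (assumed)

def autocorrelation(x, max_lag):
    """Unnormalized autocorrelation <x(0) x(t)> averaged over time origins."""
    n = len(x)
    return np.array([np.mean(x[:n - lag] * x[lag:]) for lag in range(max_lag)])

def green_kubo_conductivity(J, max_lag=5000):
    # Average the heat-flux autocorrelation over the three Cartesian directions.
    acf = np.mean([autocorrelation(J[:, k], max_lag) for k in range(3)], axis=0)
    # lambda = V / (kB * T^2) * integral_0^t <Jx(0) Jx(t')> dt' (per-component form)
    running_integral = np.cumsum(acf) * dt
    return V / (kB * T**2) * running_integral   # running estimate in W/(m K)

# Example (hypothetical file): lam = green_kubo_conductivity(np.loadtxt("heat_flux.dat"))
```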

Keywords: Copper-CO2 nanofluid, molecular interfacial layer, thermal conductivity, molecular dynamics simulation.

217 Applications of Drones in Infrastructures: Challenges and Opportunities

Authors: Jin Fan, M. Ala Saadeghvaziri

Abstract:

Unmanned aerial vehicles (UAVs), also referred to as drones, equipped with various kinds of advanced detection or surveying systems, are effective and low-cost tools for data acquisition, delivery and sharing, which can benefit the building of infrastructure. This paper gives an overview of applications of drones in the planning, design, construction and maintenance of infrastructure. The drone platform, detection and surveying systems, and post-processing systems are introduced, followed by cases with details of the applications. Challenges from different aspects are then addressed. Opportunities for drones in infrastructure include, but are not limited to, the following. Firstly, UAVs equipped with high-definition cameras or other detection equipment are capable of inspecting hard-to-reach infrastructure assets. Secondly, UAVs can be used as effective tools to survey and map the landscape to collect necessary information before infrastructure construction. Furthermore, a single UAV or multiple UAVs are useful in construction management. UAVs can also be used to collect road and building information by taking high-resolution photos for future infrastructure planning. UAVs can provide reliable and dynamic traffic information, which is potentially helpful in building smart cities. The main challenges are limited flight time, signal robustness, post-flight data analysis, multi-drone collaboration, weather conditions, and distractions to traffic caused by drones. This paper aims to help owners, designers, engineers and architects improve the building process of infrastructure for higher efficiency and better performance.

Keywords: Bridge, construction, drones, infrastructure, information.

216 Fuel Economy and Stability Enhancement of the Hybrid Vehicles by Using Electrical Machines on Non-Driven Wheels

Authors: P. Naderi, S.M.T. Bathaee, R. Hoseinnezhad, R. Chini

Abstract:

Using electrical machines in conventional vehicles, thereby creating hybrid vehicles, has become a promising control approach that enables both fuel economy and driver assistance for better stability. In this paper, vehicle stability control, fuel economy, and driving/regenerative braking for a 4WD hybrid vehicle are investigated by using an electrical machine on each non-driven wheel. In front-wheel-driven vehicles, fuel economy and regenerative braking can be obtained by summing the torques applied to the rear wheels; on the other hand, unequal torques applied to the rear wheels provide enhanced safety and path correction in steering. A model with fourteen degrees of freedom is considered for the vehicle body, tires, and suspension systems, and the powertrain subsystems are then modeled. Considering an electrical machine on each rear wheel, a fuzzy controller is designed for each of the driving, braking, and stability conditions. Another fuzzy controller arbitrates between the driving/regeneration and stability modes according to the vehicle requirements. Intelligent vehicle control for multi-objective operation and forward simulation are the advantages of this paper. To reach these aims, power management control and yaw moment control are carried out by three fuzzy controllers, and the above-mentioned goals are weighted by another fuzzy sub-controller based on the vehicle dynamics. Finally, simulations performed in the MATLAB/Simulink environment show that the proposed structure can enhance the vehicle performance effectively in different modes.
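
For readers unfamiliar with how a fuzzy yaw-moment controller of this kind maps an error signal to a differential torque, here is a minimal, library-free Mamdani-style sketch. The membership functions, rule base and torque limit are illustrative assumptions, not the controllers designed in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def yaw_correction_torque(yaw_error, max_torque=200.0):
    """Toy Mamdani controller: map yaw-rate error (rad/s) to a differential
    torque (Nm) applied with opposite signs to the two rear machines."""
    # Fuzzify the input into negative / zero / positive sets (assumed shapes).
    neg  = tri(yaw_error, -0.6, -0.3,  0.0)
    zero = tri(yaw_error, -0.3,  0.0,  0.3)
    pos  = tri(yaw_error,  0.0,  0.3,  0.6)
    # Rule base: negative error -> brake one side / drive the other, etc.
    # Defuzzify with a weighted average of singleton outputs.
    weights = np.array([neg, zero, pos])
    outputs = np.array([-max_torque, 0.0, max_torque])
    return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))

# Example: differential torque for a 0.2 rad/s yaw-rate error,
# applied as +dT on one rear wheel and -dT on the other.
dT = yaw_correction_torque(0.2)
print(dT)
```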

Keywords: Hybrid, pitch, roll, regeneration, yaw.

215 A Theoretical Analysis of Air Cooling System Using Thermal Ejector under Variable Generator Pressure

Authors: Mohamed Ouzzane, Mahmoud Bady

Abstract:

In the current energy and environmental context, research is looking into the use of clean and energy-efficient systems in the cooling industry. In this regard, the ejector represents one of the promising solutions. The thermal ejector is a passive component used for thermal compression in refrigeration and cooling systems, usually activated by waste or solar heat. The present study introduces a theoretical analysis of a cooling system which uses a gas ejector for thermal compression. A theoretical model is developed and applied for the design and simulation of the ejector, as well as of the whole cooling system. Besides the conservation equations of mass, energy and momentum, the gas dynamic equations, equations of state, isentropic relations, and appropriate assumptions are applied to simulate the flow and mixing in the ejector. This model, coupled with the equations of the other components (condenser, evaporator, pump, and generator), is used to analyze profiles of pressure and velocity (Mach number), as well as to evaluate the cycle cooling capacity. A FORTRAN program is developed to carry out the investigation, and properties of refrigerant R134a are calculated using real-gas equations. Among many parameters, the generator pressure is considered the cornerstone of the cycle and is hence taken as the key parameter in this investigation. Results show that the generator pressure has a great effect on the ejector and on the whole cooling system. At high generator pressures, strong shock waves are created inside the ejector, which lead to a significant condenser pressure at the ejector exit. Additionally, at higher generator pressures, the designed system can deliver cooling capacity at high condensing pressure (hot season).
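
As a rough illustration of the gas-dynamic relations such a one-dimensional ejector model rests on, the sketch below evaluates the isentropic pressure-Mach and area-Mach relations, treating the refrigerant as an ideal gas; the isentropic exponent and pressure values are assumptions, whereas the paper itself uses real-gas R134a properties.

```python
import numpy as np
from scipy.optimize import brentq

gamma = 1.11  # illustrative isentropic exponent for R134a vapour (assumed)

def stagnation_pressure_ratio(M, g=gamma):
    """p0/p for isentropic flow at Mach number M."""
    return (1.0 + 0.5 * (g - 1.0) * M**2) ** (g / (g - 1.0))

def mach_from_pressure_ratio(p0_over_p, g=gamma):
    """Invert the isentropic relation to get M from a pressure ratio."""
    return brentq(lambda M: stagnation_pressure_ratio(M, g) - p0_over_p, 1e-6, 10.0)

def area_ratio(M, g=gamma):
    """A/A* for isentropic flow (area-Mach relation)."""
    term = (2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M**2)
    return (1.0 / M) * term ** ((g + 1.0) / (2.0 * (g - 1.0)))

# Example: expansion from a 2.0 MPa generator to a 0.4 MPa mixing section (assumed values)
M_exit = mach_from_pressure_ratio(2.0e6 / 0.4e6)
print(M_exit, area_ratio(M_exit))
```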

Keywords: Air cooling system, refrigeration, thermal ejector, thermal compression.

214 Rheological and Computational Analysis of Crude Oil Transportation

Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh

Abstract:

Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the different properties of crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when all the various parameters are considered. Three very important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile, and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological-computational techniques is required to better understand the flow behavior and predict the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including shear rate range with temperature variation, degree of viscosity, elastic modulus and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C to determine which additive is best suited for the transportation of crude oil. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied in developing pipeline designs in the future.

Keywords: Natural surfactant, crude oil, rheology, CFD, viscosity.

213 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between the ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. In addition, the MATLAB fmincon and MATLAB iirlnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, the virtual simulation scheme of an overall prototype ODCM-based ADC system is implemented and tested with the help of the Simulink tool according to the relevant set of design data, i.e., 3 kHz of modulating bandwidth, 172 kHz of maximum modulation frequency and 25 MHz of sampling frequency. Finally, the results obtained show that the ODCM-based ADC achieves, under 3 kHz of modulating bandwidth: 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and 10 bits of minimum resolution. These performance levels appear to be a great challenge within the class of oversampling ADC topologies with a 2nd-order IIR (infinite impulse response) decimation filter.
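
To make the quoted figures of merit concrete, the following sketch estimates SINAD and THD from an FFT of a sampled output tone. It is a generic illustration, not the authors' MATLAB/Simulink test bench, and the test tone, window and bin-width choices are assumptions.

```python
import numpy as np

def sinad_thd(signal, fs, f0, n_harmonics=5):
    """Estimate SINAD and THD (dB) of a sampled tone at frequency f0."""
    n = len(signal)
    window = np.hanning(n)
    spectrum = np.abs(np.fft.rfft(signal * window)) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    def band_power(f, width=3):
        # Sum a few bins around f to absorb windowing leakage.
        k = np.argmin(np.abs(freqs - f))
        return spectrum[max(k - width, 0):k + width + 1].sum()

    p_signal = band_power(f0)
    p_harm = sum(band_power(h * f0) for h in range(2, n_harmonics + 1))
    p_total = spectrum[1:].sum()                    # exclude the DC bin
    p_noise_dist = p_total - p_signal
    sinad = 10 * np.log10(p_signal / p_noise_dist)
    thd = 10 * np.log10(p_harm / p_signal)
    return sinad, thd

# Example: a 3 kHz tone sampled at 25 MHz with a little added noise
fs, f0 = 25e6, 3e3
t = np.arange(2**16) / fs
x = np.sin(2 * np.pi * f0 * t) + 1e-3 * np.random.randn(t.size)
print(sinad_thd(x, fs, f0))
```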

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based DAC, virtual simulation, weighted least pth norm.

212 Buildings Founded on Thermal Insulation Layer Subjected to Earthquake Load

Authors: D. Koren, V. Kilar

Abstract:

Modern energy-efficient houses are often founded on a thermal insulation (TI) layer placed under the building's RC foundation slab. The purpose of the paper is to identify the potential problems of buildings founded on a TI layer from the seismic point of view. The two main goals of the study were to assess the seismic behavior of such buildings, and to search for the critical structural parameters affecting the response of the superstructure as well as of the extruded polystyrene (XPS) layer. As a test building, a multi-storey RC frame structure with and without the XPS layer under the foundation slab has been investigated utilizing nonlinear dynamic (time-history) and static (pushover) analyses. The structural response has been investigated with reference to the following performance parameters: i) the building's lateral roof displacements, ii) edge compressive and shear strains of the XPS, iii) horizontal accelerations of the superstructure, iv) plastic hinge patterns of the superstructure, v) the part of the foundation in compression, and vi) deformations of the underlying soil and vertical displacements of the foundation slab (i.e., identification of potential uplift). The results have shown that in the case of taller and stiffer structures lying on firm soil, the use of XPS under the foundation slab might induce amplified structural peak responses compared to the building models without XPS under the foundation slab. The analysis has revealed that the response of the superstructure as well as of the XPS is substantially affected by the stiffness of the foundation slab.
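
A minimal sketch of the kind of time-history calculation underlying such comparisons is given below: a linear single-degree-of-freedom oscillator integrated with the Newmark average-acceleration method, run once on a stiff base and once on a softer spring standing in for the XPS layer. The mass, period, record and stiffness reduction are illustrative assumptions, not the multi-storey FE models of the paper.

```python
import numpy as np

def newmark_sdof(ag, dt, m, k, zeta=0.05, beta=0.25, gamma=0.5):
    """Linear SDOF response history to ground acceleration ag (m/s^2),
    Newmark average-acceleration method (incremental formulation)."""
    c = 2.0 * zeta * np.sqrt(k * m)
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (-m * ag[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        dp = (-m * (ag[i + 1] - ag[i])
              + (m / (beta * dt) + gamma / beta * c) * v[i]
              + (m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c) * a[i])
        du = dp / k_eff
        dv = gamma / (beta * dt) * du - gamma / beta * v[i] + dt * (1 - gamma / (2 * beta)) * a[i]
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# Example: a building-like oscillator (T = 0.5 s) on a stiff base vs. a softer "XPS" spring.
m = 1.0e5                                   # kg (assumed)
k_fixed = (2 * np.pi / 0.5) ** 2 * m        # stiffness giving a 0.5 s period
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 10, 0.01))  # synthetic record
u_fixed, _, _ = newmark_sdof(ag, 0.01, m, k_fixed)
u_soft, _, _ = newmark_sdof(ag, 0.01, m, 0.5 * k_fixed)   # softer foundation layer (assumed)
print(abs(u_fixed).max(), abs(u_soft).max())
```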

Keywords: Extruded polystyrene (XPS), foundation on thermal insulation, energy-efficient buildings, nonlinear seismic analysis, seismic response, soil–structure interaction.

211 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical data obtained from finite element simulations. The outcomes of the study will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases. It will be used to study the coupled dynamics of thin-walled basic structures, which are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
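
A minimal sketch of the Discrete POD step itself is shown below: the snapshot matrix from a numerical database is factorized with an SVD, and the relative modal energies indicate how strongly the fields are coupled. The synthetic two-mode data stand in for an FE displacement database and are purely illustrative.

```python
import numpy as np

def pod_modes(snapshots, n_modes=4):
    """Discrete Proper Orthogonal Decomposition of a snapshot matrix.

    snapshots : (n_dof, n_time) array of FE displacement (or velocity) fields,
                one column per stored time step.
    Returns the first n_modes spatial modes, their energy fractions, and the
    time-dependent modal coordinates.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                       # fluctuations about the mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)               # fraction of energy captured per mode
    modes = U[:, :n_modes]                     # spatial POD modes
    coords = np.diag(s[:n_modes]) @ Vt[:n_modes, :]   # temporal coefficients
    return modes, energy[:n_modes], coords

# Example with synthetic data standing in for an FE database:
# two coupled "fields" (bending + torsion) sampled at 200 DOFs and 500 steps.
x = np.linspace(0, 1, 200)
t = np.linspace(0, 10, 500)
data = (np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * t))
        + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(5 * np.pi * t)))
modes, energy, coords = pod_modes(data)
print(energy)   # energy shares indicate how strongly the fields participate
```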

Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.

210 The Use of Information and Communication Technologies in Electoral Procedures: Comments on Electronic Voting Security

Authors: Magdalena Musiał-Karg

Abstract:

The expansion of telecommunications and the progress of electronic media constitute important elements of our times. The recent worldwide convergence of information and communication technologies (ICT) and the dynamic development of the mass media are leading to noticeable changes in the functioning of contemporary states and societies. Currently, modern technologies play increasingly important roles and filter down to almost every field of contemporary human life. This results in the growth of online interactions, as can be observed in the remarkable increase in the number of people with home PCs and Internet access. Proof of this is undoubtedly the emergence and use of concepts such as e-society, e-banking, e-services, e-government, e-participation and e-democracy. The newly coined word e-democracy evidences that modern technologies have also been widely used in politics. Without any doubt, in most countries all actors of the political market (politicians, political parties, servants in the political/public sector, the media) use modern forms of communication with society. Most of these modern technologies improve the processes of receiving and sending information to citizens, communication with the electorate, and also, what seems to be the biggest advantage, electoral procedures. Thanks to the implementation of ICT, the interaction between politicians and the electorate is improved. The main goal of this text is to analyze electronic voting (e-voting), one of the important forms of electronic democracy, in terms of its security aspects. The author aims to answer questions about the security of electronic voting as an additional form of participation in elections and referenda.

Keywords: Electronic democracy, electronic participation, electronic voting, security of e-voting, ICT.

209 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection

Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi

Abstract:

It is obvious that at the present time the Internet has become an indispensable part of human life. The Internet has provided diverse opportunities to make life easy for human beings through the adoption of various channels, among them email, internet banking, and video conferencing. Email is one of the easiest means of communication, hugely accepted among individuals and organizations globally, but over the decades the security integrity of this platform has been challenged by malicious activities such as phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, and social security numbers. This activity has caused a lot of financial damage to email users globally, which has resulted in bankruptcy, sudden death of victims, and other health-related conditions. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing have been compared with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models is obtained from the Kaggle online data repository, and three classifiers, decision tree, Naïve Bayes, and logistic regression, are each used in a bagging ensemble. Results from the study show that the decision tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
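
A minimal scikit-learn sketch of the kind of hybrid filter-wrapper plus bagging pipeline described here is shown below; the feature counts, estimators and dataset handling are illustrative assumptions rather than the exact configuration used in the study.

```python
from sklearn.pipeline import Pipeline
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X, y: feature matrix and labels from a phishing e-mail dataset (not included here).
def build_model():
    return Pipeline(steps=[
        # Filter stage: keep the 30 features with the highest mutual information (assumed count).
        ("filter", SelectKBest(score_func=mutual_info_classif, k=30)),
        # Wrapper stage: recursive feature elimination around a simple estimator (assumed choice).
        ("wrapper", RFE(LogisticRegression(max_iter=1000), n_features_to_select=15)),
        # Bagging ensemble of CART trees; analogous setups apply for NB / LR.
        ("bagging", BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)),
    ])

# Example usage (assuming X, y are loaded):
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# model = build_model().fit(X_train, y_train)
# print(accuracy_score(y_test, model.predict(X_test)))
```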

Keywords: Ensemble, hybrid, filter-wrapper, phishing.

208 Stereo Motion Tracking

Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer

Abstract:

Motion tracking and stereo vision are complicated, albeit well-understood problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study explores a strategy that combines two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of objects using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in high-security surveillance scenes such as the premises of bank vaults, prisons or other detention facilities, as well as in low-cost applications in supermarkets and car parking lots.
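
The sketch below illustrates the two ingredients named above, depth from disparity for a calibrated stereo pair and a constant-velocity Kalman filter over the resulting 3-D measurement; the camera parameters, noise covariances and pixel coordinates are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(xl, xr, focal_px, baseline_m):
    """Depth of a matched point from its horizontal pixel coordinates in the
    left/right rectified images of a calibrated stereo pair."""
    disparity = float(xl - xr)
    return focal_px * baseline_m / max(disparity, 1e-6)

class ConstantVelocityKF:
    """Minimal Kalman filter tracking (x, y, z, vx, vy, vz) from noisy
    3-D measurements (x, y from one image, z from stereo depth)."""
    def __init__(self, dt, q=1e-2, r=1e-1):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)          # constant-velocity motion model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)
        self.R = r * np.eye(3)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the stereo-derived 3-D measurement.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]

# Example: one tracking step from a feature matched at (420, 300) / (402, 300)
kf = ConstantVelocityKF(dt=1 / 30)
z_depth = depth_from_disparity(420, 402, focal_px=700.0, baseline_m=0.12)
print(kf.step([420.0, 300.0, z_depth]))
```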

Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.

207 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheets subjected to perforation tests, based on experiments coupled with numerical simulation, is presented. The impact problems (penetration and perforation) of metallic plates have been of interest for a long time, and experimental, analytical as well as numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure the initial and residual velocities, from which the ballistic curve and the ballistic limit are obtained. The energy balance is also reported, together with the energy absorbed by the aluminum, including the ballistic curve and ballistic limit. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical-nose-shaped projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation processes. The ballistic curve obtained numerically is compared with and verified against the experimental one, and the failure patterns are presented using the optimal mesh densities which provide stability of the results. A good agreement between the numerical and experimental results is observed.

Keywords: Aluminum alloy, ballistic behavior, failure criterion, numerical simulation.

206 Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic

Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman

Abstract:

An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS which uses an image-based template matching algorithm to detect if a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity of 84.3% and a specificity of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point-of-regard determination.
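
For reference, the sensitivity and specificity figures reported above follow directly from a confusion matrix, and an ROC curve is obtained by sweeping the template-matching threshold; the sketch below shows both on synthetic scores and is not the authors' implementation.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels, e.g. 1 = 'driver looked at this windscreen cell'."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

def roc_curve(scores, y_true, n_thresholds=50):
    """Sweep a decision threshold over template-matching scores to build an
    ROC curve as (false-positive rate, true-positive rate) points."""
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds)
    points = []
    for th in thresholds:
        tpr, spec = sensitivity_specificity(y_true, scores >= th)
        points.append((1.0 - spec, tpr))
    return np.array(points)

# Example with synthetic matching scores for one windscreen cell:
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.8, 0.1, 100), rng.normal(0.5, 0.1, 100)])
labels = np.concatenate([np.ones(100), np.zeros(100)])
print(sensitivity_specificity(labels, scores >= 0.65))
```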

Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.

205 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth are becoming increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. The worldwide observed changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for using statistical downscaling techniques for projecting local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
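
A minimal sketch of the statistical-downscaling idea, a regression from large-scale GCM predictors to a station-scale predictand whose output can then feed a hydrologic model, is given below; the predictors, record lengths and synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative statistical downscaling: regress a station-scale predictand
# (e.g. monthly precipitation) on large-scale GCM predictors (e.g. mean sea-level
# pressure, specific humidity, geopotential height) over an overlapping period.
def fit_downscaling_model(gcm_predictors, station_series):
    """gcm_predictors: (n_months, n_predictors); station_series: (n_months,)."""
    model = make_pipeline(StandardScaler(), LinearRegression())
    return model.fit(gcm_predictors, station_series)

# Example with synthetic data standing in for reanalysis/GCM fields and gauge records:
rng = np.random.default_rng(1)
predictors_hist = rng.normal(size=(360, 3))                    # 30 years, 3 predictors (assumed)
station_hist = 50 + 10 * predictors_hist @ [1.0, -0.5, 0.3] + rng.normal(0, 5, 360)
model = fit_downscaling_model(predictors_hist, station_hist)

predictors_future = rng.normal(0.3, 1.0, size=(360, 3))        # future GCM scenario (synthetic)
projected_precip = model.predict(predictors_future)            # input for hydrologic models
print(projected_precip.mean())
```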

Keywords: Climate Change, Downscaling, GCM, RCM.

204 Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm

Authors: B. Nassar, W. Hussein, M. Mokhtar

Abstract:

The critical concern of satellite operations is to ensure the health and safety of satellites. The worst case in this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can also result in compromised mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e., a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously improved to reduce the time required to respond to changes in a satellite's state of health; a fast assessment of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the problem above coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper we present an unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft: data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions, normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between the two operating conditions. Furthermore, the algorithm provides competent prediction information as well as added insight and physical interpretation of the ADCS operation.
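
A compact sketch of a PCA-based monitor of this kind is given below: it fits principal components on nominal telemetry and flags samples whose Hotelling T-squared or residual (Q/SPE) statistics exceed empirical control limits. The component count, limits and synthetic ADCS-like data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

class PCATelemetryMonitor:
    """PCA-based monitoring of multivariate telemetry: fit on nominal data,
    then flag samples with large Hotelling T^2 or residual (Q/SPE) statistics."""
    def __init__(self, n_components=3):
        self.pca = PCA(n_components=n_components)

    def fit(self, X_nominal):
        self.mean_ = X_nominal.mean(axis=0)
        self.std_ = X_nominal.std(axis=0) + 1e-12
        Z = (X_nominal - self.mean_) / self.std_
        scores = self.pca.fit_transform(Z)
        # Empirical 99th-percentile control limits from the nominal data (assumed choice).
        t2 = np.sum(scores**2 / self.pca.explained_variance_, axis=1)
        q = np.sum((Z - self.pca.inverse_transform(scores))**2, axis=1)
        self.t2_limit_, self.q_limit_ = np.percentile(t2, 99), np.percentile(q, 99)
        return self

    def detect(self, X):
        Z = (X - self.mean_) / self.std_
        scores = self.pca.transform(Z)
        t2 = np.sum(scores**2 / self.pca.explained_variance_, axis=1)
        q = np.sum((Z - self.pca.inverse_transform(scores))**2, axis=1)
        return (t2 > self.t2_limit_) | (q > self.q_limit_)

# Example: train on nominal telemetry, test on a segment with a biased channel
rng = np.random.default_rng(0)
nominal = rng.normal(size=(2000, 8))
faulty = nominal[:200] + np.array([0, 0, 4, 0, 0, 0, 0, 0])   # synthetic fault
monitor = PCATelemetryMonitor().fit(nominal)
print(monitor.detect(faulty).mean())   # fraction of samples flagged as anomalous
```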

Keywords: Space telemetry monitoring, multivariate analysis, PCA algorithm, space operations.

203 Parameters Affecting the Elasto-Plastic Behavior of Outrigger Braced Walls to Earthquakes

Authors: T. A. Sakr, Hanaa E. Abd-El- Mottaleb

Abstract:

Outrigger-braced wall systems are commonly used to provide high-rise buildings with the required lateral stiffness for wind and earthquake resistance. The existence of outriggers adds to the stiffness and strength of walls, as reported by several studies. The effects of different parameters on the elasto-plastic dynamic behavior of outrigger-braced wall systems subjected to earthquakes are investigated in this study. The parameters investigated include outrigger stiffness, concrete strength, and reinforcement arrangement, as the main design parameters in wall design. In addition to significantly affecting the wall behavior, such parameters may change the failure mode and delay crack propagation, and consequently failure, as the wall is excited by earthquakes. A bilinear stress-strain relation for concrete with limited tensile strength, and truss members with a bilinear stress-strain relation for the reinforcement, were used in the finite element analysis of the problem. The well-known El Centro 1940 earthquake record is used in the study. Emphasis was given to the lateral drift, normal stresses and crack pattern as behavior-controlling determinants. The results indicated a significant effect of the studied parameters, such that a stiffer outrigger, higher-grade concrete and concentration of the reinforcement at the wall edges enhance the behavior of the system. Concrete stresses and cracking behavior are greatly improved, while smaller improvements in drift are observed.

Keywords: Structures, High rise, Outrigger, Shear Wall, Earthquake, Nonlinear.

202 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea

Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar

Abstract:

This study aims to investigate the amount of energy (economic wave energy potential) that can be obtained from existing wave energy converters in the high-wave-energy-potential region of the Black Sea, as well as their performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested, layered SWAN wave model (version 41.01AB), forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at a time interval of 2 hours was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. The annual sea-state characteristic matrices for five different depths along a line perpendicular to the coastline were calculated for 31 years. Based on the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, probability distribution tables of the mean wave period (or wave energy period) and significant wave height were calculated. Then, by combining these tables, the energy that the wave energy converter systems at each depth can produce under the present wave climate was determined. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of different depths on the energy converter systems is presented. The Oceantic device at 50, 75 and 100 m depths and the Oyster device at 5 and 25 m depths present the best performance. Over the 31-year period, 1998 was the most dynamic year and 1989 the least.
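
The core of such a power-production assessment is a bin-by-bin product of the device power matrix and the site's sea-state occurrence matrix; the toy numbers below only illustrate the bookkeeping and are not the SWAN-derived matrices of the study.

```python
import numpy as np

def annual_energy_production(power_matrix, occurrence_hours):
    """Annual energy (MWh) of a wave energy converter.

    power_matrix     : (n_Hs, n_Te) device power output in kW for each
                       significant-wave-height / energy-period bin.
    occurrence_hours : (n_Hs, n_Te) hours per year the sea state falls in each bin,
                       taken from the site's scatter (characteristic) matrix.
    """
    return float(np.sum(power_matrix * occurrence_hours) / 1000.0)

# Toy example with a 3 x 4 scatter diagram (all values are illustrative only):
power_kw = np.array([[  0,  20,  40,  30],
                     [ 10,  80, 150, 120],
                     [ 20, 150, 300, 250]])
hours = np.array([[900, 700, 300, 100],
                  [600, 500, 250,  80],
                  [200, 150,  60,  20]])
aep = annual_energy_production(power_kw, hours)
print(aep, "MWh/yr")

# Capacity factor follows directly from an assumed rated power of the device:
rated_kw = 300.0
print(aep * 1000 / (rated_kw * 8760))
```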

Keywords: Annual power production, Black Sea, efficiency, power production performance, wave energy converter.

201 Performance of BLDC Motor under Kalman Filter Sensorless Drive

Authors: Yuri Boiko, Ci Lin, Iluju Kiringa, Tet Yeap

Abstract:

The performance of a permanent magnet brushless direct current (BLDC) motor controlled by a Kalman filter based position-sensorless drive is studied in terms of its dependence on variations of the system's parameters. The effects of changes in the system's parameters on the dynamic behavior of the state variables are verified. The closed-loop control scheme with the Kalman filter in the feedback line is simulated. Two separate data-sampling modes are distinguished in analyzing the feedback output from the BLDC motor: (1) equal angular separation and (2) equal time intervals. In case (1), the data are collected at equal intervals Δθ of the rotor's angular position θi, i.e., keeping Δθ = const. In case (2), the data collection time points ti are separated by equal sampling time intervals, Δt = const. The effects of the parameter changes on the sensorless control flow are demonstrated, in particular the reduction of instability torque ripples, switching spikes, and torque load balancing. It is specifically shown that efficient suppression of commutation-induced instability torque ripples is achievable by selecting the sampling rate in the Kalman filter settings above a certain critical value. The computational cost of such suppression is shown to be higher for motors with lower inductance values of the windings.

Keywords: BLDC motor, Kalman filter, sensorless drive, state variables, instability torque ripples reduction, sampling rate.

200 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material

Authors: S. Boria

Abstract:

In recent years it has become possible to improve the crashworthiness of an automotive body structure from the beginning of the design stage, thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods which are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems to be excellent in accuracy, robustness and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was applied in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subjected to some design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
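
A minimal sketch of the Kriging-based surrogate workflow, DOE sampling, Gaussian-process fitting, and optimization on the meta-model instead of the FE solver, is shown below; the analytic stand-in for the crash simulation, the design variables and the bounds are assumptions, not the LS-DYNA/LS-OPT setup of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.optimize import differential_evolution

# Suppose run_crash_simulation(x) returns the specific energy absorption (SEA)
# of the bumper for a vector x of geometrical parameters; here it is replaced by
# a cheap analytic stand-in so the sketch runs without an FE solver.
def run_crash_simulation(x):
    thickness, radius = x
    return -((thickness - 2.0)**2 + 0.5 * (radius - 30.0)**2) + 100.0

bounds = [(1.0, 4.0), (10.0, 50.0)]          # design-variable bounds (assumed)

# 1) Design of experiments: random sampling of the design space.
rng = np.random.default_rng(0)
X_doe = np.column_stack([rng.uniform(lo, hi, 20) for lo, hi in bounds])
y_doe = np.array([run_crash_simulation(x) for x in X_doe])

# 2) Fit a Kriging (Gaussian-process) meta-model to the DOE results.
kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
kriging.fit(X_doe, y_doe)

# 3) Optimize on the cheap surrogate instead of the expensive simulations.
result = differential_evolution(lambda x: -kriging.predict(x.reshape(1, -1))[0], bounds)
print("optimum design:", result.x, "predicted SEA:", -result.fun)
```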

Keywords: Composite material, crashworthiness, finite element analysis, optimization.

199 Medical Imaging Fusion: A Teaching-Learning Simulation Environment

Authors: Cristina M. R. Caridade, Ana Rita F. Morais

Abstract:

The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications in health care facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool for medical image fusion, developed in MATLAB using a Graphical User Interface, that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application uses different algorithms and medical fusion techniques in real time, allowing users to view original and fused images, compare processed and original images, adjust parameters and save images. The proposed tool, an innovative teaching and learning environment, provides a dynamic and motivating teaching simulation for biomedical engineering students to acquire knowledge about medical image fusion techniques, skills necessary for the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate and advance the students' knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.
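
Although the tool itself is built in MATLAB, the two simplest fusion rules such an environment could expose are easy to state; the sketch below shows a pixel-wise weighted-average rule and a crude "most salient wins" rule on synthetic images, as an illustration only (the toolbox's actual algorithms are not reproduced here).

```python
import numpy as np

def weighted_average_fusion(img_a, img_b, alpha=0.5):
    """Pixel-wise weighted-average fusion of two co-registered grayscale images
    (e.g. a CT and an MRI slice), scaled to [0, 1]."""
    a = img_a.astype(float) / img_a.max()
    b = img_b.astype(float) / img_b.max()
    return alpha * a + (1.0 - alpha) * b

def max_deviation_fusion(img_a, img_b):
    """Keep, at each pixel, the value deviating most from its image mean,
    a crude proxy for 'most salient detail wins' fusion rules."""
    a, b = img_a.astype(float), img_b.astype(float)
    return np.where(np.abs(a - a.mean()) >= np.abs(b - b.mean()), a, b)

# Example with synthetic "modalities" (real use would load co-registered DICOM slices):
y, x = np.mgrid[0:128, 0:128]
ct_like = np.exp(-((x - 40)**2 + (y - 64)**2) / 500.0)
mri_like = np.exp(-((x - 90)**2 + (y - 64)**2) / 900.0)
fused = weighted_average_fusion(ct_like, mri_like, alpha=0.6)
print(fused.shape, fused.min(), fused.max())
```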

Keywords: Image fusion, image processing, teaching-learning simulation tool, biomedical engineering education.

198 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict the solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and the discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particles, which leads to high erosion on the bottom side of the pipeline.

Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε Model.

197 Linguistic, Pragmatic and Evolutionary Factors in Wason Selection Task

Authors: Olimpia Matarazzo, Fabrizio Ferrara

Abstract:

In two studies we tested the hypothesis that the appropriate linguistic formulation of a deontic rule, i.e. the formulation which clarifies the monadic nature of deontic operators, should produce more correct responses than the conditional formulation in the Wason selection task. We tested this assumption by presenting a prescription rule and a prohibition rule in conditional vs. proper deontic formulation. We contrasted this hypothesis with two other hypotheses derived from social contract theory and relevance theory. According to the first theory, a deontic rule expressed in terms of cost-benefit should elicit a cheater-detection module, sensitive to mental state attributions and thus able to discriminate intentional rule violations from accidental rule violations. We tested this prediction by distinguishing the two types of violations. According to relevance theory, performance in the selection task should improve by increasing cognitive effect and decreasing cognitive effort. We tested this prediction by focusing the experimental instructions on the rule vs. the action covered by the rule. In study 1, in which 480 undergraduates participated, we tested these predictions through a 2 x 2 x 2 x 2 (type of rule x rule formulation x type of violation x experimental instructions) between-subjects design. In study 2, carried out by means of a 2 x 2 (rule formulation x type of violation) between-subjects design, we retested the rule-formulation hypothesis against the cheater-detection hypothesis through a new version of the selection task in which intentional vs. accidental rule violations were better discriminated; 240 undergraduates participated in this study. The results corroborate our hypothesis and challenge the contrasting assumptions. However, they show that the conditional formulation of deontic rules produces a lower performance than what is reported in the literature.

Keywords: Deontic reasoning; Evolutionary, linguistic, logical, pragmatic factors; Wason selection task

196 On the Mathematical Structure and Algorithmic Implementation of Biochemical Network Models

Authors: Paola Lecca

Abstract:

Modeling and simulation of biochemical reactions is of great interest in the context of systems biology. The central dogma of this re-emerging area states that it is the system dynamics and organizing principles of complex biological phenomena that give rise to the functioning and function of cells. Cell functions, such as growth, division, differentiation and apoptosis, are temporal processes that can be understood if they are treated as dynamic systems. Systems biology focuses on an understanding of functional activity from a system-wide perspective and, consequently, it is defined by two key questions: (i) How do the components within a cell interact so as to bring about its structure and functioning? (ii) How do cells interact so as to develop and maintain higher levels of organization and functions? In recent years, wet-lab biologists have embraced mathematical modeling and simulation as two essential means toward answering the above questions. The credo of dynamical systems theory is that the behavior of a biological system is given by the temporal evolution of its state. Our understanding of the time behavior of a biological system can be measured by the extent to which a simulation mimics the real behavior of that system; deviations of a simulation indicate either limitations or errors in our knowledge. The aim of this paper is to summarize and review the main conceptual frameworks in which models of biochemical networks can be developed. In particular, we review the stochastic molecular modelling approaches, reporting the principal conceptualizations suggested by A. A. Markov, P. Langevin, A. Fokker, M. Planck, D. T. Gillespie, N. G. van Kampen, and more recently by D. Wilkinson, O. Wolkenhauer, P. S. Jöberg and by the author.
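
As a concrete instance of the stochastic framework associated with Gillespie, the sketch below implements the standard stochastic simulation algorithm (SSA) for a toy birth-death network; the rate constants and the network itself are illustrative assumptions.

```python
import numpy as np

def gillespie_ssa(x0, stoichiometry, propensities, t_max, rng=None):
    """Gillespie's stochastic simulation algorithm for a biochemical network.

    x0            : initial copy numbers of the species, shape (n_species,)
    stoichiometry : (n_reactions, n_species) state-change vectors
    propensities  : function x -> array of reaction propensities a_j(x)
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)              # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)            # which reaction fires
        x += stoichiometry[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Example: a simple birth-death process standing in for gene expression.
# R1: 0 -> X at rate k1;  R2: X -> 0 at rate k2 * X   (rate constants assumed)
k1, k2 = 10.0, 0.1
S = np.array([[+1], [-1]])
t, x = gillespie_ssa([0], S, lambda x: np.array([k1, k2 * x[0]]), t_max=100.0)
print(x[-1])   # copy number fluctuates around k1 / k2 = 100
```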

Keywords: Mathematical structure, algorithmic implementation, biochemical network models.

195 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery

Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko

Abstract:

In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analyzed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realized via a two-way coupling between the blood flow modelled via LES and the deforming vessel. Information on the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, with the boundary condition of the current time step depending on previous solutions. The fluctuation of the velocity in the post-stenotic region was analyzed in the study; the axial velocity at the normalized position Z = 0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined, and in particular the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.

Keywords: Large Eddy Simulation, Fluid Structural Interaction, Constricted Artery, Computational Fluid Dynamics.

194 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure

Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar

Abstract:

This article presents an alternative collapse-capacity intensity measure in three-element vector form which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined by which the effects of the spectral ordinates, up to the effective period (2T1), on the intensity measure are taken into account. The methodology permits meeting the hazard-levelled target extreme event in probabilistic and deterministic forms. A MATLAB code involving OpenSees is developed to calculate the collapse capacities of 8 archetype RC structures of 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field set presented by FEMA is used in a series of nonlinear analyses. Eight linear relationships are developed for the 8 structures, leading to correlation coefficients of up to 0.93. A near-field collapse-capacity prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records, leading to good agreement. Implementation of the proposed equation for four archetype RC structures demonstrated different collapse capacities at near-field sites compared to those of FEMA. The reasons for the differences are believed to be the accounting for spectral shape effects.

Keywords: Collapse capacity, fragility analysis, spectral shape effects, IDA method.

193 Progressive AAM Based Robust Face Alignment

Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim

Abstract:

AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. In case the initial values are somewhat distant from the global optimum values, there is a good possibility that AAM-based face alignment may converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and later localizes the feature points of the whole face using this information. The proposed algorithm utilizes the fact that the feature points of the inner part of the face are less variant and less affected by the background surrounding the face than those of the outer part (like the chin contour). The proposed algorithm consists of two stages: a modeling and relation-derivation stage and a fitting stage. The modeling and relation-derivation stage first constructs two AAM models, the inner-face AAM model and the whole-face AAM model, and then derives the relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. In the fitting stage, the proposed algorithm aligns the face progressively through two phases. In the first phase, the algorithm finds the feature parameter vector fitting the inner-face AAM model to a new input face image; in the second phase, it localizes the whole facial feature points of the new input face image based on the whole-face AAM model, using the initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Through experiments, it is verified that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.

Keywords: Face Alignment, AAM, facial feature detection, model matching.

192 A Study of Priority Evaluation and Resource Allocation for Revitalization of Cultural Heritages in the Urban Development

Authors: Wann-Ming Wey, Yi-Chih Huang

Abstract:

Proper maintenance and preservation of significant cultural heritages or historic buildings is necessary. It not only enhances environmental benefits and a sense of community, but also preserves a city's history and people's memory. It allows the next generation to get a glimpse of our past and achieves the goal of sustainably preserved cultural assets. However, the management of maintenance work has so far not been appropriate for many designated heritages or historic buildings, and the planning and implementation of reuse has yet to reach a breakthrough specification; this reduces the heritages to a mere formality of being "reserved" instead of the real meaning of "conservation". The restoration and preservation of cultural heritage is an important study issue because of its historical significance, symbolism, and economic benefits. However, decision makers such as officials from the public sector often face the question of which heritage should be prioritized for restoration under the available limited budgets. Only very few techniques are available today to determine appropriate restoration priorities for diverse historical heritages, perhaps because of a lack of systematized decision-making aids proposed before. In the past, discussions of the management and maintenance of cultural assets were limited to the selection of reuse alternatives instead of the allocation of resources. In view of this, this research adopts integrated research methods to solve the existing problems that decision-makers might encounter when allocating resources to the management and maintenance of heritages and historic buildings.

The purpose of this study is to develop a sustainable decision-making model for local governments to resolve these problems. We propose an alternative decision-support model to prioritize restoration needs within limited budgets. The model is constructed based on the fuzzy Delphi, fuzzy analytic network process (FANP) and goal programming (GP) methods. In order to avoid misallocating resources, this research proposes a precise procedure that takes multi-stakeholder views, limited costs and resources into consideration. The combination of many factors and goals has also been taken into account to find the highest-priority, feasible solution. To illustrate the proposed approach, seven cultural heritages in Taipei city have been used as an empirical study, and the results are analyzed in depth to explain the application of our proposed approach.

Keywords: Cultural Heritage, Historic Buildings, Priority Evaluation, Multi-Criteria Decision Making, Goal Programming, Fuzzy Analytic Network Process, Resource Allocation.

191 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records

Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar

Abstract:

Most reinforced concrete structures designed only for heavy (gravity) loads have large transverse reinforcement spacing values and therefore suffer severe failure after intense ground motions. The main goal of this paper is to compare the shear and axial failure of concrete bending frames typical of Tehran using Incremental Dynamic Analysis (IDA) under near- and far-field records. For this purpose, IDA of 5-, 10-, and 15-story concrete structures was carried out under seven far-fault records and five near-fault records. The results show that in two-dimensional models of short-rise, mid-rise and high-rise reinforced concrete frames located on Type-3 soil, increasing the spacing of the transverse reinforcement can increase the maximum inter-story drift ratio by up to 37%. According to the results for the 5-, 10-, and 15-story reinforced concrete models located on Type-3 soil, records with characteristics such as fling-step and directivity create larger maximum inter-story drift values than far-fault earthquakes. The results also indicated that under seismic excitation by earthquakes featuring directivity or fling-step, the probabilities of failure and the rates at which the failure probability increases are much smaller than the corresponding values for far-fault earthquakes. However, for near-fault records the probability of exceedance occurs at lower seismic intensities compared to far-fault records.

Keywords: Directivity, fling-step, fragility curve, IDA, inter-story drift ratio.

190 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV-smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interface power electronic converters usually have a negative impact on power quality and on the stabilization of the modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems as an elegant alternative for renewable energy utilization with backup battery storage, for electric utility energy and demand-side management, to provide the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-Ion battery storage interface scheme for low-voltage distribution/utilization, using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are done using the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding a hybrid linear, motorized-inrush and nonlinear load from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-Ion storage battery.

Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).

189 Data Privacy and Safety with Large Language Models

Authors: Ashly Joseph, Jithu Paulose

Abstract:

Large language models (LLMs) have revolutionized natural language processing capabilities, enabling applications such as chatbots, dialogue agents, and image and video generators. Nevertheless, their training on extensive datasets comprising personal information poses notable privacy and safety hazards. This study examines methods for addressing these challenges, specifically focusing on approaches to enhance the security of LLM outputs, safeguard user privacy, and adhere to data protection rules. We explore several methods, including post-processing detection algorithms, content filtering, reinforcement learning from human and AI feedback, and the difficulties of maintaining a balance between model safety and performance. The study also emphasizes the dangers of unintentional data leakage, privacy issues related to user prompts, and the possibility of data breaches. We highlight the significance of corporate data governance rules and optimal methods for engaging with chatbots. In addition, we analyze the development of data protection frameworks, evaluate the adherence of LLMs to the General Data Protection Regulation (GDPR), and examine privacy legislation in academic and business policies. We demonstrate the difficulties and remedies involved in preserving data privacy and security in the age of sophisticated artificial intelligence by employing case studies and real-life instances. This article seeks to educate stakeholders on practical strategies for improving the security and privacy of LLMs, while also ensuring their responsible and ethical implementation.

Keywords: Data privacy, large language models, artificial intelligence, machine learning, cybersecurity, general data protection regulation, data safety.
