Search results for: Kernel Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18737

14207 Experimental Analysis of Structure Borne Noise in an Enclosure

Authors: Waziralilah N. Fathiah, A. Aminudin, U. Alyaa Hashim, T. Vikneshvaran, D. Shakirah Shukor

Abstract:

This paper presents an experimental analysis of structure-borne noise in a rectangular enclosure prototype made by joining sheet aluminum and plywood. The study is significant because the annoyance caused by structure-borne noise is often overlooked. Modal analysis is carried out to characterize the structure's behaviour in the frequency domain from 0 Hz to 200 Hz; a number of modes are identified and the mode shapes are categorized. The modal experiment diagnoses the structural behaviour, while a microphone captures the sound. Spectral testing is performed on the enclosure: it is excited by a shaker, and as it vibrates, the vibration and noise responses are recorded by a tri-axial accelerometer and a microphone, respectively. Measurements are taken at each node of a grid laid out on the enclosure surface, with both responses acquired simultaneously. The experimentally identified modes are validated against simulations performed in MSC Nastran. To reduce the structure-borne noise, a mitigation method is applied in which stiffener plates are placed perpendicularly on the sheet aluminum; this yields a successful reduction in structure-borne noise by the end of the study.
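
As a minimal illustration of the kind of processing behind such spectral testing, the sketch below estimates a frequency response function (FRF) between a shaker force signal and an accelerometer response using the standard H1 estimator, then picks spectral peaks below 200 Hz as candidate modes. The sampling rate, filter, and synthetic signals are assumptions for illustration only, not the authors' data.

```python
import numpy as np
from scipy import signal

fs = 1024  # assumed sampling rate in Hz

# Synthetic stand-ins for the measured shaker force and accelerometer response
rng = np.random.default_rng(0)
force = rng.standard_normal(fs * 30)  # broadband excitation
b, a = signal.butter(2, [40 / (fs / 2), 60 / (fs / 2)], "bandpass")
accel = signal.lfilter(b, a, force) + 0.05 * rng.standard_normal(force.size)

# H1 estimator: FRF = cross-spectrum of input/output over auto-spectrum of input
f, Pxy = signal.csd(force, accel, fs=fs, nperseg=2048)
_, Pxx = signal.welch(force, fs=fs, nperseg=2048)
frf = Pxy / Pxx

# Peaks of |FRF| in the 0-200 Hz band indicate candidate modal frequencies
band = f <= 200
peaks, _ = signal.find_peaks(np.abs(frf[band]))
print("candidate modes (Hz):", f[band][peaks])
```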

Keywords: enclosure, modal analysis, sound analysis, structure borne-noise

Procedia PDF Downloads 413
14206 Increasing Health Education Tools Satisfaction in Nursing Staffs

Authors: Lu Yu Jyun

Abstract:

Background: Health education is important nursing work aiming to strengthen the self-care ability of patients and their family members. Our department educates through three methods: speech education, flyers, and demonstration videos. The satisfaction rate with health education tool use was 54.3% among nursing staff; the main reason was that there had been no storage area for flyers, causing extra workload in accessing them. The satisfaction rate with health education among patients and families was 70.7%. We aimed to improve this situation between 13 April and 6 June 2021. Method: We introduced the ECRS (eliminate, combine, rearrange, simplify) method to remove repetitive and redundant actions and redesigned the health education tool usage workflow to improve nursing staff efficiency and further enhance care quality and job satisfaction. Result: The satisfaction rate with health education tool usage among nursing staff rose from 54.3% to 92.5%, and the satisfaction rate with health education among patients and families rose from 70.7% to 90.2%. Conclusion: The access time for health education tools dropped from 10 minutes to 3 minutes, significantly reducing the nursing staff's workload. An estimated 1,213 sheets of paper are saved per month, or 14,556 per year, which also benefits the environment. Owing to its high efficiency, the health education map has been implemented in other nursing departments since October, and it makes health education tools more user-friendly.

Keywords: health, education tools, satisfaction, nursing staff

Procedia PDF Downloads 129
14205 Dynamic Distribution Calibration for Improved Few-Shot Image Classification

Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran

Abstract:

Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled data for training remain a challenge. Limited samples often lead to overfitting due to biased sample distribution. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base and new class samples undergo normalization to mitigate disparate feature magnitudes. A pre-trained model then extracts feature vectors from both classes. The method dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold value approach for new class samples. Given the propensity of similar classes to share feature distributions like mean and variance, this research assumes a Gaussian distribution for feature vectors. Subsequently, distributional features of new class samples are calibrated using a corrected hyperparameter, derived from the distribution features of both adjacent and distant base classes. This calibration augments the new class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on miniImagenet and CUB datasets.
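
A minimal sketch of the general distribution-calibration idea follows, assuming Gaussian class statistics as the abstract does: the mean and covariance for each new-class sample are calibrated from nearby base-class statistics, and augmented features are sampled from the calibrated Gaussian. The paper's threshold-based selection of both adjacent and remote base classes and its corrected hyperparameter are simplified here, and the parameter names (k, alpha) are hypothetical.

```python
import numpy as np

def calibrate_and_sample(x, base_means, base_covs, k=2, alpha=0.2,
                         n_samples=100, rng=None):
    """Calibrate a Gaussian for one new-class feature vector x and sample from it."""
    rng = rng or np.random.default_rng(0)
    # pick the k base classes whose means lie closest to x in the embedding space
    d = np.linalg.norm(base_means - x, axis=1)
    idx = np.argsort(d)[:k]
    # calibrated mean mixes the new sample with the selected base-class means
    mean = (base_means[idx].sum(axis=0) + x) / (k + 1)
    # calibrated covariance: average base covariance plus a spread correction
    cov = base_covs[idx].mean(axis=0) + alpha * np.eye(x.size)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# base_means: (n_base, d) array; base_covs: (n_base, d, d) array; x: (d,) vector
```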

Keywords: deep learning, computer vision, image classification, few-shot learning, threshold

Procedia PDF Downloads 44
14204 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography

Authors: C. Yin, B. Zhang, Y. Liu, J. Cai

Abstract:

Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In these areas, topography has serious effects on AEM system responses; however, until now little work has been reported on the topographic effect in airborne EM systems. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling of both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Considering that the AEM transmitter and receiver are both located in the air, the scattered field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation to yield the final FE equations, whose solution gives the frequency-domain AEM responses. To accelerate the calculation, the free-space response of the source is used as the primary field, and the PARDISO direct solver is used to handle problems with multiple transmitting sources. After the frequency-domain AEM responses are calculated, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of the topographic effect on airborne EM systems, both the frequency- and time-domain AEM responses are simulated for three model groups: 1) a flat half-space model that has a semi-analytical solution for the EM response; 2) a valley or hill earth model; and 3) a valley or hill earth with an anomalous body embedded. Numerical experiments show that, close to the node points of the topography, the AEM responses exhibit sharp changes; special attention therefore needs to be paid to topographic effects when interpreting AEM survey data over rugged areas. Moreover, the profile of the AEM responses mirrors the topographic earth surface. In contrast to the topographic effect, which mainly occurs at the high-frequency end and in early time channels, the EM responses of underground conductors mainly occur at low frequencies and in later time channels. For the signal of the same time channel, the dB/dt field reflects changes of conductivity better than the B-field. This research will serve airborne EM in the identification and correction of topographic effects.

Keywords: 3D, Airborne EM, forward modeling, topographic effect

Procedia PDF Downloads 296
14203 Numerical Simulation of Three-Dimensional Cavitating Turbulent Flow in Francis Turbines with ANSYS

Authors: Raza Abdulla Saeed

Abstract:

In this study, the three-dimensional cavitating turbulent flow in a complete Francis turbine is simulated using a mixture model for cavity/liquid two-phase flows. Numerical analysis is carried out using ANSYS CFX release 12, and the standard k-ε turbulence model is adopted. The computational fluid domain consists of the spiral casing, stay vanes, guide vanes, runner and draft tube, and is discretized with a three-dimensional unstructured tetrahedral mesh. The finite volume method (FVM) is used to solve the governing equations of the mixture model. Results for cavitation on the runner blades under three different boundary conditions are presented and discussed. The numerical results show that the method was successfully applied to simulate the cavitating two-phase turbulent flow through a Francis turbine, with cavitation clearly predicted in the form of water vapor formation inside the turbine. Comparison of the numerical predictions with a real runner shows that the region of higher vapor volume fraction obtained by simulation is consistent with the region of cavitation damage on the runner.

Keywords: computational fluid dynamics, hydraulic francis turbine, numerical simulation, two-phase mixture cavitation model

Procedia PDF Downloads 536
14202 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

The compression index is essential in foundation settlement calculations. The traditional method for determining it is the consolidation test, which is expensive and time-consuming, so many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of several popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations and underestimated by others. Sensitivity analyses of soil parameters, including water content, liquid limit and void ratio, were performed; the results indicate that the compression index predicted from void ratio is the most accurate. An ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters do not provide better predictions than equations with a single soil parameter; in other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations developed with a power regression technique are proposed, which give more accurate predictions than existing equations.
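
The power regression mentioned at the end can be sketched as follows: taking logarithms turns the power model Cc = a * e0^b into a straight-line fit. The (e0, Cc) pairs below are hypothetical stand-ins for the collected consolidation-test database.

```python
import numpy as np

# Hypothetical (initial void ratio, compression index) pairs
e0 = np.array([0.6, 0.8, 1.0, 1.3, 1.6, 2.0])
cc = np.array([0.12, 0.18, 0.25, 0.36, 0.48, 0.65])

# log Cc = log a + b * log e0, so an ordinary least-squares line fit suffices
b, log_a = np.polyfit(np.log(e0), np.log(cc), 1)
a = np.exp(log_a)
print(f"Cc = {a:.3f} * e0^{b:.3f}")

# Prediction for a new void ratio
print("predicted Cc at e0 = 1.1:", round(a * 1.1**b, 3))
```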

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 143
14201 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to equilateral as possible; stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of them. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2^n smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm that we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles that of the longest edge bisection algorithm, there are several notable differences as well.
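
A minimal sketch of one step of the largest angle bisection procedure follows: it locates the vertex with the largest interior angle and, using the angle bisector theorem, splits the triangle at the point where the internal bisector meets the opposite edge. Applying it recursively n times yields the 2^n triangles described above.

```python
import numpy as np

def bisect_largest_angle(tri):
    """Split a triangle (3x2 array of vertex coordinates) at its largest angle."""
    def angle(p, q, r):  # interior angle at vertex p
        u, v = q - p, r - p
        return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    a, b, c = tri
    angles = [angle(a, b, c), angle(b, c, a), angle(c, a, b)]
    i = int(np.argmax(angles))
    p = tri[i]                                  # vertex of the largest angle
    q, r = tri[(i + 1) % 3], tri[(i + 2) % 3]   # endpoints of the opposite edge
    # Angle bisector theorem: the bisector from p meets qr at a point dividing
    # the edge in the ratio |pq| : |pr|
    lq, lr = np.linalg.norm(q - p), np.linalg.norm(r - p)
    m = (lr * q + lq * r) / (lq + lr)
    return np.array([p, q, m]), np.array([p, m, r])

tri = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 2.0]])
t1, t2 = bisect_largest_angle(tri)  # n rounds of this give 2**n triangles
```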

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 373
14200 Traditional Drawing, BIM and Erudite Design Process

Authors: Maryam Kalkatechi

Abstract:

Nowadays, parametric design, scientific analysis, and digital fabrication are dominant, and many architectural practices increasingly seek to incorporate advanced digital software and fabrication in their projects. The erudite design process proposed here, which combines digital and practical aspects within a strong methodological frame, resulted from the author's dissertation research. The digital aspects are the progressive advancements in algorithmic design and simulation software; these have helped firms develop more holistic concepts at the early stage and maintain collaboration among disciplines during the design process. The erudite design process enhances current design processes by encouraging the designer to implement construction and architectural knowledge within the algorithm to achieve successful design processes. It also involves ongoing improvements in applying 3D printing to construction, achieved through 'data-sketches'. The term 'data-sketch' was coined by the author in the recently completed dissertation; it accommodates the decisions of the architect within the algorithm. This paper introduces the erudite design process and its components and summarizes the application of this process in the development of the '3D printed construction unit'. The paper contributes to bridging academia and practice with advanced technology by presenting a design process that shifts dominance from the tool to the learned architect and encourages innovation in design processes.

Keywords: erudite, data-sketch, algorithm design in architecture, design process

Procedia PDF Downloads 256
14199 A Simple Computational Method for the Gravitational and Seismic Soil-Structure-Interaction between New and Existent Buildings Sites

Authors: Nicolae Daniel Stoica, Ion Mierlus Mazilu

Abstract:

This is a numerical research work addressing the design of new buildings within a 3D site of existing buildings. With today's continuous development and congestion of urban centers, a major question arises about the influence of new buildings on an already existing vicinity. Thus, in this study, we focus on how existing buildings may be affected by newly constructed ones and on how far this influence really extends. Modeling the interaction between buildings is not simple anywhere in the world, Romania included. Unfortunately, designers often perform neither the simplified nor the more advanced calculations that would determine how close to reality these 3D influences are. In much of the literature, providing a 'shield' (piles or diaphragm walls) is regarded as sufficient to stop the interaction between buildings, and the soil under the structure is therefore often ignored in calculation models. The main cause for neglecting the soil in the analysis is the complexity of modeling the soil-structure interaction. In this paper, based on a new, simple but efficient methodology, we determine, for a number of case studies and in terms of soil-structure interaction, the influence of a new building on an existing one. The study covers the additional settlement that may occur during execution of the new works and after their completion. It also presents the internal force diagrams and the deflections in the soil for both the original case and the final stage. This is necessary to assess the expected impact of the new building on the existing area.

Keywords: soil, structure, interaction, piles, earthquakes

Procedia PDF Downloads 273
14198 Study of Climate Change Process on Hyrcanian Forests Using Dendroclimatology Indicators (Case Study of Guilan Province)

Authors: Farzad Shirzad, Bohlol Alijani, Mehry Akbary, Mohammad Saligheh

Abstract:

Climate change and global warming are very important issues today, and trends in temperature and precipitation are the most important topics in the environmental sciences; climate change means a change in long-term averages. Iran lies in arid and semi-arid regions owing to its proximity to the equator and its location in the subtropical high-pressure belt. In this respect, the Hyrcanian forest is a green necklace between the Caspian Sea and the south of the Alborz mountain range; at the forty-third session of UNESCO, it was registered as the second natural heritage site of Iran. Beech is one of the most important tree species and the most industrially exploited species of the Hyrcanian forests. In this research, dendroclimatology was applied using tree-ring widths together with temperature and precipitation data from the Shanderman meteorological station located in the study area. The non-parametric Mann-Kendall test was used to investigate the trend of climate change over a 202-year time series of growth rings, and Pearson correlation was used to relate the ring widths of the beech trees to the climatic variables of the region. The results showed that the beech ring-width series had a downward, negative trend that was significant at the 5% level, indicating climate change. The mean minimum, mean, and maximum temperatures and evaporation in the growing season had increasing trends, and annual precipitation had a decreasing trend. Fitting Pearson correlations between ring width and climate, the correlation with the mean temperatures of July, August, and September was negative; the correlation with the mean maximum temperature of February was positive and significant at the 95% level; and the correlation with June precipitation was positive and significant at the 95% level.
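
For reference, the non-parametric Mann-Kendall trend test used in the study can be sketched as below (without the tie correction, for brevity); a two-sided p-value under 0.05 corresponds to the 5% significance level reported. The ring-width series here is a synthetic placeholder with an imposed downward trend.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall trend test: statistic S, normal score Z, two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):           # S counts concordant minus discordant pairs
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p

rng = np.random.default_rng(0)
rings = 1.0 - 0.002 * np.arange(202) + 0.05 * rng.standard_normal(202)
s, z, p = mann_kendall(rings)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}")  # p < 0.05: significant trend
```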

Keywords: climate change, dendroclimatology, hyrcanian forest, beech

Procedia PDF Downloads 85
14197 Nostalgic Tourism in Macau: The Bidirectional Causal Relationship between Destination Image and Experiential Value

Authors: Aliana Leong, T. C. Huan

Abstract:

Nostalgia-themed tourism products are becoming popular in many countries. This study investigates the role of nostalgia in destination image and experiential value, and their effect on subsequent behavioral intention. The survey used a stratified sampling method to include respondents from all the nearby Asian regions, with the sampling based on inbound tourist data provided by the Statistics and Census Service (DSEC) of the government of Macau. The questionnaire consisted of five sections of 5-point Likert-scale questions: (1) nostalgia, (2) destination image both before and after the experience, (3) expected value, (4) experiential value, and (5) future visit intention. Data were analysed with structural equation modelling. The results indicate that nostalgia plays an important part in forming destination image and experiential value before individuals have a chance to experience the destination. Destination image and experiential value share a bidirectional causal relationship that eventually contributes to future visit intention. The study also found that while experiential value is more effective in generating destination image, the latter contributes more to future visit intention. The research design measures destination image and experiential value both before and after respondents have experienced the destination; the distinction between destination image and expected/experiential value can be examined because of the longitudinal design of the research method, which also allows this study to observe how nostalgia translates into future visit intention.

Keywords: nostalgia, destination image, experiential value, future visit intention

Procedia PDF Downloads 378
14196 Optimization of Cutting Parameters on Delamination Using Taguchi Method during Drilling of GFRP Composites

Authors: Vimanyu Chadha, Ranganath M. Singari

Abstract:

Drilling composite materials is a machining process frequently practiced during assembly in industries such as automotive and aerospace. However, drilling of glass fiber reinforced plastic (GFRP) composites is significantly affected by the damage tendency of these materials under cutting forces such as thrust force and torque. The aim of this paper is to investigate the influence of cutting parameters such as cutting speed and feed rate, and subsequently also the influence of the number of layers, on the delamination produced while drilling a GFRP composite. A plan of experiments based on Taguchi techniques was instituted, considering drilling with prefixed cutting parameters in a hand lay-up GFRP material. The damage induced while drilling the GFRP composites was measured. Moreover, analysis of variance (ANOVA) was performed to minimize the delamination as influenced by the drilling parameters and the number of layers. The optimum drilling factor combination was obtained using the analysis of the signal-to-noise ratio. The conclusions revealed that feed rate was the most influential factor on delamination, and that the best delamination results were obtained for composites with a greater number of layers at lower cutting speeds and feed rates.
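
The signal-to-noise analysis used to pick the optimum factor combination can be sketched as below for a smaller-the-better response such as the delamination factor; the nine-run results are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical delamination measurements for an L9 array (9 runs x 2 replicates)
results = np.array([
    [1.32, 1.35], [1.28, 1.30], [1.25, 1.27],
    [1.30, 1.33], [1.24, 1.26], [1.22, 1.23],
    [1.27, 1.29], [1.21, 1.24], [1.18, 1.20],
])

# Smaller-the-better S/N ratio: -10 * log10(mean of squared responses)
sn = -10 * np.log10((results ** 2).mean(axis=1))
print("S/N per run:", np.round(sn, 2))
print("best run (highest S/N):", int(np.argmax(sn)) + 1)
```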

Keywords: analysis of variance, delamination, design optimization, drilling, glass fiber reinforced plastic composites, Taguchi method

Procedia PDF Downloads 238
14195 A Machine Learning Framework Based on Biometric Measurements for Automatic Fetal Head Anomalies Diagnosis in Ultrasound Images

Authors: Hanene Sahli, Aymen Mouelhi, Marwa Hajji, Amine Ben Slama, Mounir Sayadi, Farhat Fnaiech, Radhwane Rachdi

Abstract:

Fetal abnormality is still a public health problem of concern to both mother and baby. Head defects are among the highest-risk fetal deformities, and fetal head categorization is a sensitive task that needs close attention from neurological experts. In this sense, biometric measurements can be extracted by gynecologists and compared with ground-truth charts to identify normal or abnormal growth. The fetal head biometric measurements, such as biparietal diameter (BPD), occipito-frontal diameter (OFD) and head circumference (HC), need to be monitored, and an expert normally carries out their manual delineation. This work proposes a new approach to compute BPD, OFD and HC automatically, based on morphological characteristics extracted from the head shape. The studied data, selected at the same gestational age (GA) from fetal ultrasound (US) images, are classified into two categories, normal and abnormal; the abnormal subjects include hydrocephalus, microcephaly and dolichocephaly anomalies. Using a support vector machine (SVM) method, this study achieved high classification accuracy for the automated detection of anomalies. The proposed method is promising, especially as it does not need expert intervention.
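
A minimal sketch of such an SVM classifier on head biometrics is given below using scikit-learn. The feature values and labels are hypothetical, and the paper's actual features are morphological characteristics extracted from the head shape rather than the three raw measurements alone.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-fetus features [BPD, OFD, HC] in millimetres, at the same GA
X = np.array([[88, 105, 310], [90, 108, 318], [60, 72, 210],
              [95, 130, 350], [87, 104, 305], [58, 70, 205]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = normal, 1 = abnormal growth

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```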

Keywords: biometric measurements, fetal head malformations, machine learning methods, US images

Procedia PDF Downloads 272
14194 Numerical Investigation of Turbulent Flow Control by Suction and Injection on a Subsonic NACA23012 Airfoil by Proper Orthogonal Decomposition Analysis and Perturbed Reynolds Averaged Navier‐Stokes Equations

Authors: Azam Zare

Abstract:

Separation flow control for performance enhancement over airfoils at high incidence angles has become an increasingly important topic. This work details the characteristics of an efficient feedback control of the turbulent subsonic flow over a NACA23012 airfoil using a forced reduced-order model based on proper orthogonal decomposition (POD)/Galerkin projection and a perturbation method applied to the compressible Reynolds-averaged Navier-Stokes equations. The forced reduced-order model is used in the optimal control of the turbulent separated flow over the NACA23012 airfoil at a Mach number of 0.2, a Reynolds number of 5×10⁶, and a high incidence angle of 24°, using blowing/suction control jets. The Spalart-Allmaras turbulence model is implemented for the high-Reynolds-number calculations. The main shortcoming of POD/Galerkin projection of the flow equations for control purposes is that the blowing/suction control jet velocity does not appear explicitly in the resulting reduced-order model. Combining the perturbation method with POD/Galerkin projection of the flow equations yields a forced reduced-order model that can predict the time-varying influence of the blowing/suction control jet velocity. An optimal control theory based on the forced reduced-order system is used to design a control law for the nonlinear reduced-order model that attempts to minimize the vorticity content in the turbulent flow field over the NACA23012 airfoil. Numerical simulations were performed to help understand the behavior of a controlled suction jet at 12% to 18% chord from the leading edge and of a pair of blowing/suction jets at 15% to 18% and 24% to 30% chord from the leading edge, respectively. Analysis of the streamline profiles indicates that the blowing/suction jets are efficient in removing separation bubbles and increase the lift coefficient by up to 22%, while the perturbation method predicts the flow field in an accurate manner.
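
A minimal sketch of the POD step is given below: the POD modes of a mean-subtracted snapshot matrix are its left singular vectors, and the retained modes with their temporal coefficients are the ingredients of the Galerkin projection. The snapshot data here are random placeholders for the flow fields.

```python
import numpy as np

# Snapshot matrix: each column is the flow field at one time instant
n_points, n_snapshots = 2000, 120
rng = np.random.default_rng(1)
snapshots = rng.standard_normal((n_points, n_snapshots))

# Subtract the temporal mean; POD modes are left singular vectors of the result
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

# Keep enough modes to capture 99% of the fluctuation energy
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99)) + 1
modes = U[:, :r]                            # spatial POD basis
coeffs = modes.T @ (snapshots - mean_flow)  # temporal coefficients
print(f"{r} modes capture 99% of the fluctuation energy")
```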

Keywords: flow control, POD, Galerkin projection, separation

Procedia PDF Downloads 138
14193 A Neural Network Approach to Understanding Turbulent Jet Formations

Authors: Nurul Bin Ibrahim

Abstract:

Advancements in neural networks have offered valuable insights into fluid dynamics, notably in addressing turbulence-related challenges. In this research, we introduce multiple applications of neural network models, namely feed-forward and recurrent neural networks, to explore the relationship between jet formations and stratified turbulence within stochastically excited Boussinesq systems. Using machine learning tools such as TensorFlow and PyTorch, the study has created models that effectively mimic and reveal the underlying features of the complex patterns of jet formation and stratified turbulence. These models do more than help us understand these patterns; they also offer a faster way to solve problems in stochastic systems, improving upon traditional numerical techniques for solving stochastic differential equations, such as the Euler-Maruyama method. In addition, the research includes a thorough comparison with the statistical state dynamics (SSD) approach, a well-established method for studying chaotic systems; this comparison helps evaluate how well neural networks can capture the complex relationship between jet formations and stratified turbulence. The results of this study underscore the potential of neural networks in computational physics and fluid dynamics, opening up new possibilities for more efficient and accurate simulations in these fields.
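
For reference, the Euler-Maruyama scheme mentioned above advances an SDE dX = f(X) dt + g(X) dW one step at a time; a minimal sketch follows, with an Ornstein-Uhlenbeck process standing in as a toy stochastically excited system.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, rng=None):
    """Integrate dX = drift(X) dt + diffusion(X) dW with Euler-Maruyama."""
    rng = rng or np.random.default_rng(0)
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# Toy example: mean-reverting Ornstein-Uhlenbeck path
path = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.3,
                      x0=1.0, t_end=10.0, n_steps=1000)
```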

Keywords: neural networks, machine learning, computational fluid dynamics, stochastic systems, simulation, stratified turbulence

Procedia PDF Downloads 52
14192 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. An ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders often lie adjacent to each other. One challenge is to split the photon segments along a beam so as to accurately partition them and extract only the true representative water height for each individual element. As far as we can establish, there is no automated procedure for making this distinction. Earlier studies have relied on human intervention or river masks; both are unsatisfactory solutions where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called "auto-segmentation". The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs following the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R² value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESAT-2 ATL13 altimetry to rivers than previously reported studies. The findings of our study will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as the water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
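
The abstract does not spell out the auto-segmentation algorithm itself, so the sketch below is only a hypothetical illustration of the two ingredients it implies: splitting a beam's photon returns into per-water-body groups (here by an assumed along-track gap threshold) and computing a mean WSE after one of many possible outlier screens (here a MAD-based one).

```python
import numpy as np

def split_by_gaps(along_track, heights, max_gap=50.0):
    """Split photon returns wherever the along-track spacing exceeds max_gap (m)."""
    order = np.argsort(along_track)
    x, h = along_track[order], heights[order]
    breaks = np.where(np.diff(x) > max_gap)[0] + 1
    return np.split(x, breaks), np.split(h, breaks)

def mean_wse(heights, k=3.0):
    """Mean water surface elevation after a simple MAD-based outlier screen."""
    med = np.median(heights)
    mad = np.median(np.abs(heights - med)) or 1e-9
    keep = np.abs(heights - med) <= k * 1.4826 * mad
    return heights[keep].mean()

x = np.array([0.0, 10.0, 22.0, 500.0, 515.0, 530.0])  # along-track distance (m)
h = np.array([5.1, 5.0, 5.2, 8.4, 8.5, 12.0])         # heights; last is an outlier
for seg_x, seg_h in zip(*split_by_gaps(x, h)):
    print(round(mean_wse(seg_h), 2))                  # one mean WSE per segment
```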

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 45
14191 Finite Volume Method Simulations of GaN Growth Process in MOVPE Reactor

Authors: J. Skibinski, P. Caban, T. Wejrzanowski, K. J. Kurzydlowski

Abstract:

In the present study, numerical simulations of heat and mass transfer during the gallium nitride growth process in the AIX-200/4RF-S metal organic vapor phase epitaxy (MOVPE) reactor are addressed. Existing knowledge about the phenomena occurring in the MOVPE process makes it possible to produce high-quality nitride-based semiconductors; however, the process parameters of MOVPE reactors can vary within certain ranges. The main goal of this study is optimization of the process and improvement of the quality of the obtained crystal. To investigate this subject, a series of computer simulations has been performed: numerical simulations of heat and mass transfer in the GaN epitaxial growth process were carried out to determine the growth rate for various mass flow rates and pressures of the reagents. Since it is impossible to determine experimentally the exact distribution of heat and mass transfer inside the reactor during the process, modeling is the only way to understand the process precisely. The main heat transfer mechanisms during the MOVPE process are convection and radiation. Correlating the modeling results with experiments makes it possible to determine the optimal process parameters for obtaining crystals of the highest quality.

Keywords: Finite Volume Method, semiconductors, epitaxial growth, metalorganic vapor phase epitaxy, gallium nitride

Procedia PDF Downloads 378
14190 Penalization of Transnational Crimes in the Domestic Legal Order: The Case of Poland

Authors: Magda Olesiuk-Okomska

Abstract:

The degree of international interdependence has grown significantly. Poland is a party to nearly 1,000 binding multilateral treaties, including international legal instruments devoted to criminal matters that oblige the state to penalize certain crimes. The paper presents the results of theoretical research conducted as part of doctoral research. The main hypothesis assumed that there is a separate category of crimes whose penalization Poland is obliged to ensure under international legal instruments; that a catalogue of such crimes and of the international legal instruments providing for Poland's international obligations has never been compiled in the domestic doctrine; and thus that there is no mechanism for monitoring the implementation of such obligations. In the course of the research, a definition of transnational crimes was discussed and confronted with the notions of international crimes, treaty crimes, and cross-border crimes. A list of transnational crimes penalized in the Polish Penal Code and in non-code criminal law regulations was compiled; international legal instruments obliging Poland to criminalize and penalize specific conduct were enumerated and catalogued. This made it possible to determine whether Poland's international obligations have been implemented in domestic legislation, and to formulate de lege lata and de lege ferenda postulates. The research methods included, inter alia, a dogmatic-legal method, an analytical method and desk research.

Keywords: international criminal law, transnational crimes, transnational criminal law, treaty crimes

Procedia PDF Downloads 208
14189 Evaluation of Batch Splitting in the Context of Load Scattering

Authors: S. Wesebaum, S. Willeke

Abstract:

Production companies are faced with an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g. at a contract manufacturing company) or is undesirable from a logistics point of view, load scattering affects the production processes. 'Load' characterizes the timing and quantity incidence of production orders (e.g. in work content hours) at workstations in production, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply. For example, a uniform and high level of equipment capacity utilization keeps production costs down; in contrast, strong load scattering at workstations leads to performance loss or disproportionately fluctuating WIP, whereby the logistics objectives are affected negatively. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, outsourcing of operations, and shifting work to other workstations. This adjusts load to capacity supply and thus reduces load scattering. If the adaptation of load to capacity cannot be achieved completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacities normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly offers qualitative statements describing load scattering; quantitative evaluation methods that describe load mathematically are rare. In this article the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of opportunity for normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current quantification approaches. After presenting our mathematical quantification approach, the method of batch splitting is described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. After describing the method, it is explicitly analyzed in the context of the logistic curve theory of Nyhuis, using the stretch factor α1, in order to evaluate the impact of batch splitting on load scattering and on the logistic curves. The conclusion of this article shows how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring work load scattering accurately and to apply an efficient method for adjusting work load to capacity supply. In this way, the achievement of the logistics objectives is improved without causing additional costs.

Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering

Procedia PDF Downloads 382
14188 Fourier Transform and Machine Learning Techniques for Fault Detection and Diagnosis of Induction Motors

Authors: Duc V. Nguyen

Abstract:

Induction motors are widely used in different industry areas and can experience various kinds of faults in stators and rotors. In general, fault detection and diagnosis for induction motors can be supported by measuring quantities such as noise, vibration, and temperature. The installation of mechanical sensors to assess the health condition of a machine is typically only done for expensive or load-critical machines, where the high cost of a continuous monitoring system can be justified. Nevertheless, induced current monitoring can be implemented inexpensively on machines of arbitrary size by using current transformers. In this regard, effective and low-cost fault detection techniques can be implemented, hence reducing the maintenance and downtime costs of motors. This work proposes a method for fault detection and diagnosis of induction motors that combines the classical fast Fourier transform with modern/advanced machine learning techniques. The proposed method is validated on real-world data and achieves a precision of 99.7% for fault detection and 100% for fault classification with minimal expert knowledge requirements. In addition, this approach allows users to optimize/balance risks and maintenance costs to achieve the highest benefit based on their requirements. These are the key requirements of a robust prognostics and health management system.
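
A minimal sketch of the FFT-plus-classifier idea follows: each stator-current window is reduced to the frequencies and amplitudes of its strongest spectral peaks, which then feed a standard classifier. The random-forest choice, window data, and labels are assumptions for illustration; the abstract does not name the specific machine learning model used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_features(current, fs, n_peaks=5):
    """Frequencies and amplitudes of the strongest FFT peaks of one current window."""
    spec = np.abs(np.fft.rfft(current * np.hanning(current.size)))
    freqs = np.fft.rfftfreq(current.size, d=1 / fs)
    top = np.argsort(spec)[-n_peaks:]
    return np.concatenate([freqs[top], spec[top]])

# Hypothetical training set: windowed current signals with health labels
fs = 10_000
rng = np.random.default_rng(0)
windows = rng.standard_normal((40, 4096))
labels = rng.integers(0, 2, size=40)  # 0 = healthy, 1 = faulty
X = np.array([spectral_features(w, fs) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```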

Keywords: fault detection, FFT, induction motor, predictive maintenance

Procedia PDF Downloads 144
14187 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban advances and the growing need for developing infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of the Bangkok Metro as a case study. For this, a numerical probability model has been developed based on the finite difference method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, keeping it within the allowable range, and helps improve economic conditions while maintaining mechanical efficiency. Given the lack of efficient design in most deep excavations, an attempt was made, by considering geometrical and geotechnical variability, to develop an optimal practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without carrying out a full probability analysis.
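
The Monte Carlo part of such an analysis reduces to counting the fraction of sampled scenarios in which demand exceeds capacity. In the study each sample would come from a finite difference run; the sketch below substitutes cheap closed-form stand-ins with hypothetical displacement distributions to show the estimator itself.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical variability: computed wall displacement vs. allowable limit (mm)
demand = rng.lognormal(mean=np.log(35.0), sigma=0.25, size=n)
capacity = rng.normal(loc=50.0, scale=5.0, size=n)

# Failure probability = fraction of samples where demand exceeds capacity
p_f = np.mean(demand > capacity)
print(f"estimated failure probability: {p_f:.4f}")
```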

Keywords: numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method (FDM)

Procedia PDF Downloads 109
14186 Determination of Pesticides Residues in Tissue of Two Freshwater Fish Species by Modified QuEChERS Method

Authors: Iwona Cieślik, Władysław Migdał, Kinga Topolska, Ewa Cieślik

Abstract:

The consumption of fish is recommended as a means of preventing serious diseases, especially cardiovascular problems. Fish is known to be a valuable source of protein (rich in essential amino acids), unsaturated fatty acids, fat-soluble vitamins, and macro- and microelements. However, it can also contain several contaminants (e.g. pesticides, heavy metals) that may pose considerable risks for humans. Among these, pesticides are of special concern: their widespread use has resulted in the contamination of environmental compartments, including water, and their occurrence in the environment is a serious problem due to their potential toxicity. Therefore, systematic monitoring is needed. The aim of the study was to determine organochlorine and organophosphate pesticide residues in the muscle tissue of pike (Esox lucius, L.) and rainbow trout (Oncorhynchus mykiss, Walbaum) by a modified QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, using gas chromatography quadrupole mass spectrometry (GC/Q-MS) operating in selected-ion monitoring (SIM) mode. The analysis of α-HCH, β-HCH, lindane, diazinon, disulfoton, δ-HCH, methyl parathion, heptachlor, malathion, aldrin, parathion, heptachlor epoxide, γ-chlordane, endosulfan, α-chlordane, o,p'-DDE, dieldrin, endrin, 4,4'-DDD, ethion, endrin aldehyde, endosulfan sulfate, 4,4'-DDT, and methoxychlor was performed on samples collected in the Carp Valley (Malopolska region, Poland). The age of the pike (n=6) was 3 years and its weight 2-3 kg, while the age of the rainbow trout (n=6) was 0.5 year and its weight 0.5-1.0 kg. Detectable pesticide residues (HCH isomers, endosulfan isomers, DDT and its metabolites, as well as methoxychlor) were present in the fish samples; however, all of these compounds were below the limit of quantification (LOQ), and the other examined pesticide residues were below the limit of detection (LOD). The levels of contamination were therefore - in all cases - below the default maximum residue levels (MRLs) established by Regulation (EC) No 396/2005 of the European Parliament and of the Council. Monitoring of pesticide residue content in fish is required to minimize potential adverse effects on the environment and human exposure to these contaminants.

Keywords: contaminants, fish, pesticides residues, QuEChERS method

Procedia PDF Downloads 195
14185 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the phrase "hydrogen safety" was mostly used in the context of NPP safety. With the rise of interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial production of hydrogen is planned to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture in the presence of many pipelines, reactor vessels, and fitting frames. Even the ignition of 2,100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures that are much lower than those of the Chapman-Jouguet condition, not exceeding 80 m/s and 6 kPa, respectively. However, space blockage, significant changes of channel diameter along the flame propagation path, and the presence of gas suspension lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to other facilities can only be conventional and qualitative. Yet conducting experimental deflagration and detonation investigations for each specific industrial facility project, in order to determine safe placement of infrastructure units, is not feasible because of its high cost and hazard, while numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising. The basis of this method is a modification, recently developed by the authors, of the Kuropatenko method for calculating shock waves, which allows its use in Eulerian coordinates. The current work contains the results of this development. In addition, numerical simulation results are compared with experimental series on flame propagation in shock tubes with orifice plates.

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 70
14184 Seismic Response of Structure Using a Three Degree of Freedom Shake Table

Authors: Ketan N. Bajad, Manisha V. Waghmare

Abstract:

Earthquakes are the biggest threat to civil engineering structures; every year they cost billions of dollars and thousands of deaths around the world. There are various experimental techniques, such as the pseudo-dynamic test (a nonlinear structural dynamics technique), the real-time pseudo-dynamic test, and the shaking table test, that can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components mounted on it; it simulates a seismic event using existing seismic data, reproducing earthquake inputs nearly exactly. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. The various types of shake table are the vertical shake table, the horizontal shake table, the servo-hydraulic shake table, and the servo-electric shake table. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three degree-of-freedom (i.e., X, Y and Z directions) shake table. A three-DOF shaking table is a useful experimental apparatus, as it imitates a desired real-time acceleration signal for evaluating and assessing the seismic performance of structures. This study proceeds with the design and erection of a 3-DOF shake table by a trial-and-error method; the table is designed for a capacity of up to 981 N. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer mounted on the model records the data, and the experimental results are validated against results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but not for direct numerical conclusions (such as stiffness or deflection), since many uncertainties are involved in scaling a small-scale model. The model shows the modal forms and gives rough deflection values. The experimental results demonstrate the shake table to be the most effective of the available methods for seismic assessment of structures.

Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed

Procedia PDF Downloads 119
14183 Life Cycle Assessment of a Parabolic Solar Cooker

Authors: Bastien Sanglard, Lou Magnat, Ligia Barna, Julian Carrey, Sébastien Lachaize

Abstract:

Cooking is a primary human need, met around the globe by techniques based on different sources of energy: electricity, solid fuel (wood, coal, etc.), fuel oil, or liquefied petroleum gas. However, all of them lead to direct or indirect greenhouse gas emissions and sometimes to health damage in the household. Concentrated solar power therefore represents a great option for reducing these damages, thanks to a cleaner use phase. Nevertheless, the construction phase of a solar cooker still requires primary energy and materials, which leads to environmental impacts. The aims of this work are to analyse the environmental impacts of a commercial aluminium parabola and to compare it with other means of cooking, taking the boiling of 2 litres of water three times a day for 40 years as the functional unit. Life cycle assessment was performed using the Umberto software and the ecoinvent database. Calculations were carried out over more than 13 criteria using two methods: the Intergovernmental Panel on Climate Change (IPCC) method and the ReCiPe method. For the reflector itself, aluminium of different provenances was compared, as well as the use of recycled aluminium; for the structure, aluminium was compared with iron (primary and recycled) and wood. The results show that the climate impact of the studied parabola is 0.0353 kgCO2eq/kWh when built with Chinese aluminium and can be reduced by a factor of 4 using aluminium from Canada. The assessment also showed that using 32% recycled aluminium would reduce the impact by factors of 1.33 and 1.43 compared with primary Canadian and primary Chinese aluminium, respectively, while the exclusive use of recycled aluminium lowers the impact by a factor of 17. Besides, the use of iron (recycled or primary) or wood for the structure supporting the reflector significantly lowers the impact. The impact categories of the ReCiPe method show that the parabola made from Chinese aluminium has the heaviest impact (except for metal resource depletion) compared with aluminium from Canada, recycled aluminium or iron. The impact of solar cooking was then compared with gas and induction cooking; the gas stove model was a cast-iron tripod supporting the cooking pot, and the induction model was likewise a single-spot plate. The results show that the parabolic solar cooker has the lowest environmental impact over the 13 criteria of the ReCiPe method and over the global warming potential compared with the two other technologies. The climate impact of gas cooking is 0.628 kgCO2eq/kWh when used with natural gas and 0.723 kgCO2eq/kWh when used with bottled gas, the main part of the emissions coming in each case from gas burning. Induction cooking has a global warming potential of 0.12 kgCO2eq/kWh with the electricity mix of France, 96.3% of the impact being due to electricity production. The electricity mix is therefore a key factor for this impact: with the electricity mixes of Germany and Poland, the impacts are 0.81 kgCO2eq/kWh and 1.39 kgCO2eq/kWh, respectively. The parabolic solar cooker thus has a real environmental advantage over both the gas stove and the induction plate.

Keywords: life cycle assessement, solar concentration, cooking, sustainability

Procedia PDF Downloads 158
14182 Space Vector Pulse Width Modulation Based Design and Simulation of a Three-Phase Voltage Source Converter Systems

Authors: Farhan Beg

Abstract:

A space-vector-based pulse width modulation (SVPWM) control technique for the three-phase PWM converter is proposed in this paper. The proposed control scheme is based on a synchronous reference frame model. High performance and efficiency are obtained with regard to the DC bus voltage and the power factor of the PWM rectifier, thus leading to low losses. MATLAB/Simulink is used as the simulation platform, and a Simulink model is presented in the paper. The results show that the proposed model demonstrates better performance and properties than the traditional SPWM method and improves the dynamic performance of the closed loop drastically. In the sinusoidal pulse width modulation used for comparison, the sine signal is the reference waveform and the triangle waveform is the carrier: when the sine signal is larger than the triangle signal, the output pulse goes high, and when the triangle signal is higher than the sine signal, the pulse goes low. The SPWM output is varied by changing the modulation index and the frequency used in the system so as to produce a larger pulse width; when a larger pulse width is produced, the output voltage has lower harmonic content and the resolution increases.
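
The sine-triangle comparison described above can be sketched directly: in the snippet below, the gate signal is high wherever the sinusoidal reference exceeds the triangular carrier, and raising the modulation index m widens the pulses. All frequencies are assumed values for illustration.

```python
import numpy as np
from scipy import signal

f_ref, f_carrier, fs = 50, 2000, 200_000  # fundamental, carrier, sampling (Hz)
m = 0.8                                   # modulation index
t = np.arange(0, 0.04, 1 / fs)

reference = m * np.sin(2 * np.pi * f_ref * t)              # sine reference
carrier = signal.sawtooth(2 * np.pi * f_carrier * t, 0.5)  # triangle carrier

# Gate is high while the reference exceeds the carrier, low otherwise
gate = (reference > carrier).astype(float)
print(f"average duty cycle: {gate.mean():.2f}")
```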

Keywords: power factor, SVPWM, PWM rectifier, SPWM

Procedia PDF Downloads 317
14181 A Review on New Additives in Deep Soil Mixing Method

Authors: Meysam Mousakhani, Reza Ziaie Moayed

Abstract:

Considering population growth and the needs of society, the improvement of problematic soils and the application of different improvement methods have received increasing attention. One of these methods is deep soil mixing, which has been developed over the past decade, especially in soft soils, owing to its economic efficiency, simple implementation, and other benefits. The use of cement is criticized for its cost and damaging environmental effects, and these factors lead to the use of other additives along with cement in deep soil mixing. Additives used today include fly ash, blast-furnace slag, glass powder, and potassium hydroxide. The present study provides a literature review on the application of different additives in deep soil mixing so that the best additives can be identified from strength, economic, environmental and other perspectives. The results show that replacing about 40 to 50% of the cement with fly ash and slag achieves not only economic and environmental benefits but also a long-term strength comparable to that of cement. The use of glass powder, especially at a 3% mixing ratio, gives desirable strength. In addition to its other benefits, potassium hydroxide can also be transported over longer distances, enabling wider soil improvement. Finally, this paper suggests further studies using other additives, such as nanomaterials and zeolite, with different ratios, in different conditions and soils (silty sand, clayey sand, carbonate sand, sandy clay, etc.) in the deep mixing method.

Keywords: deep soil mix, soil stabilization, fly ash, ground improvement

Procedia PDF Downloads 123
14180 Effect of Red Cabbage Antioxidant Extracts on Lipid Oxidation of Fresh Tilapia

Authors: Ayse Demirbas, Bruce A. Welt, Yavuz Yagiz

Abstract:

Oxidation of the polyunsaturated fatty acids (PUFA) eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) in fish causes loss of product quality. Oxidative rancidity causes loss of nutritional value and undesirable color changes. Therefore, powerful antioxidant extracts may provide a relatively low-cost and natural means to reduce oxidation, resulting in a longer, higher-quality and higher-value shelf life of foods. In this study, we measured the effects of red cabbage antioxidant extract on lipid oxidation in fresh tilapia fillets using the thiobarbituric acid reactive substances (TBARS) assay, peroxide value (PV) and color assessment analysis. Extraction from red cabbage was performed using an efficient microwave method. Fresh tilapia fillets were dipped in or sprayed with solutions containing different concentrations of the extract. Samples were stored for up to 9 days at 4°C and analyzed every other day for color and lipid oxidation. The results showed that treated samples had lower oxidation than controls, and the lipid peroxide values of treated samples showed benefits through day 7. Only slight differences were observed between the spraying and dipping methods. This work shows that red cabbage antioxidant extracts may represent an inexpensive, all-natural method for reducing oxidative spoilage of fresh fish.

Keywords: antioxidant, shelf life, fish, red cabbage, lipid oxidation

Procedia PDF Downloads 308
14179 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology

Authors: Anjian Chen, Joseph C. Chen

Abstract:

This paper studies a case in which the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an FDM additive manufacturing process; the baseline Cp is 0.274 and Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the additive manufacturing process and the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. The Taguchi L9 orthogonal array is used to organize the effects of the parameters (four controllable parameters and one non-controllable parameter) on the FDM additive manufacturing process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]; the non-controllable parameter is the environmental temperature [°C]. After optimization of the parameters, a confirmation print was produced to prove that the results reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirmed that the Taguchi methodology is sufficient to improve the surface roughness of the FDM additive manufacturing process.
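
For reference, the capability indices reported above follow Cp = (USL - LSL) / (6σ) and Cpk = min(USL - μ, μ - LSL) / (3σ); a minimal sketch is given below, with hypothetical roughness readings and specification limits.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Cp and Cpk from measured values and lower/upper specification limits."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical Ra readings (um) from confirmation prints, with assumed limits
ra = np.array([6.1, 6.4, 5.9, 6.2, 6.0, 6.3, 6.1, 6.2])
cp, cpk = process_capability(ra, lsl=4.0, usl=8.0)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")
```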

Keywords: additive manufacturing, fused deposition modeling, surface roughness, six-sigma, Taguchi method, 3D printing

Procedia PDF Downloads 368
14178 Exploring an Exome Target Capture Method for Cross-Species Population Genetic Studies

Authors: Benjamin A. Ha, Marco Morselli, Xinhui Paige Zhang, Elizabeth A. C. Heath-Heckman, Jonathan B. Puritz, David K. Jacobs

Abstract:

Next-generation sequencing has enhanced the ability to acquire massive amounts of sequence data to address classic population genetic questions for non-model organisms. Targeted approaches allow cost-effective or more precise analyses of relevant sequences, although many such techniques require a known genome, and purchasing probes from a company can be costly. This is challenging for non-model organisms with no published genome and can be expensive for large population genetic studies. Expressed exome capture sequencing (EecSeq) synthesizes probes in the lab from expressed mRNA, which is used to capture and sequence the coding regions of genomic DNA from a pooled suite of samples. A normalization step produces probes that recover transcripts across a wide range of expression levels. This approach offers low-cost recovery of a broad range of genes in the genome. This research project expands on EecSeq to investigate whether mRNA from one taxon may be used to capture relevant sequences from a series of increasingly less closely related taxa. For this purpose, we propose to use the endangered Northern Tidewater goby, Eucyclogobius newberryi, a non-model organism that inhabits California coastal lagoons. mRNA will be extracted from E. newberryi to create probes and capture exomes from eight other taxa, including the more at-risk Southern Tidewater goby, E. kristinae, and more divergent species. Captured exomes will be sequenced, analyzed bioinformatically and phylogenetically, and compared with previously generated phylogenies across this group of gobies. This will provide an assessment of the utility of the technique for cross-species studies and for analyzing low genetic variation within species, as is the case for E. kristinae. This method has potential applications for providing economical ways to expand population genetic and evolutionary biology studies of non-model organisms.

Keywords: coastal lagoons, endangered species, non-model organism, target capture method

Procedia PDF Downloads 169