Search results for: loci method

14281 A Study on Improvement of the Electromagnetic Vibration of a Polygon Mirror Scanner Motor

Authors: Yongmin You

Abstract:

Electric machines for office automation devices such as printers and scanners are required to have low noise and vibration. Much research on reducing the noise and vibration of polygon mirror scanner motors has already been carried out. The noise and vibration of a polygon mirror scanner motor can be classified as aerodynamic, structural, or electromagnetic. Electromagnetic noise and vibration can be caused by high cogging torque and a non-sinusoidal back EMF. To improve the cogging torque and back EMF characteristics, we apply an unequal air-gap. A two-dimensional finite element method is used to analyze the characteristics of the polygon mirror scanner motor. To minimize the cogging torque, Kriging based on Latin hypercube sampling (LHS) is utilized. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 23.4% while maintaining the back EMF and average torque. To verify the optimal design, experiments were performed, and the vibration of the motors was measured at the rated speed of 23,600 rpm. The radial and axial vibration acceleration of the optimal model decreased by more than a factor of seven and a factor of three, respectively. These results show that the shape-optimized unequal air-gap polygon mirror scanner motor improves the torque ripple and the electromagnetic vibration characteristics.
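
As an illustration of the sampling step mentioned above, the snippet below is a minimal Latin hypercube sampling sketch in plain NumPy; the two design variables (a minimum air-gap length and an asymmetry ratio) and their bounds are hypothetical placeholders, not the paper's actual optimization variables.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Generate a Latin hypercube design within the given [low, high] bounds."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)      # shape (n_vars, 2)
    n_vars = bounds.shape[0]
    # one random point in each of n_samples equal-probability strata, per variable
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):                       # shuffle strata independently per variable
        rng.shuffle(u[:, j])
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Hypothetical design variables: minimum air-gap length (mm) and air-gap asymmetry ratio
samples = latin_hypercube(20, bounds=[(0.3, 0.7), (1.0, 1.6)], rng=42)
print(samples[:3])
```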

Keywords: polygon mirror scanner motor, optimal design, finite element method, vibration

Procedia PDF Downloads 327
14280 Action Research of Local Resident Empowerment in Prambanan Cultural Heritage Area in Yogyakarta

Authors: Destha Titi Raharjana

Abstract:

The findings of this research result from three action research studies conducted in three villages, namely Bokoharjo, Sambirejo, and Tirtomartani. These villages are close to Prambanan, a well-known cultural heritage site located in Sleman Regency, Indonesia. The action research was conducted using a participative method through observation, interviews, and focus group discussions with local residents as the subjects. This research aims to (a) identify the potencies, obstacles, and opportunities in the development process, so as to give more encouragement, involvement, and empowerment to local residents in maintaining the cultural heritage area, (b) present participatory empowerment programs adjusted to the needs of local residents and human resources, and (c) identify potential stakeholders who can support the empowerment programs. Through the action research method, this study presents (a) a mapping of potencies, difficulties, and opportunities in the development process in each village, and (b) the empowerment program planning needed by local residents as a follow-up to this action research. Moreover, this research also identifies potential stakeholders who are able to carry out an empowerment program follow-up. It is expected that, at the end of the programs, local residents will be able to maintain Prambanan, one of the cultural heritage sites that needs to be protected, in a more sustainable way.

Keywords: action research, local resident, empowerment, cultural heritage area, Prambanan, Sleman, Indonesia

Procedia PDF Downloads 235
14279 The Effects of Addition of Chloride Ions on the Properties of ZnO Nanostructures Grown by Electrochemical Deposition

Authors: L. Mentar, O. Baka, A. Azizi

Abstract:

Zinc oxide, a wide band gap semiconductor material, especially in nanostructured form, has potential applications in large-area electronics, sensors, photovoltaic cells, photonics, optical devices and optoelectronics due to its unique electrical, optical and surface properties. The feasibility of ZnO for these applications stems from the successful synthesis of diverse ZnO nanostructures, including nanorings, nanobows, nanohelixes, nanosprings, nanobelts, nanotubes, nanopropellers, nanodisks, and nanocombs, by different methods. Among the various synthesis methods, electrochemical deposition is a simple and inexpensive solution-based route to semiconductor nanostructures. In this study, electrodeposition was used to grow zinc oxide (ZnO) nanostructures from a chloride bath onto fluorine-doped tin oxide (FTO)-coated conducting glass substrates used as the TCO. We present a systematic study of the effect of chloride anion concentration on the properties of ZnO. The influence of the KCl concentration on the electrodeposition process and on the morphological, structural and optical properties of the ZnO nanostructures was examined using conventional electrochemical measurements (cyclic voltammetry and Mott-Schottky), scanning electron microscopy (SEM), and X-ray diffraction (XRD). The deposition potentials of ZnO were determined by cyclic voltammetry. From the Mott-Schottky measurements, the flat-band potential and the donor density of the ZnO nanostructures were determined. SEM images show that the size and morphology of the nanostructures depend strongly on the KCl concentration; the morphology is governed by the combined action of [Zn(NO3)2] and [Cl-]. Well-faceted hexagonal grains are observed for the nanostructures deposited at 0.1 M KCl. XRD studies revealed that all deposited films were polycrystalline with the wurtzite phase and preferentially oriented along the (002) plane, i.e., with the c-axis normal to the substrate surface, at all KCl concentrations. UV-Visible spectra showed a significant optical transmission (~80%), which decreased at low Cl- concentrations. The energy band gap values were estimated to be between 3.52 and 3.80 eV.
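
For context, the flat-band potential and donor density mentioned above are usually read off the Mott-Schottky relation 1/C^2 = (2 / (e εr ε0 N_D)) (V - V_fb - kT/e) for capacitance per unit area. The abstract does not give its numerical procedure, so the sketch below is only a minimal illustration with invented capacitance-potential data and an assumed ZnO permittivity.

```python
import numpy as np

# Invented Mott-Schottky data for an n-type film (potential in V, capacitance per area in F/cm^2)
V = np.array([-0.2, -0.1, 0.0, 0.1, 0.2, 0.3])
C = np.array([8.0e-7, 6.5e-7, 5.7e-7, 5.1e-7, 4.7e-7, 4.4e-7])

e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-14     # vacuum permittivity, F/cm
eps_r = 8.5          # assumed relative permittivity of ZnO

slope, intercept = np.polyfit(V, 1.0 / C**2, 1)   # linear fit of 1/C^2 vs V
V_fb = -intercept / slope                          # flat-band potential (kT/e term neglected)
N_D = 2.0 / (e * eps_r * eps0 * slope)             # donor density, cm^-3

print(f"V_fb ~ {V_fb:.2f} V, N_D ~ {N_D:.2e} cm^-3")
```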

Keywords: electrodeposition, ZnO, chloride ions, Mott-Schottky, SEM, XRD

Procedia PDF Downloads 278
14278 Development of Advanced Linear Calibration Technique for Air Flow Sensing by Using CTA-Based Hot Wire Anemometry

Authors: Ming-Jong Tsai, T. M. Wu, R. C. Chu

Abstract:

The purpose of this study is to develop an advanced linear calibration technique for air flow sensing with CTA-based hot-wire anemometry. The system contains a host PC with a human-machine interface, a wind tunnel, a wind speed controller, an automatic data acquisition module, and a nonlinear calibration model. To reduce the fitting error obtained with a single fitting polynomial, this study proposes a Multiple three-order Polynomial Fitting Method (MPFM) for fitting the nonlinear output of a CTA-based hot-wire anemometer. The anemometer with built-in fitting parameters is installed in the wind tunnel, and the wind speed is controlled by the PC-based controller. The hot-wire anemometer's thermistor resistance change is converted into a voltage signal or temperature difference and sent to the PC through a DAQ card. After the original signal has been measured, the coefficients of the multiple polynomials are calculated automatically and written into the micro-processor of the hot-wire anemometer. Finally, the corrected hot-wire anemometer is verified for linearity, repeatability and error percentage, and the system outputs quality control reports.
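
The abstract does not spell out how the multiple third-order polynomials are pieced together; the sketch below is one plausible reading, assuming the voltage range is split into a few sub-ranges and a separate cubic is fitted to the voltage-velocity samples of each. The calibration data and the King's-law-like response used to generate them are invented for illustration.

```python
import numpy as np

def fit_mpfm(voltage, velocity, n_segments=3, order=3):
    """Fit one cubic per voltage sub-range; return segment edges and coefficients."""
    edges = np.linspace(voltage.min(), voltage.max(), n_segments + 1)
    coeffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (voltage >= lo) & (voltage <= hi)
        coeffs.append(np.polyfit(voltage[mask], velocity[mask], order))
    return edges, coeffs

def mpfm_eval(v, edges, coeffs):
    """Evaluate the piecewise cubic calibration at a bridge voltage v."""
    idx = np.clip(np.searchsorted(edges, v) - 1, 0, len(coeffs) - 1)
    return np.polyval(coeffs[idx], v)

# Hypothetical calibration points (CTA bridge voltage in V, reference velocity in m/s)
voltage = np.linspace(1.2, 2.4, 25)
velocity = 4.0 * (voltage**2 - 1.4)**2 + 0.5          # stand-in for a nonlinear CTA response
edges, coeffs = fit_mpfm(voltage, velocity)
print(mpfm_eval(1.8, edges, coeffs))
```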

Keywords: flow rate sensing, hot wire, constant temperature anemometry (CTA), linear calibration, multiple three-order polynomial fitting method (MPFM), temperature compensation

Procedia PDF Downloads 402
14277 Experimental Analysis of Structure Borne Noise in an Enclosure

Authors: Waziralilah N. Fathiah, A. Aminudin, U. Alyaa Hashim, T. Vikneshvaran, D. Shakirah Shukor

Abstract:

This paper presents an experimental analysis of structure-borne noise in a rectangular enclosure prototype made by joining sheet aluminium and plywood. The study is significant because the annoyance caused by structure-borne noise is often not recognized. Modal analysis is carried out to determine the structure's behaviour and to identify the characteristics of the enclosure in the frequency domain from 0 Hz to 200 Hz. A number of modes are identified and the characteristics of the mode shapes are categorized. A modal experiment is used to diagnose the structural behaviour, while a microphone is used to capture the sound. Spectral testing is performed on the enclosure: it is excited using a shaker and, as it vibrates, the vibration and noise responses sensed by a tri-axial accelerometer and a microphone are recorded simultaneously. The measurement is performed at each node of a grid laid over the surface of the enclosure. The experimentally identified modes are validated by simulation performed with MSC Nastran. To reduce the structure-borne noise, a mitigation method is applied in which stiffener plates are placed perpendicularly on the sheet aluminium. Using this method, a reduction in structure-borne noise is successfully achieved at the end of the study.

Keywords: enclosure, modal analysis, sound analysis, structure borne-noise

Procedia PDF Downloads 417
14276 Increasing Health Education Tools Satisfaction in Nursing Staffs

Authors: Lu Yu Jyun

Abstract:

Background: Health education is an important part of nursing work, aiming to strengthen the self-care ability of patients and their family members. Our department educates through three methods: verbal education, flyers and demonstration videos. The satisfaction rate with the health education tools was 54.3% among nursing staff, mainly because there was no storage area for flyers, which caused an extra workload when retrieving them. The satisfaction rate with health education among patients and families was 70.7%. We aimed to improve this situation between 13 April and 6 June 2021. Method: We introduced the ECRS (eliminate, combine, rearrange, simplify) method to remove repetitive and redundant actions and redesigned the health education tool workflow to improve the efficiency of the nursing staff and further enhance care quality and job satisfaction. Result: The satisfaction rate with the health education tools among nursing staff rose from 54.3% to 92.5%. The satisfaction rate with health education among patients and families rose from 70.7% to 90.2%. Conclusion: The time needed to retrieve health education tools dropped from 10 minutes to 3 minutes, which significantly reduced the nursing staff's workload. An estimated 1,213 sheets of paper are saved per month, about 14,556 per year, which also benefits the environment. Because of its high efficiency, the health education map has been implemented in other nursing departments since October, making the health education tools more user-friendly.

Keywords: health, education tools, satisfaction, nursing staff

Procedia PDF Downloads 132
14275 Dynamic Distribution Calibration for Improved Few-Shot Image Classification

Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran

Abstract:

Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled data for training remain a challenge. Limited samples often lead to overfitting due to biased sample distribution. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base and new class samples undergo normalization to mitigate disparate feature magnitudes. A pre-trained model then extracts feature vectors from both classes. The method dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold value approach for new class samples. Given the propensity of similar classes to share feature distributions like mean and variance, this research assumes a Gaussian distribution for feature vectors. Subsequently, distributional features of new class samples are calibrated using a corrected hyperparameter, derived from the distribution features of both adjacent and distant base classes. This calibration augments the new class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on miniImagenet and CUB datasets.
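
As a rough illustration of the calibration idea described above, the sketch below assumes Gaussian class statistics: the mean and variance of a new-class support sample are corrected using the statistics of nearby base classes, and extra features are drawn from the calibrated distribution to augment the support set. The array shapes, the nearest-neighbour selection and the hyperparameters k and alpha are a simplified reading, not the paper's exact procedure.

```python
import numpy as np

def calibrate_and_sample(support_feat, base_means, base_vars, k=3, alpha=0.2, n_aug=100, rng=0):
    """Calibrate a new-class Gaussian from the k nearest base classes and sample from it."""
    rng = np.random.default_rng(rng)
    # distance from the support feature to every base-class mean
    d = np.linalg.norm(base_means - support_feat, axis=1)
    nearest = np.argsort(d)[:k]
    # calibrated statistics: blend the support feature with nearby base-class statistics
    mu = (base_means[nearest].sum(0) + support_feat) / (k + 1)
    var = base_vars[nearest].mean(0) + alpha           # alpha slightly inflates the variance
    return rng.normal(mu, np.sqrt(var), size=(n_aug, support_feat.size))

# Hypothetical 64-dim features: 20 base classes, one 1-shot support example of a new class
base_means = np.random.rand(20, 64)
base_vars = np.random.rand(20, 64) * 0.1
support = np.random.rand(64)
augmented = calibrate_and_sample(support, base_means, base_vars)
print(augmented.shape)   # (100, 64)
```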

Keywords: deep learning, computer vision, image classification, few-shot learning, threshold

Procedia PDF Downloads 48
14274 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography

Authors: C. Yin, B. Zhang, Y. Liu, J. Cai

Abstract:

Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In such areas, topography has serious effects on AEM system responses; however, few studies of topographic effects on airborne EM systems have been reported so far. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling of both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Since the AEM transmitter and receiver are both located in the air, the scattered-field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation and obtain the final FE equations, whose solution yields the frequency-domain AEM responses. To accelerate the calculation, the free-space response of the source is used as the primary field, and the PARDISO direct solver is used to handle the problem with multiple transmitting sources. After the frequency-domain AEM responses are calculated, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of the topographic effect on airborne EM systems, both the frequency- and time-domain AEM responses are simulated for three model groups: 1) a flat half-space model that has a semi-analytical solution for the EM response; 2) a valley or hill earth model; 3) a valley or hill earth with an anomalous body embedded. Numerical experiments show that AEM responses change sharply close to the nodal points of the topography, so special attention needs to be paid to topographic effects when interpreting AEM survey data over rugged terrain. Moreover, the profile of the AEM responses mirrors the topographic earth surface. In contrast to the topographic effect, which mainly occurs at the high-frequency end and in the early time channels, the EM responses of underground conductors mainly occur at low frequencies and in later time channels. For the signal of the same time channel, the dB/dt field reflects the change in conductivity better than the B-field. This research will serve airborne EM in the identification and correction of topographic effects.
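
The free-space source response used above as the primary field has a simple closed form when the transmitter loop is approximated by a vertical magnetic dipole; the sketch below evaluates that standard dipole field at a receiver offset. The dipole approximation, the moment and the geometry are purely illustrative, not the paper's system configuration.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def dipole_b_field(moment, r_vec):
    """Free-space B field (T) of a vertical magnetic dipole m*z_hat at offset r_vec (m)."""
    m = np.array([0.0, 0.0, moment])
    r = np.linalg.norm(r_vec)
    r_hat = np.asarray(r_vec, dtype=float) / r
    # B = mu0/(4 pi r^3) * (3 (m . r_hat) r_hat - m)
    return MU0 / (4 * np.pi * r**3) * (3 * np.dot(m, r_hat) * r_hat - m)

# Illustrative geometry: dipole moment 1e4 A*m^2, receiver 10 m behind and 2 m below the source
print(dipole_b_field(1.0e4, np.array([10.0, 0.0, -2.0])))
```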

Keywords: 3D, Airborne EM, forward modeling, topographic effect

Procedia PDF Downloads 299
14273 Numerical Simulation of Three-Dimensional Cavitating Turbulent Flow in Francis Turbines with ANSYS

Authors: Raza Abdulla Saeed

Abstract:

In this study, the three-dimensional cavitating turbulent flow in a complete Francis turbine is simulated using a mixture model for cavity/liquid two-phase flow. The numerical analysis is carried out with ANSYS CFX release 12, and the standard k-ε turbulence model is adopted. The computational fluid domain consists of the spiral casing, stay vanes, guide vanes, runner and draft tube, and is discretized with a three-dimensional unstructured tetrahedral mesh. The finite volume method (FVM) is used to solve the governing equations of the mixture model. Results for cavitation on the runner blades under three different boundary conditions are presented and discussed. The numerical results show that the method successfully simulates the cavitating two-phase turbulent flow through a Francis turbine and clearly predicts cavitation in the form of water vapor formation inside the turbine. A comparison of the numerical predictions with a real runner shows that the region of higher vapor volume fraction obtained in the simulation is consistent with the region of cavitation damage on the runner.

Keywords: computational fluid dynamics, hydraulic francis turbine, numerical simulation, two-phase mixture cavitation model

Procedia PDF Downloads 540
14272 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

Compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of some popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations and underestimated by others. Sensitivity analyses of soil parameters including water content, liquid limit and void ratio were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. The ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters cannot provide better predictions than equations with a single soil parameter; in other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations developed with a power regression technique are proposed that provide more accurate predictions than existing equations.
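
The abstract names a power regression but not its exact form; a common choice, assumed here, is Cc = a * e0^b fitted by linear least squares in log space. The void-ratio/compression-index pairs below are made up for illustration only.

```python
import numpy as np

# Hypothetical consolidation-test data: initial void ratio e0 and measured compression index Cc
e0 = np.array([0.65, 0.80, 0.95, 1.10, 1.30, 1.55, 1.80])
Cc = np.array([0.14, 0.19, 0.25, 0.31, 0.40, 0.52, 0.63])

# Power model Cc = a * e0**b, fitted as log(Cc) = log(a) + b*log(e0)
b, log_a = np.polyfit(np.log(e0), np.log(Cc), 1)
a = np.exp(log_a)
pred = a * e0**b

rmse = np.sqrt(np.mean((pred - Cc) ** 2))
print(f"Cc ~ {a:.3f} * e0^{b:.3f},  RMSE = {rmse:.4f}")
```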

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 147
14271 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible; stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2^n smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm that we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles that of the longest edge bisection algorithm, there are several notable differences as well.
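
A minimal sketch of the longest edge bisection step described above, applied recursively to a triangle given by its vertex coordinates (the starting triangle is illustrative). The largest angle variant studied in the paper would simply bisect the edge opposite the largest angle instead.

```python
import numpy as np

def bisect_longest_edge(tri):
    """Split a triangle (3x2 array of vertices) across the midpoint of its longest edge."""
    # edge i is the edge opposite vertex i
    lengths = [np.linalg.norm(tri[(i + 1) % 3] - tri[(i + 2) % 3]) for i in range(3)]
    i = int(np.argmax(lengths))                      # vertex opposite the longest edge
    a, b = tri[(i + 1) % 3], tri[(i + 2) % 3]
    mid = (a + b) / 2.0
    return [np.array([tri[i], a, mid]), np.array([tri[i], mid, b])]

def refine(tri, steps):
    """Apply the bisection recursively; after n steps a triangle splits into 2**n triangles."""
    tris = [tri]
    for _ in range(steps):
        tris = [child for t in tris for child in bisect_longest_edge(t)]
    return tris

initial = np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 0.8]])
print(len(refine(initial, 4)))   # 16 triangles
```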

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 378
14270 Traditional Drawing, BIM and Erudite Design Process

Authors: Maryam Kalkatechi

Abstract:

Nowadays, parametric design, scientific analysis, and digital fabrication are dominant, and many architectural practices increasingly seek to incorporate advanced digital software and fabrication in their projects. The dissertation research behind this paper proposes an erudite design process that combines digital and practical aspects within a strong methodological frame. The digital aspects are the progressive advancements in algorithmic design and simulation software, which have helped firms develop more holistic concepts at the early stage and maintain collaboration among disciplines during the design process. The erudite design process enhances current design processes by encouraging the designer to embed construction and architecture knowledge within the algorithm. It also involves the ongoing improvement of applying 3D printing to construction, which is achieved through 'data-sketches'. The term 'data-sketch' was developed by the author in the recently completed dissertation; a data-sketch accommodates the decisions of the architect within the algorithm. This paper introduces the erudite design process and its components and summarizes its application in the development of the '3D printed construction unit'. The paper contributes to overlaying academia and practice with advanced technology by presenting a design process that transfers the dominance of the tool to the learned architect and encourages innovation in design processes.

Keywords: erudite, data-sketch, algorithm design in architecture, design process

Procedia PDF Downloads 261
14269 A Simple Computational Method for the Gravitational and Seismic Soil-Structure-Interaction between New and Existent Buildings Sites

Authors: Nicolae Daniel Stoica, Ion Mierlus Mazilu

Abstract:

This numerical research addresses the design of new buildings in the 3D vicinity of existing buildings. With today's continuous development and congestion of urban centers, the influence of new buildings on an already built-up neighbouring site is a major question. This study therefore focuses on how existing buildings may be affected by newly constructed ones and on how far this influence actually extends. Modeling the interaction between buildings is not simple anywhere in the world, and not in Romania either. Unfortunately, designers most often perform neither the simplified nor the more advanced calculations that could determine how close to reality these 3D influences are. In much of the literature, providing a 'shield' (piles or diaphragm walls) is considered sufficient to stop the influence between buildings, and the soil under the structure is therefore often ignored in the calculation models. The main reason the soil is neglected in the analysis is the complexity of modeling the soil-structure interaction. In this paper, based on a new, simple but efficient methodology, we determined for a number of case studies the influence, in terms of soil-structure interaction, of a new building on the behaviour of an existing one. The study covers the additional settlement that may occur during the execution of the new works and after their completion. It also presents the internal force diagrams and deflections in the soil for both the initial case and the final stage. This is necessary to see the expected extent of the impact of the new building on the existing area.

Keywords: soil, structure, interaction, piles, earthquakes

Procedia PDF Downloads 278
14268 Study of Climate Change Process on Hyrcanian Forests Using Dendroclimatology Indicators (Case Study of Guilan Province)

Authors: Farzad Shirzad, Bohlol Alijani, Mehry Akbary, Mohammad Saligheh

Abstract:

Climate change and global warming are very important issues today. The trend of climate change, especially changes in temperature and precipitation, is the most important issue in the environmental sciences; climate change means a change in the long-term averages. Iran lies in an arid and semi-arid region due to its proximity to the equator and its position in the subtropical high-pressure zone. In this respect, the Hyrcanian forest is a green necklace between the Caspian Sea and the southern slopes of the Alborz mountain range; at the forty-third session of UNESCO it was registered as the second natural heritage site of Iran. Beech is one of the most important tree species and the most industrially used species of the Hyrcanian forests. In this research, dendroclimatology was applied using tree-ring widths together with temperature and precipitation data from the Shanderman meteorological station located in the study area. The non-parametric Mann-Kendall test was used to investigate the trend of climate change over a 202-year time series of growth rings, and the Pearson method was used to correlate the ring widths of the beech trees with the climatic variables of the region. The results obtained from the time series of beech growth rings showed that the ring widths had a downward, negative trend that was significant at the 5% level, indicating that climate change has occurred. The mean minimum, mean, and maximum temperatures and the evaporation in the growing season had an increasing trend, while the annual precipitation had a decreasing trend. Using the Pearson method, the correlation of ring width with the mean and mean maximum temperatures in July, August, and September was negative, whereas the correlation with the mean maximum temperature in February and with precipitation in June was positive and significant at the 95% level.
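
For reference, the Mann-Kendall trend statistic used above can be computed in a few lines; the sketch below implements the basic version (without tie correction) on a made-up ring-width series with an imposed downward trend.

```python
import numpy as np
from math import sqrt, erf

def mann_kendall(x):
    """Basic Mann-Kendall trend test (no tie correction): returns S, Z and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):                      # S = sum of signs of all forward differences
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / sqrt(var_s)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal p-value
    return s, z, p

# Made-up 202-year ring-width series with a weak downward trend plus noise
rng = np.random.default_rng(1)
rings = 2.0 - 0.004 * np.arange(202) + rng.normal(0, 0.15, 202)
print(mann_kendall(rings))
```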

Keywords: climate change, dendroclimatology, hyrcanian forest, beech

Procedia PDF Downloads 88
14267 Nostalgic Tourism in Macau: The Bidirectional Causal Relationship between Destination Image and Experiential Value

Authors: Aliana Leong, T. C. Huan

Abstract:

Nostalgia-themed tourism products are becoming popular in many countries. This study investigates the role of nostalgia in destination image and experiential value, and their effect on subsequent behavioral intention. The survey used a stratified sampling method to include respondents from the nearby Asian regions, based on the inbound tourist data provided by the Statistics and Census Service (DSEC) of the government of Macau. The questionnaire consisted of five sections of 5-point Likert scale questions: (1) nostalgia, (2) destination image both before and after the experience, (3) expected value, (4) experiential value, and (5) future visit intention. Data were analysed with structural equation modelling. The results indicate that nostalgia plays an important part in forming destination image and experiential value before individuals have had a chance to experience the destination. Destination image and experiential value share a bidirectional causal relationship that eventually contributes to future visit intention. The study also found that while experiential value is more effective in generating destination image, the latter contributes more to future visit intention. The research design measures destination image and experiential value both before and after respondents have experienced the destination; this longitudinal design makes it possible to distinguish destination image from expected/experiential value and to observe how nostalgia translates into future visit intention.

Keywords: nostalgia, destination image, experiential value, future visit intention

Procedia PDF Downloads 380
14266 Optimization of Cutting Parameters on Delamination Using Taguchi Method during Drilling of GFRP Composites

Authors: Vimanyu Chadha, Ranganath M. Singari

Abstract:

Drilling of composite materials is a frequently practiced machining process during assembly in various industries such as automotive and aerospace. However, drilling of glass fiber reinforced plastic (GFRP) composites is significantly affected by the damage tendency of these materials under cutting forces such as thrust force and torque. The aim of this paper is to investigate the influence of cutting parameters such as cutting speed and feed rate, as well as the number of layers, on the delamination produced while drilling a GFRP composite. A plan of experiments based on Taguchi techniques was instituted, considering drilling with prefixed cutting parameters in a hand lay-up GFRP material. The damage induced by drilling the GFRP composites was measured, and Analysis of Variance (ANOVA) was performed to minimize the delamination as influenced by the drilling parameters and the number of layers. The optimum combination of drilling factors was obtained using the analysis of the signal-to-noise ratio. The results revealed that the feed rate was the most influential factor on delamination, and that the lowest delamination was obtained for composites with a greater number of layers drilled at lower cutting speeds and feed rates.
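
Delamination is a smaller-the-better response, for which the standard Taguchi signal-to-noise ratio is S/N = -10*log10(mean(y^2)); the sketch below computes the mean S/N per factor level on made-up delamination data to pick the best level (the run layout, factor names and values are illustrative only).

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) for a smaller-the-better response such as a delamination factor."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical L9-style results: (cutting-speed level, feed-rate level, delamination factor)
runs = [(1, 1, 1.06), (1, 2, 1.12), (1, 3, 1.19),
        (2, 1, 1.08), (2, 2, 1.15), (2, 3, 1.23),
        (3, 1, 1.10), (3, 2, 1.18), (3, 3, 1.27)]

for level in (1, 2, 3):   # mean S/N per feed-rate level; the highest S/N marks the best level
    y = [d for _, feed, d in runs if feed == level]
    print(f"feed level {level}: S/N = {sn_smaller_is_better(y):.2f} dB")
```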

Keywords: analysis of variance, delamination, design optimization, drilling, glass fiber reinforced plastic composites, Taguchi method

Procedia PDF Downloads 243
14265 A Machine Learning Framework Based on Biometric Measurements for Automatic Fetal Head Anomalies Diagnosis in Ultrasound Images

Authors: Hanene Sahli, Aymen Mouelhi, Marwa Hajji, Amine Ben Slama, Mounir Sayadi, Farhat Fnaiech, Radhwane Rachdi

Abstract:

Fetal abnormality remains a public health problem of concern to both mother and baby. Head defects are among the highest-risk fetal deformities, and fetal head categorization is a sensitive task that demands close attention from neurological experts. Biometric measurements can be extracted by gynecologists and compared with ground-truth charts to identify normal or abnormal growth. Fetal head biometric measurements such as the Biparietal Diameter (BPD), Occipito-Frontal Diameter (OFD) and Head Circumference (HC) need to be monitored, and an expert normally carries out their manual delineation. This work proposes a new approach to compute BPD, OFD and HC automatically from morphological characteristics extracted from the head shape. The studied data, selected at the same Gestational Age (GA) from fetal ultrasound (US) images, are classified into two categories, normal and abnormal, where the abnormal subjects include hydrocephalus, microcephaly and dolichocephaly anomalies. Using a support vector machine (SVM) classifier, the study achieved high classification accuracy for the automated detection of anomalies. The proposed method is promising and does not require expert intervention.
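
A minimal sketch of the classification stage, assuming the three biometric measurements have already been extracted per subject: it trains a scikit-learn SVM on made-up BPD/OFD/HC values. The data, class means and labels are purely illustrative, not the study's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features per fetus: [BPD, OFD, HC] in mm at the same gestational age
rng = np.random.default_rng(0)
normal = rng.normal([78, 95, 280], [3, 4, 8], size=(60, 3))     # label 0
abnormal = rng.normal([88, 99, 310], [5, 5, 12], size=(60, 3))  # label 1 (illustrative anomaly)
X = np.vstack([normal, abnormal])
y = np.array([0] * 60 + [1] * 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```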

Keywords: biometric measurements, fetal head malformations, machine learning methods, US images

Procedia PDF Downloads 277
14264 Numerical Investigation of Turbulent Flow Control by Suction and Injection on a Subsonic NACA23012 Airfoil by Proper Orthogonal Decomposition Analysis and Perturbed Reynolds Averaged Navier‐Stokes Equations

Authors: Azam Zare

Abstract:

Separation flow control for performance enhancement over airfoils at high incidence angles has become an increasingly important topic. This work details the characteristics of an efficient feedback control of the turbulent subsonic flow over a NACA23012 airfoil using a forced reduced-order model based on proper orthogonal decomposition (POD)/Galerkin projection and a perturbation method applied to the compressible Reynolds-Averaged Navier-Stokes equations. The forced reduced-order model is used in the optimal control of the turbulent separated flow over a NACA23012 airfoil at a Mach number of 0.2, a Reynolds number of 5×10⁶, and a high incidence angle of 24° using blowing/suction control jets. The Spalart-Allmaras turbulence model is implemented for the high-Reynolds-number calculations. The main shortcoming of the POD/Galerkin projection of the flow equations for control purposes is that the blowing/suction jet velocity does not appear explicitly in the resulting reduced-order model. Combining the perturbation method with the POD/Galerkin projection yields a forced reduced-order model that can predict the time-varying influence of the blowing/suction jet velocity. An optimal control theory based on the forced reduced-order system is used to design a control law for the nonlinear reduced-order model that attempts to minimize the vorticity content in the turbulent flow field over the NACA23012 airfoil. Numerical simulations were performed to help understand the behavior of a controlled suction jet at 12% to 18% chord from the leading edge and of a pair of blowing/suction jets at 15% to 18% and 24% to 30% chord from the leading edge, respectively. Analysis of the streamline profiles indicates that the blowing/suction jets are efficient in removing separation bubbles and increase the lift coefficient by up to 22%, while the perturbation method predicts the flow field accurately.
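
For context, the POD basis underlying such a reduced-order model is commonly obtained from the SVD of a snapshot matrix; the sketch below builds a small basis and Galerkin-projects the snapshots onto it. Random data stand in for the flow snapshots, so the sizes and numbers are illustrative only.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD modes from a snapshot matrix (n_dof x n_snapshots) via the thin SVD."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean                     # POD is usually done on fluctuations
    U, s, _ = np.linalg.svd(fluctuations, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return mean, U[:, :n_modes], energy[:n_modes]

# Random stand-in for flow snapshots: 5000 degrees of freedom, 80 time instants
snaps = np.random.rand(5000, 80)
mean, modes, energy = pod_basis(snaps, n_modes=10)

# Galerkin projection: temporal coefficients of each snapshot in the reduced basis
coeffs = modes.T @ (snaps - mean)
print(modes.shape, coeffs.shape, f"captured energy: {energy[-1]:.2%}")
```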

Keywords: flow control, POD, Galerkin projection, separation

Procedia PDF Downloads 140
14263 A Neural Network Approach to Understanding Turbulent Jet Formations

Authors: Nurul Bin Ibrahim

Abstract:

Advancements in neural networks have offered valuable insights into fluid dynamics, notably in addressing turbulence-related challenges. In this research, we introduce multiple applications of neural network models, namely feed-forward and recurrent neural networks, to explore the relationship between jet formations and stratified turbulence within stochastically excited Boussinesq systems. Using machine learning tools such as TensorFlow and PyTorch, the study has created models that effectively mimic and reveal the underlying features of the complex patterns of jet formation and stratified turbulence. These models do more than help us understand these patterns; they also offer a faster way to solve problems in stochastic systems, improving upon traditional numerical techniques for solving stochastic differential equations, such as the Euler-Maruyama method. In addition, the research includes a thorough comparison with the Statistical State Dynamics (SSD) approach, a well-established method for studying chaotic systems, which helps evaluate how well neural networks can capture the complex relationship between jet formations and stratified turbulence. The results of this study underscore the potential of neural networks in computational physics and fluid dynamics, opening up new possibilities for more efficient and accurate simulations in these fields.
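
For readers unfamiliar with the baseline the abstract mentions, the Euler-Maruyama scheme advances an SDE dX = f(X) dt + g(X) dW by one explicit step per time increment. The snippet below integrates a toy Ornstein-Uhlenbeck process as a stand-in for a stochastically excited mode; the drift and noise parameters are illustrative, not those of the Boussinesq system.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng=None):
    """Integrate dX = drift(X) dt + diffusion(X) dW with the explicit Euler-Maruyama scheme."""
    rng = np.random.default_rng(rng)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))              # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# Toy Ornstein-Uhlenbeck process as a stand-in for a stochastically excited mode
theta, sigma = 1.5, 0.3
path = euler_maruyama(lambda x: -theta * x, lambda x: sigma, x0=1.0, dt=1e-3, n_steps=5000, rng=7)
print(path[-1])
```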

Keywords: neural networks, machine learning, computational fluid dynamics, stochastic systems, simulation, stratified turbulence

Procedia PDF Downloads 54
14262 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. The ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders often lie adjacent to each other. One challenge is to split the photon segments along a beam and partition them accurately so that only the true representative water height of each element is extracted. As far as we can establish, there is no automated procedure for making this distinction; earlier studies have relied on human intervention or river masks, both of which are unsatisfactory where the number of intersections is large and the river width/extent changes over time. We describe here an automated approach called "auto-segmentation". The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia; the congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R² value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs, and it facilitates the application of ICESAT-2 ATL13 altimetry to rivers much better than previously reported studies. The findings will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as the water surface slope along the river, the water depth at cross sections, and the river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
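
The abstract does not specify the four outlier filters it compares; as one plausible example, the sketch below computes the mean WSE of a virtual station after an interquartile-range filter on the photon-segment heights. The height values are invented for illustration.

```python
import numpy as np

def mean_wse_iqr(heights, k=1.5):
    """Mean water surface elevation after removing IQR outliers from segment heights."""
    h = np.asarray(heights, dtype=float)
    q1, q3 = np.percentile(h, [25, 75])
    iqr = q3 - q1
    kept = h[(h >= q1 - k * iqr) & (h <= q3 + k * iqr)]
    return kept.mean(), kept.size

# Invented ATL13-like segment heights (m) at one virtual station, with two spurious values
heights = [12.31, 12.28, 12.35, 12.30, 12.33, 13.90, 12.29, 11.10, 12.32]
wse, n_used = mean_wse_iqr(heights)
print(f"mean WSE = {wse:.2f} m from {n_used} of {len(heights)} segments")
```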

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 48
14261 Finite Volume Method Simulations of GaN Growth Process in MOVPE Reactor

Authors: J. Skibinski, P. Caban, T. Wejrzanowski, K. J. Kurzydlowski

Abstract:

In the present study, numerical simulations of heat and mass transfer during the gallium nitride growth process in the AIX-200/4RF-S Metal Organic Vapor Phase Epitaxy (MOVPE) reactor are addressed. Existing knowledge about the phenomena occurring in the MOVPE process makes it possible to produce high-quality nitride-based semiconductors; however, the process parameters of MOVPE reactors can vary within certain ranges. The main goal of this study is optimization of the process and improvement of the quality of the obtained crystal. For this purpose, a series of computer simulations has been performed: numerical simulations of heat and mass transfer in the GaN epitaxial growth process determine the growth rate for various mass flow rates and pressures of the reagents. Since the exact distribution of heat and mass transfer inside the reactor cannot be determined experimentally during the process, modeling is the only way to understand the process precisely. The main heat transfer mechanisms during the MOVPE process are convection and radiation. Correlation of the modeling results with experiments makes it possible to determine the optimal process parameters for obtaining crystals of the highest quality.

Keywords: Finite Volume Method, semiconductors, epitaxial growth, metalorganic vapor phase epitaxy, gallium nitride

Procedia PDF Downloads 381
14260 Penalization of Transnational Crimes in the Domestic Legal Order: The Case of Poland

Authors: Magda Olesiuk-Okomska

Abstract:

The degree of international interdependence has grown significantly. Poland is a party to nearly 1000 binding multilateral treaties, including international legal instruments devoted to criminal matters that oblige the state to penalize certain crimes. The paper presents the results of theoretical research conducted as part of doctoral research. The main hypothesis assumed that there is a separate category of crimes that Poland is obliged to penalize under international legal instruments; that a catalogue of such crimes and a catalogue of the international legal instruments providing for Poland's international obligations had never been compiled in the domestic doctrine; and thus that there was no mechanism for monitoring the implementation of such obligations. In the course of the research, a definition of transnational crimes was discussed and confronted with the notions of international crimes, treaty crimes and cross-border crimes. A list of transnational crimes penalized in the Polish Penal Code and in non-code criminal law regulations was compiled, and the international legal instruments obliging Poland to criminalize and penalize specific conduct were enumerated and catalogued. This made it possible to determine whether Poland's international obligations have been implemented in domestic legislation, as well as to formulate de lege lata and de lege ferenda postulates. The research methods included, inter alia, a dogmatic and legal method, an analytical method and desk research.

Keywords: international criminal law, transnational crimes, transnational criminal law, treaty crimes

Procedia PDF Downloads 211
14259 Evaluation of Batch Splitting in the Context of Load Scattering

Authors: S. Wesebaum, S. Willeke

Abstract:

Production companies are faced with an increasingly turbulent business environment, which demands very high flexibility of production volumes and delivery dates. If decoupling by storage stages is not possible (e.g. at a contract manufacturer) or is undesirable from a logistical point of view, load scattering affects the production processes. 'Load' characterizes the timing and quantity of production orders (e.g. in work content hours) arriving at workstations, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply; for example, a uniform and high level of equipment utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance loss or disproportionately fluctuating WIP, which negatively affects the logistics objectives. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, outsourcing of operations, or shifting work to other workstations. This adjusts the load to the capacity supply and thus reduces load scattering. If the load cannot be fully adapted to the capacity, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacity normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly contains qualitative statements describing load scattering; quantitative evaluation methods that describe load mathematically are rare. In this article the authors discuss existing approaches for calculating load scattering and their disadvantages, such as the lack of an opportunity for normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current approaches. After presenting our quantification approach, the method of batch splitting is described and explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate the impact of batch splitting on load scattering and on the logistic curves. The conclusion of this article shows how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring work load scattering accurately and to apply an efficient method for adjusting the work load to the capacity supply. In this way, the achievement of the logistical objectives is improved without causing additional costs.

Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering

Procedia PDF Downloads 385
14258 Fourier Transform and Machine Learning Techniques for Fault Detection and Diagnosis of Induction Motors

Authors: Duc V. Nguyen

Abstract:

Induction motors are widely used in various industries and can experience different kinds of faults in stators and rotors. In general, fault detection and diagnosis of induction motors can be supervised by measuring quantities such as noise, vibration and temperature. The installation of mechanical sensors to assess the health condition of a machine is typically only done for expensive or load-critical machines, where the high cost of a continuous monitoring system can be justified. Nevertheless, induced current monitoring can be implemented inexpensively on machines of arbitrary size by using current transformers. In this regard, effective and low-cost fault detection techniques can be implemented, reducing the maintenance and downtime costs of motors. This work proposes a method for fault detection and diagnosis of induction motors that combines the classical fast Fourier transform with modern machine learning techniques. The proposed method is validated on real-world data and achieves a precision of 99.7% for fault detection and 100% for fault classification with minimal expert knowledge required. In addition, this approach allows users to optimize and balance risk against maintenance cost to achieve the highest benefit based on their requirements. These are the key requirements of a robust prognostics and health management system.
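
A minimal sketch of such a pipeline, under the assumption that FFT magnitudes of the stator current at a few frequency bins serve as features for an off-the-shelf classifier. The synthetic current signal, the chosen sideband frequencies, the labels and the random-forest classifier are all illustrative placeholders, not the paper's actual data or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 10_000           # sampling rate, Hz
F_LINE = 50           # supply frequency, Hz

def spectral_features(current, bins=(25, 50, 75, 100, 150)):
    """FFT magnitude of the current signal at a few diagnostic frequencies (Hz)."""
    spectrum = np.abs(np.fft.rfft(current)) / len(current)
    freqs = np.fft.rfftfreq(len(current), d=1.0 / FS)
    return [spectrum[np.argmin(np.abs(freqs - b))] for b in bins]

def synth_current(fault, n=FS, rng=None):
    """Synthetic 1-second stator current: healthy, or with illustrative sidebands when faulty."""
    rng = np.random.default_rng(rng)
    t = np.arange(n) / FS
    x = np.sin(2 * np.pi * F_LINE * t) + 0.02 * rng.standard_normal(n)
    if fault:
        x += 0.05 * np.sin(2 * np.pi * 25 * t) + 0.05 * np.sin(2 * np.pi * 75 * t)
    return x

X = np.array([spectral_features(synth_current(fault=i % 2, rng=i)) for i in range(60)])
y = np.array([i % 2 for i in range(60)])
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```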

Keywords: fault detection, FFT, induction motor, predictive maintenance

Procedia PDF Downloads 149
14257 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban development and the growing need for infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt is made to apply it to the deep excavation project of the Bangkok Metro as a case study. For this purpose, a numerical probability model has been developed based on the Finite Difference Method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of the probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by only 8%, keeping it within the allowable range, and helps improve the economics while maintaining mechanical efficiency. Given the lack of efficient design in most deep excavations, an attempt was also made, by considering geometrical and geotechnical variability, to develop an optimum practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design without performing a full probability analysis.
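
A minimal sketch of the Monte Carlo step, assuming each deterministic run is replaced by a cheap surrogate: soil and structural parameters are sampled from assumed distributions, a wall displacement is computed per sample, and the failure probability is the fraction of samples exceeding an allowable displacement. Every number, distribution and the surrogate formula below are placeholders, not the case-study FDM model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Assumed random soil/structure inputs (placeholder distributions)
friction_angle = rng.normal(30.0, 3.0, N)        # degrees
cohesion = rng.lognormal(np.log(20.0), 0.25, N)  # kPa
wall_stiffness = rng.normal(1.0, 0.10, N)        # normalized flexural stiffness

# Placeholder surrogate for the FDM run: predicted max horizontal wall displacement (mm)
displacement = 1800.0 / (wall_stiffness * (0.6 * friction_angle + 0.4 * cohesion))

ALLOWABLE = 90.0  # mm, assumed serviceability limit
p_failure = np.mean(displacement > ALLOWABLE)
print(f"estimated failure probability: {p_failure:.3%}")
```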

Keywords: numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method (FDM)

Procedia PDF Downloads 110
14256 Determination of Pesticides Residues in Tissue of Two Freshwater Fish Species by Modified QuEChERS Method

Authors: Iwona Cieślik, Władysław Migdał, Kinga Topolska, Ewa Cieślik

Abstract:

The consumption of fish is recommended as a means of preventing serious diseases, especially cardiovascular problems. Fish is a valuable source of protein (rich in essential amino acids), unsaturated fatty acids, fat-soluble vitamins, and macro- and microelements. However, it can also contain several contaminants (e.g. pesticides, heavy metals) that may pose considerable risks for humans; among these, pesticides are of special concern. Their widespread use has resulted in the contamination of environmental compartments, including water. The occurrence of pesticides in the environment is a serious problem due to their potential toxicity, and systematic monitoring is therefore needed. The aim of the study was to determine the organochlorine and organophosphate pesticide residues in the fish muscle tissue of the pike (Esox lucius, L.) and the rainbow trout (Oncorhynchus mykiss, Walbaum) by a modified QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, using gas chromatography quadrupole mass spectrometry (GC/Q-MS) operating in selected-ion monitoring (SIM) mode. The analysis of α-HCH, β-HCH, lindane, diazinon, disulfoton, δ-HCH, methyl parathion, heptachlor, malathion, aldrin, parathion, heptachlor epoxide, γ-chlordane, endosulfan, α-chlordane, o,p'-DDE, dieldrin, endrin, 4,4'-DDD, ethion, endrin aldehyde, endosulfan sulfate, 4,4'-DDT, and methoxychlor was performed on samples collected in the Carp Valley (Malopolska region, Poland). The age of the pike (n=6) was 3 years and its weight was 2-3 kg, while the age of the rainbow trout (n=6) was 0.5 year and its weight was 0.5-1.0 kg. Detectable pesticide residues (HCH isomers, endosulfan isomers, DDT and its metabolites, as well as methoxychlor) were present in the fish samples, but all of these compounds were below the limit of quantification (LOQ); the other examined pesticide residues were below the limit of detection (LOD). The levels of contamination were therefore, in all cases, below the default Maximum Residue Levels (MRLs) established by Regulation (EC) No 396/2005 of the European Parliament and of the Council. Monitoring of the pesticide residue content in fish is required to minimize potential adverse effects on the environment and human exposure to these contaminants.

Keywords: contaminants, fish, pesticides residues, QuEChERS method

Procedia PDF Downloads 197
14255 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the phrase "hydrogen safety" was used mostly in the context of NPP safety. With the rise of interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial production of hydrogen is planned by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide and hydrogen itself. Considering probable incidents, a sudden outburst of combustible gas into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture in the presence of numerous pipelines, reactor vessels and fitting frames. Even ignition of 2100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures that are much lower than the velocity and pressure of the Chapman-Jouguet condition and do not exceed 80 m/s and 6 kPa, respectively. However, blockage of the space, significant changes of channel diameter along the flame propagation path, and the presence of gas suspensions lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only be conventional and qualitative. Yet conducting deflagration and detonation experiments for each specific industrial facility project in order to determine safe placement of infrastructure units does not seem feasible due to the high cost and hazard, whereas numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising. The basis of this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows its use in Eulerian coordinates. The current work contains the results of this development. In addition, a comparison of numerical simulation results with experimental series on flame propagation in shock tubes with orifice plates is presented.

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 73
14254 Seismic Response of Structure Using a Three Degree of Freedom Shake Table

Authors: Ketan N. Bajad, Manisha V. Waghmare

Abstract:

Earthquakes are the biggest threat to civil engineering structures, costing billions of dollars and thousands of lives around the world every year. Various experimental techniques, such as pseudo-dynamic tests (a nonlinear structural dynamic technique), real-time pseudo-dynamic tests and shaking table tests, can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components mounted on it; it simulates a seismic event using existing seismic data, reproducing earthquake inputs almost exactly. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. Shake tables may be of the vertical, horizontal, servo-hydraulic or servo-electric type. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (X, Y, Z) shake table. A 3-DOF shaking table is a useful experimental apparatus, as it reproduces a desired acceleration signal in real time for evaluating and assessing the seismic performance of a structure. The study proceeds with the design and erection of a 3-DOF shake table by a trial-and-error method; the table is designed for a capacity of up to 981 N. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer mounted on the model records the data. The experimental results are validated against results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but it cannot be used for direct numerical conclusions (such as stiffness or deflection), since many uncertainties are involved in scaling a small-scale model. The model shows the modal forms and gives rough deflection values. The experimental results demonstrate that the shake table is the most effective of all available methods for the seismic assessment of a structure.

Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed

Procedia PDF Downloads 126
14253 Life Cycle Assessment of a Parabolic Solar Cooker

Authors: Bastien Sanglard, Lou Magnat, Ligia Barna, Julian Carrey, Sébastien Lachaize

Abstract:

Cooking is a primary human need, and several techniques are used around the globe based on different sources of energy: electricity, solid fuels (wood, coal...), liquid fuel or liquefied petroleum gas. However, all of them lead to direct or indirect greenhouse gas emissions and sometimes to health damage in the household. Concentrated solar power therefore represents a good option to lower these damages thanks to a cleaner use phase. Nevertheless, the construction phase of a solar cooker still requires primary energy and materials, which leads to environmental impacts. The aims of this work are to analyse the ecological impacts of a commercial aluminium parabola and to compare it with other means of cooking, taking the boiling of 2 litres of water three times a day for 40 years as the functional unit. The life cycle assessment was performed using the Umberto software and the EcoInvent database. Calculations were carried out over more than 13 criteria using two methods: the Intergovernmental Panel on Climate Change (IPCC) method and the ReCiPe method. For the reflector itself, different provenances of aluminium were compared, as well as the use of recycled aluminium; for the structure, aluminium was compared with iron (primary and recycled) and wood. The results show that the climate impact of the studied parabola is 0.0353 kgCO2eq/kWh when built with Chinese aluminium and can be reduced by a factor of 4 using aluminium from Canada. The assessment also showed that using 32% recycled aluminium would reduce the impact by factors of 1.33 and 1.43 compared with primary Canadian and primary Chinese aluminium, respectively, while the exclusive use of recycled aluminium lowers the impact by a factor of 17. Besides, the use of iron (recycled or primary) or wood for the structure supporting the reflector significantly lowers the impact. The impact categories of the ReCiPe method show that the parabola made from Chinese aluminium has the heaviest impact, except for metal resource depletion, compared with aluminium from Canada, recycled aluminium or iron. The impact of solar cooking was then compared with a gas stove and an induction plate; the gas stove model was a cast iron tripod supporting the cooking pot, and the induction plate was a single-spot plate. The results show that the parabolic solar cooker has the lowest ecological impact over the 13 criteria of the ReCiPe method and over the global warming potential compared with the two other technologies. The climate impact of gas cooking is 0.628 kgCO2eq/kWh when using natural gas and 0.723 kgCO2eq/kWh when using bottled gas, the main part of the emissions coming from gas burning in both cases. Induction cooking has a global warming potential of 0.12 kgCO2eq/kWh with the electricity mix of France, 96.3% of the impact being due to electricity production; the electricity mix is therefore a key factor, since with the electricity mixes of Germany and Poland the impacts are 0.81 kgCO2eq/kWh and 1.39 kgCO2eq/kWh, respectively. Therefore, the parabolic solar cooker has a real ecological advantage over both the gas stove and the induction plate.

Keywords: life cycle assessment, solar concentration, cooking, sustainability

Procedia PDF Downloads 161
14252 Space Vector Pulse Width Modulation Based Design and Simulation of a Three-Phase Voltage Source Converter Systems

Authors: Farhan Beg

Abstract:

A space vector based pulse width modulation (SVPWM) control technique for the three-phase PWM converter is proposed in this paper. The proposed control scheme is based on a synchronous reference frame model. High performance and efficiency are obtained with regard to the DC bus voltage and the power factor of the PWM rectifier, leading to low losses. MATLAB/SIMULINK is used as the simulation platform, and a SIMULINK model is presented in the paper. The results show that the proposed model demonstrates better performance and properties than the traditional SPWM method and drastically improves the dynamic performance of the closed loop. In the carrier-based SPWM scheme used for comparison, a sine signal is the reference waveform and a triangle waveform is the carrier: when the value of the sine signal is larger than that of the triangle signal, the output pulse goes high, and when the triangle signal is higher than the sine signal, the pulse goes low. The SPWM output changes with the modulation index and the frequency used in the system to produce a wider pulse width; when a wider pulse width is produced, the output voltage has a lower harmonic content and the resolution increases.
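
The sine-triangle comparison described above for the SPWM baseline (not the proposed SVPWM scheme itself) can be reproduced in a few lines; the sketch below generates one fundamental period of naturally sampled single-phase SPWM. The modulation index, carrier frequency and fundamental frequency are illustrative choices only.

```python
import numpy as np

F_FUND = 50.0        # fundamental (reference) frequency, Hz
F_CARRIER = 2000.0   # triangle carrier frequency, Hz
M_INDEX = 0.8        # modulation index (reference amplitude / carrier amplitude)
FS = 200_000         # simulation sampling rate, Hz

t = np.arange(0, 1.0 / F_FUND, 1.0 / FS)
reference = M_INDEX * np.sin(2 * np.pi * F_FUND * t)

# Symmetric triangle carrier between -1 and +1
carrier = 2.0 * np.abs(2.0 * (t * F_CARRIER - np.floor(t * F_CARRIER + 0.5))) - 1.0

# Pulse is high whenever the sine reference exceeds the triangle carrier
gate = (reference > carrier).astype(int)
print(f"duty over one period: {gate.mean():.3f}")   # ~0.5 for a sinusoidal reference
```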

Keywords: power factor, SVPWM, PWM rectifier, SPWM

Procedia PDF Downloads 321