Search results for: numerical computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4476

276 Ship Roll Reduction Using Water-Flow Induced Coriolis Effect

Authors: Mario P. Walker, Masaaki Okuma

Abstract:

Ships are subjected to motions which can disrupt on-board operations and damage equipment. Roll motion, in particular, is of great interest due to low damping conditions which may lead to capsizing. Therefore, finding ways to reduce this motion is important in ship design. Several techniques have been investigated to reduce rolling, including the commonly used anti-roll tanks, fin stabilizers and bilge keels. However, these systems are not without their challenges. For example, water-flow in anti-roll tanks creates complications, while fin stabilizers and bilge keels require an extremely large size to produce any significant damping, creating operational challenges. Additionally, among the measures presented above, only anti-roll tanks are effective at zero forward motion of the vessel. This paper proposes and investigates a method to reduce rolling by inducing the Coriolis effect using water-flow in the radial direction. Motion in the radial direction of a rolling structure induces a Coriolis force which, depending on the direction of flow, will either amplify or attenuate the roll of the structure. The system is modelled with two degrees of freedom: rotational motion for parametric rolling and radial motion of the water-flow. Equations of motion are derived and investigated, and numerical examples are analyzed in detail. To demonstrate applicability, parameters from a Ro-Ro vessel are used, as extensive research has been conducted on these vessels over the years. The vessel is investigated under free and forced roll conditions. Several models are created using various masses, heights, and velocities of water-flow at a given time. The proposed system was found to produce substantial roll reduction, which increases with an increase in any of the varied parameters, with velocity having the most significant effect. The proposed system provides a simple approach to reduce ship rolling. Water-flow control is very simple, as the water flows in only one direction with constant velocity; only the time at which the system is turned on or off needs to be controlled. Furthermore, the proposed system is effective in both forward and zero forward motion of the ship, and introduces no hydrodynamic drag. This work is a starting point for designing an effective and practical system; for this to be a viable approach, further investigations are needed to address the challenges that present themselves.
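
As a rough illustration of this mechanism, the following sketch integrates a single rolling equation with an added Coriolis torque produced by a water mass moving radially outward at constant velocity. It is not the authors' two-degree-of-freedom model; all parameter values (inertia, damping, restoring stiffness, water mass, flow speed) are hypothetical placeholders:

import numpy as np
from scipy.integrate import solve_ivp

# hypothetical parameters: roll inertia (kg m^2), damping (N m s), restoring (N m)
I, c, k = 5.0e8, 2.0e7, 6.0e8
m, r0, v = 1.0e6, 1.0, 0.5   # water mass (kg), initial radius (m), radial flow speed (m/s)

def rhs(t, y):
    theta, omega = y
    r = r0 + v * t                       # water moves radially outward at constant speed
    coriolis = 2.0 * m * r * v * omega   # Coriolis torque opposing the roll rate
    return [omega, -(c * omega + k * theta + coriolis) / (I + m * r**2)]

sol = solve_ivp(rhs, (0.0, 30.0), [np.deg2rad(10.0), 0.0], max_step=0.05)
print(f"roll angle after 30 s: {np.rad2deg(sol.y[0, -1]):.2f} deg")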

Keywords: Coriolis effect, damping, rolling, water-flow

Procedia PDF Downloads 449
275 The Investigation of Work Stress and Burnout in Nurse Anesthetists: A Cross-Sectional Study

Authors: Yen Ling Liu, Shu-Fen Wu, Chen-Fuh Lam, I-Ling Tsai, Chia-Yu Chen

Abstract:

Purpose: Nurse anesthetists confront extraordinarily high job stress in their daily practice, deriving from fast-track anesthesia care, the risk of perioperative complications, routine rotating shifts, teaching programs and interactions with the surgical team in the operating room. This study investigated the influence of work stress on the burnout and turnover intention of nurse anesthetists in a regional general hospital in Southern Taiwan. Methods: This was a descriptive correlational study carried out on 66 full-time nurse anesthetists. Data were collected from March 2017 to June 2017 by in-person interview, and a self-administered structured questionnaire was completed by the interviewee. Outcome measurements included the Practice Environment Scale of the Nursing Work Index (PES-NWI), the Maslach Burnout Inventory (MBI) and nursing staff turnover intention. Numerical data were analyzed by descriptive statistics, independent t-test, or one-way ANOVA. Categorical data were compared using the chi-square test (χ²). Datasets were computed with Pearson product-moment correlation and linear regression. Data were analyzed using SPSS 20.0 software. Results: The average score for job burnout was 68.79 ± 16.67 (out of 100). The three major components of burnout were emotional exhaustion (mean score of 26.32), depersonalization (mean score of 13.65), and reduced personal accomplishment (mean score of 24.48). These average scores suggested that these nurse anesthetists were at high risk of burnout, which was inversely correlated with turnover intention (t = -4.048, P < 0.05). In a linear regression model, emotional exhaustion and depersonalization were the two independent factors that predicted turnover intention in the nurse anesthetists (19.1% of total variance). Conclusion/Implications for Practice: The study identifies that the high risk of job burnout in nurse anesthetists is not simply derived from physical overload, but most likely results from additional emotional and psychological stress. Job burnout may affect the quality of nursing work and also influence family harmony, which in turn may increase the turnover rate. A multimodal approach is warranted to reduce work stress and job burnout in nurse anesthetists to enhance their willingness to contribute to anesthesia care.
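
As an illustration of the regression step described in the methods, the following sketch fits a linear model predicting turnover intention from the two burnout subscales. The data below are simulated placeholders, not the study's data:

import numpy as np

rng = np.random.default_rng(0)
n = 66  # sample size matching the study; the scores below are synthetic
emotional = rng.normal(26.3, 6.0, n)      # emotional exhaustion subscale
depersonal = rng.normal(13.7, 4.0, n)     # depersonalization subscale
turnover = 0.4 * emotional + 0.3 * depersonal + rng.normal(0.0, 3.0, n)

X = np.column_stack([np.ones(n), emotional, depersonal])
beta, *_ = np.linalg.lstsq(X, turnover, rcond=None)    # ordinary least squares
pred = X @ beta
r2 = 1 - np.sum((turnover - pred)**2) / np.sum((turnover - turnover.mean())**2)
print(f"coefficients: {beta.round(2)}, R^2 = {r2:.3f}")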

Keywords: anesthesia nurses, burnout, job, turnover intention

Procedia PDF Downloads 293
274 Intensity of Dyspnea and Anxiety in Seniors in the Terminal Phase of the Disease

Authors: Mariola Głowacka

Abstract:

Aim: The aim of this study was to present the assessment of dyspnea and anxiety in seniors staying in a hospice, in the context of the nurse's tasks. Materials and methods: The research was carried out at the "Hospicjum Płockie" Association of St. Urszula Ledóchowska in Płock, in a stationary ward for adults. The research group consisted of 100 people, women and men. The study used the diagnostic survey method, the estimation method and analysis of patient records; the research tools were the Numerical Rating Scale (NRS), the modified Borg scale to assess dyspnea, the Trait Anxiety scale to test the intensity of anxiety, and a sociodemographic assessment of the respondent. Results: Among the patients, the largest groups were people without dyspnea (38 people) and with average levels of dyspnea (26 people). People with lung cancer had a higher level of breathlessness than people with other cancers. Half of the patients included in the study felt anxiety at a low level. On average, men had a higher level of anxiety than women. Conclusions: 1) Patients staying in the hospice require comprehensive nursing care due to the underlying disease, comorbidities, and the wide range of medications taken, which aggravate the feeling of dyspnea and anxiety. 2) The study showed that in patients staying in the hospice, the level of dyspnea was of varying severity: the greatest number of people were without dyspnea (38), followed by patients with a low level of dyspnea (34), 12 people experiencing an average level of dyspnea, and 15 a high level. 3) The main factor influencing the severity of dyspnea in patients was the location of the cancer. There was no significant relationship between the intensity of dyspnea and the age or gender of the patient, or the time from diagnosis. 4) The study showed that in patients staying in the hospice, the level of anxiety was of varying severity. Most people experienced a low level of anxiety (51); there were 16 people with a high level of anxiety, while 33 people experienced anxiety at an average level. 5) The patient's gender was the main factor influencing the increase in anxiety intensity; men had higher levels of anxiety than women. There was no significant correlation between the intensity of anxiety and the age of the respondents, nor the type of cancer or time since diagnosis. 6) The intensity of dyspnea depended on the type of cancer: people with lung cancer had a higher level of breathlessness than those with breast cancer and bowel cancer. Anxiety was not found to increase depending on the type of cancer or comorbidities of the examined person.

Keywords: cancer, shortness of breath, anxiety, senior, hospice

Procedia PDF Downloads 93
273 Modeling the Downstream Impacts of River Regulation on the Grand Lake Meadows Complex using Delft3D FM Suite

Authors: Jaime Leavitt, Katy Haralampides

Abstract:

Numerical modelling has been used to investigate the long-term impact of a large dam on downstream wetland areas, specifically in terms of changing sediment dynamics in the system. The Mactaquac Generating Station (MQGS) is a 672 MW run-of-the-river hydroelectric facility, commissioned in 1968 on the mainstem of the Wolastoq|Saint John River in New Brunswick, Canada. New Brunswick Power owns and operates the dam and has been working closely with the Canadian Rivers Institute at UNB Fredericton on a multi-year, multi-disciplinary project investigating the impact the dam has on its surrounding environment. With a focus on the downstream river, this research discusses the initialization, set-up, calibration, and preliminary results of a 2-D hydrodynamic model using the Delft3D Flexible Mesh Suite (the successor of the Delft3D 4 Suite). The flexible mesh allows the model grid to be structured in the main channel and unstructured in the floodplains and other downstream regions with complex geometry. The combination of grid types improves both computational time and output quality. As the movement of water governs the movement of sediment, the calibrated and validated hydrodynamic model was applied to sediment transport simulations, particularly of the fine suspended sediments. Several provincially significant Protected Natural Areas and federally significant National Wildlife Areas are located 60 km downstream of the MQGS. These broad, low-lying floodplains and wetlands are known as the Grand Lake Meadows Complex (GLM Complex). There is added pressure to investigate the impacts of river regulation on these protected regions, which rely heavily on natural river processes like sediment transport and flooding. It is hypothesized that the fine suspended sediment would naturally travel to the floodplains for nutrient deposition and replenishment, particularly during the freshet and large storms. The purpose of this research is to investigate the impacts of river regulation on downstream environments and to use the model as a tool for informed decision making to protect and maintain biologically productive wetlands and floodplains.
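
The suspended sediment question at the core of this work can be illustrated with a one-dimensional advection-diffusion sketch. This stand-in is far simpler than the Delft3D FM hydrodynamics, and all parameter values are hypothetical:

import numpy as np

L, nx, dt, nt = 10_000.0, 200, 5.0, 2000   # 10 km reach; hypothetical discretization
dx = L / (nx - 1)
u, D = 0.5, 5.0                            # flow velocity (m/s), dispersion (m^2/s)
c = np.zeros(nx)
c[0] = 1.0                                 # constant upstream sediment source

for _ in range(nt):
    adv = -u * (c[1:-1] - c[:-2]) / dx             # first-order upwind advection
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (adv + dif)
    c[-1] = c[-2]                                  # zero-gradient outflow boundary

print(f"mid-reach concentration after {nt * dt / 3600:.1f} h: {c[nx // 2]:.3f}")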

Keywords: hydrodynamic modelling, national wildlife area, protected natural area, sediment transport

Procedia PDF Downloads 4
272 Improved Soil and Snow Treatment with the Rapid Update Cycle Land-Surface Model for Regional and Global Weather Predictions

Authors: Tatiana G. Smirnova, Stan G. Benjamin

Abstract:

The Rapid Update Cycle (RUC) land surface model (LSM) has been a land-surface component in several generations of operational weather prediction models at the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA). It was designed for short-range weather prediction with an emphasis on severe weather, and was originally kept intentionally simple to avoid uncertainties from poorly known parameters. Nevertheless, the RUC LSM, when coupled with an hourly-assimilating atmospheric model, can produce a realistic evolution of time-varying soil moisture and temperature, as well as the evolution of snow cover on the ground surface. This result is possible only if the soil/vegetation/snow component of the coupled weather prediction model has sufficient skill to avoid long-term drift. The RUC LSM was first implemented in the operational NCEP Rapid Update Cycle (RUC) weather model in 1998, and later in the Weather Research and Forecasting (WRF)-based Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR). Being available to the international WRF community, it has also been implemented in operational weather models in Austria, New Zealand, and Switzerland. Based on feedback from the US weather service offices and the international WRF community, as well as on our own validation, the RUC LSM has matured over the years. A sea-ice module was added to the RUC LSM for surface predictions over Arctic sea ice, and other modifications include refinements to the snow model and a more accurate specification of albedo, roughness length, and other surface properties. At present, the RUC LSM is being tested in the regional application of the Unified Forecast System (UFS); the next-generation UFS-based regional Rapid Refresh FV3 Standalone (RRFS) model will replace the operational RAP and HRRR at NCEP. Over time, the RUC LSM has participated in several international model intercomparison projects to verify its skill using observed atmospheric forcing. ESM-SnowMIP was the most recent of these experiments, focused on the verification of snow models for open and forested regions. The simulations were performed for ten sites located in different climatic zones of the world, forced with observed atmospheric conditions. While most of the 26 participating models have more sophisticated snow parameterizations than RUC, the RUC LSM achieved a high ranking in simulations of both snow water equivalent and surface temperature. However, the ESM-SnowMIP experiment also revealed some issues in the RUC snow model, which are addressed in this paper. One of them is the treatment of grid cells partially covered with snow. The RUC snow module computes the energy and moisture budgets of snow-covered and snow-free areas separately, aggregating the solutions at the end of each time step. Such treatment elevates the importance of the model's computation of the snow cover fraction. Improvements to the original simplistic threshold-based approach have been implemented and tested both offline and in the coupled weather model. A detailed description of the changes to the snow cover fraction and other modifications to the RUC soil and snow parameterizations is given in this paper.
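
The grid-cell snow cover fraction discussed above can be illustrated with two toy parameterizations, a linear threshold ramp and a smoother asymptotic form. These are stand-ins for illustration only, not the actual RUC formulations:

import numpy as np

def scf_threshold(swe, swe_crit=0.02):
    """Simplistic threshold-style fraction: linear ramp up to full cover."""
    return np.minimum(1.0, swe / swe_crit)

def scf_smooth(swe, swe_crit=0.02):
    """Smoother asymptotic form, qualitatively like refined schemes."""
    return np.tanh(swe / swe_crit)

swe = np.array([0.0, 0.005, 0.01, 0.02, 0.05])   # snow water equivalent (m)
print("threshold:", scf_threshold(swe).round(2))
print("smooth:   ", scf_smooth(swe).round(2))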

Keywords: land-surface models, weather prediction, hydrology, boundary-layer processes

Procedia PDF Downloads 86
271 Performance of a Sailing Vessel with a Solid Wing Sail Compared to a Traditional Sail

Authors: William Waddington, M. Jahir Rizvi

Abstract:

A sail used to propel a vessel functions in a similar way to an aircraft wing. Traditionally, cloth and ropes were used to produce sails. However, there is one major problem with traditional sail design: the increase in turbulence and flow separation when compared to an aircraft wing with the same camber. This has led to the development of the solid wing sail, focusing mainly on the sail shape. Traditional cloth sails are manufactured as a single element, whereas a solid wing sail is made of two segments. To the authors' best knowledge, the phenomena behind the performance of this type of sail at various angles of wind direction with respect to the sailing vessel's direction (known as the angle of attack) are still an area of mystery. Hence, in this study, the thrusts of a sailing vessel produced by wing sails constructed with various angles (22°, 24°, 26° and 28°) between the two segments have been compared to that of a traditional cloth sail made of carbon-fiber material. The reason for using carbon-fiber material is to achieve the correct and exact shape of a commercially available mainsail. NACA 0024 and NACA 0016 foils have been used to generate the two-segment wing sail shape, which incorporates a flap between the first and second segments. Both the two-dimensional and three-dimensional sail models, designed in the commercial CAD software Solidworks, have been analyzed through Computational Fluid Dynamics (CFD) techniques using Ansys CFX, considering an apparent wind speed of 20.55 knots with an apparent wind angle of 31°. The results indicate that the thrust from the traditional sail increases from 8.18 N to 8.26 N when the angle of attack is increased from 5° to 7°; however, the thrust value decreases if the angle of attack is further increased. A solid wing sail with a 20° angle between its two segments produces thrusts from 7.61 N to 7.74 N with an increase in the angle of attack from 7° to 8°. The thrust remains steady up to a 9° angle of attack and drops dramatically beyond 9°. The highest thrust values that can be obtained for the solid wing sails with 22°, 24°, 26° and 28° angles between the two segments are 8.75 N, 9.10 N, 9.29 N and 9.19 N, respectively. The optimum angle of attack for each of the solid wing sails is identified as 7°, at which these thrust values are obtained. Therefore, it can be concluded that all the thrust values predicted for the solid wing sails with angles between the two segments above 20° are higher than the thrust predicted for the traditional sail. However, the best performance from a solid wing sail is expected when the sail is created with an angle between the two segments above 20° but below or equal to 26°. In addition, 1/29th-scale models have been tested in a wind tunnel to observe the flow behaviors around the sails. The experimental results support the numerical observations, as the observed flow behaviors closely match.
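
The thrust figures quoted above follow from resolving sail lift and drag along the vessel's course. The sketch below evaluates that decomposition at the stated apparent wind of 20.55 knots and 31°; the sail area and the aerodynamic coefficients are hypothetical placeholders, not values from the paper:

import numpy as np

rho = 1.225                  # air density (kg/m^3)
A = 0.05                     # sail area of a small-scale model (m^2); hypothetical
V = 20.55 * 0.5144           # apparent wind speed: 20.55 knots in m/s
beta = np.deg2rad(31.0)      # apparent wind angle from the abstract

def drive_force(cl, cd):
    q = 0.5 * rho * V**2 * A                             # dynamic pressure x area
    return q * (cl * np.sin(beta) - cd * np.cos(beta))   # thrust along the course

# hypothetical lift/drag coefficients near the 7 deg optimum angle of attack
print(f"cloth sail thrust: {drive_force(0.90, 0.12):.2f} N")
print(f"wing sail thrust:  {drive_force(1.10, 0.08):.2f} N")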

Keywords: CFD, drag, sailing vessel, thrust, traditional sail, wing sail

Procedia PDF Downloads 277
270 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, with a relatively small number of simulations compared to fully probabilistic methods, bounds on the extremes of the system response are obtained. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables, and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined by considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the random set finite element method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
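
The combination step of the random set method can be illustrated as follows: focal elements (an interval plus a probability mass) of two inputs are combined, the response is evaluated at the interval corners (valid here because the toy response is monotonic), and belief/plausibility measures are read off. The inputs, the closed-form response and the threshold are all hypothetical stand-ins for the finite element model:

import itertools

# two hypothetical inputs, each with two focal elements (interval, mass)
friction = [((28.0, 33.0), 0.6), ((30.0, 36.0), 0.4)]   # friction angle (deg)
modulus  = [((40.0, 60.0), 0.5), ((50.0, 80.0), 0.5)]   # stiffness (MPa)

def response(phi, E):
    """Hypothetical stand-in for the FE model: displacement shrinks with phi, E."""
    return 2000.0 / (phi * E)

focal_out = []
for (i1, m1), (i2, m2) in itertools.product(friction, modulus):
    corners = [response(p, e) for p, e in itertools.product(i1, i2)]
    focal_out.append((min(corners), max(corners), m1 * m2))  # output focal element

x = 1.2  # threshold displacement (mm); hypothetical
belief = sum(m for lo, hi, m in focal_out if hi <= x)   # surely below threshold
plaus  = sum(m for lo, hi, m in focal_out if lo <= x)   # possibly below threshold
print(f"Bel(disp <= {x}) = {belief:.2f}, Pl(disp <= {x}) = {plaus:.2f}")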

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 267
269 Valuing Social Sustainability in Agriculture: An Approach Based on Social Outputs’ Shadow Prices

Authors: Amer Ait Sidhoum

Abstract:

Interest in sustainability has gained ground among practitioners, academics and policy-makers due to growing stakeholder awareness of environmental and social concerns. This is particularly true for agriculture. However, relatively little research has been conducted on the quantification of social sustainability and the contribution of social issues to agricultural production efficiency. The main objective of this research is to propose a method for evaluating the prices of social outputs, more precisely shadow prices, by allowing for the stochastic nature of agricultural production, that is to say, for production uncertainty. In this article, the assessment of social outputs' shadow prices is conducted within the methodological framework of nonparametric Data Envelopment Analysis (DEA). An output-oriented directional distance function (DDF) is implemented to represent the technology of a sample of Catalan arable crop farms and derive the efficiency scores. The overall production technology of our sample is assumed to be the intersection of two different sub-technologies. The first sub-technology models the production of random desirable agricultural outputs, while the second sub-technology reflects the social outcomes from agricultural activities. Once a nonparametric production technology has been represented, the DDF primal approach can be used for efficiency measurement, while shadow prices are drawn from the dual representation of the DDF. Computing shadow prices is a method to assign an economic value to non-marketed social outcomes. Our research uses cross-sectional, farm-level data collected in 2015 from a sample of 180 Catalan arable crop farms specialized in the production of cereals, oilseeds and protein (COP) crops. Our results suggest that our sample farms show high performance scores, from 85% for the bad state of nature to 88% for the normal and ideal crop-growing conditions. This suggests that farm performance increases as crop growth conditions improve. Results also show that the average shadow prices of the desirable state-contingent output and social outcomes for efficient and inefficient farms are positive, suggesting that the production of desirable marketable outputs and of non-marketable outputs makes a positive contribution to farm production efficiency. Results also indicate that social outputs' shadow prices are contingent upon the growing conditions, following an upward trend as crop-growing conditions improve. This finding suggests that efficient farms prefer to allocate more resources to the production of desirable outputs than to social outcomes. To our knowledge, this study represents the first attempt to compute shadow prices of social outcomes while accounting for the stochastic nature of the production technology. Our findings suggest that the decision-making process of efficient farms in dealing with social issues is stochastic and strongly dependent on the growth conditions. This implies that policy-makers should adjust their instruments according to the stochastic environmental conditions. An optimal redistribution of rural development support, increasing the public payment as crop growth conditions improve, would likely enhance the effectiveness of public policies.
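
An output-oriented directional distance function of the kind described can be computed by linear programming. The sketch below solves a standard DDF problem for a tiny hypothetical sample with one input and one output under variable returns to scale; it omits the state-contingent and social-output structure of the paper:

import numpy as np
from scipy.optimize import linprog

# hypothetical sample: one input x and one desirable output y per farm
x = np.array([10.0, 12.0, 8.0, 15.0, 9.0])
y = np.array([20.0, 22.0, 15.0, 30.0, 16.0])
g = 1.0  # direction vector: expand the output

def ddf(k):
    n = len(x)
    cvec = np.zeros(n + 1); cvec[0] = -1.0        # maximize beta (linprog minimizes)
    # output constraint:  y_k + beta*g - sum(lam*y) <= 0
    A1 = np.concatenate(([g], -y))
    # input constraint:   sum(lam*x) <= x_k
    A2 = np.concatenate(([0.0], x))
    A_ub = np.vstack([A1, A2]); b_ub = np.array([-y[k], x[k]])
    # convexity: sum(lam) == 1 (variable returns to scale)
    A_eq = np.concatenate(([0.0], np.ones(n)))[None, :]
    res = linprog(cvec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    # duals of the inequality constraints (res.ineqlin.marginals with the HiGHS
    # solver) are the basis from which shadow prices are formed
    return res.x[0]

print("inefficiency scores:", [round(ddf(k), 3) for k in range(len(x))])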

Keywords: data envelopment analysis, shadow prices, social sustainability, sustainable farming

Procedia PDF Downloads 125
268 Understanding the Effect of Material and Deformation Conditions on the “Wear Mode Diagram”: A Numerical Study

Authors: A. Mostaani, M. P. Pereira, B. F. Rolfe

Abstract:

The increasing application of Advanced High Strength Steel (AHSS) in the automotive industry to fulfill crash requirements has introduced higher levels of wear in stamping dies and parts. Therefore, understanding wear behaviour in sheet metal forming is of great importance, as it can help to reduce the high costs currently associated with tool wear. At the contact between the die and the sheet, the tips of hard tool asperities interact with the softer sheet material. Understanding the deformation that occurs during this interaction is important for our overall understanding of the wear mechanisms. For these reasons, the scratching of a perfectly plastic material by a rigid indenter has been widely examined in the literature, with finite element modelling (FEM) used in recent years to further understand the behaviour. The 'wear mode diagram' has been commonly used to classify the deformation regime of the soft work-piece during scratching into three modes: ploughing, wedge formation, and cutting. This diagram, which is based on 2D slip line theory and the upper bound method for a perfectly plastic work-piece and a rigid indenter, relates the different wear modes to the attack angle and the interfacial strength. It has been the basis for many wear studies and wear models to date. Additionally, it has been concluded that galling is most likely to occur during the wedge formation mode. However, there has been little analysis in the literature of how the material behaviour and deformation conditions associated with metal forming processes influence the wear behaviour. The first aim of this work is therefore to use a commercial FEM package (Abaqus/Explicit) to build a 3D model to capture wear modes during scratching with indenters of different attack angles and different interfacial strengths. The second goal is to utilise the developed model to understand how wear modes might change in the presence of bulk deformation of the work-piece material as a result of the metal forming operation. Finally, the effect of the work-piece material properties, including strain hardening, is examined to understand how these influence the wear modes and wear behaviour. The results show that both strain hardening and substrate deformation can change the critical attack angle at which the wedge formation regime is activated.

Keywords: finite element, pile-up, scratch test, wear mode

Procedia PDF Downloads 327
267 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method where the waste material is heated to 450°C in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH are released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), causing its calorific value to be too low for gasification. Since the gasification process occurs at 900°C and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis) and calorific value measurements (bomb calorimeter) for the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
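
The zero-dimensional equilibrium model described above can be sketched as a constrained Gibbs energy minimization (the Lagrange-multiplier formulation is equivalent to the constrained optimization below). The species set, the feed and the standard Gibbs energies are rough placeholder values, not the paper's inputs; a proper thermodynamic database should be used in practice:

import numpy as np
from scipy.optimize import minimize

species = ["CO", "CO2", "H2", "H2O", "CH4"]
# approximate standard Gibbs energies of formation near 1000 K (J/mol) --
# placeholder values only
g0 = np.array([-200.3, -395.9, 0.0, -192.6, 19.5]) * 1000.0
RT = 8.314 * 1000.0
# element balance matrix (rows: C, H, O)
A = np.array([[1, 1, 0, 0, 1],
              [0, 0, 2, 2, 4],
              [1, 2, 0, 1, 0]])
b = A @ np.array([1.0, 0.0, 1.0, 0.5, 0.2])   # elements in a hypothetical feed

def gibbs(n):
    n = np.maximum(n, 1e-12)
    return np.sum(n * (g0 / RT + np.log(n / n.sum())))   # ideal gas, P = 1 atm

res = minimize(gibbs, x0=np.full(5, 0.5), method="SLSQP",
               bounds=[(1e-10, None)] * 5,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print(dict(zip(species, res.x.round(4))))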

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 519
266 Prediction of Fluid Induced Deformation using Cavity Expansion Theory

Authors: Jithin S. Kumar, Ramesh Kannan Kandasami

Abstract:

Geomaterials are generally porous in nature due to the presence of discrete particles and interconnected voids. The porosity present in these geomaterials plays a critical role in many engineering applications such as CO2 sequestration, well bore strengthening, enhanced oil and hydrocarbon recovery, hydraulic fracturing, and subsurface waste storage. These applications involve solid-fluid interactions, which govern changes in porosity, which in turn affect the permeability and stiffness of the medium. Injecting fluid into geomaterials results in permeation, which exhibits small or negligible deformation of the soil skeleton, followed by cavity expansion/fingering/fracturing (different forms of instability) due to large deformation, especially when the flow rate is greater than the ability of the medium to permeate the fluid. The complexity of this problem increases as the geomaterial behaves like both a solid and a fluid under certain conditions. Thus, it is important to understand this multiphysics problem, in which, in addition to permeation, the elastic-plastic deformation of the soil skeleton plays a vital role during fluid injection. The phenomena of permeation and cavity expansion in porous media have been studied independently through extensive experimental and analytical/numerical models. The analytical models generally use Darcy/diffusion equations to capture the fluid flow during permeation, while elastic-plastic (Mohr-Coulomb and Modified Cam-Clay) models are used to predict the solid deformations. Hitherto, research has generally focused on modelling cavity expansion without considering the effect of the injected fluid coming into the medium. Very few studies have considered the effect of the injected fluid on the deformation of the soil skeleton; the porosity changes during fluid injection and the coupled elastic-plastic deformation are not clearly understood. In this study, the phenomena of permeation and instabilities such as cavity and finger/fracture formation will be quantified extensively by performing experiments using a novel experimental setup, in addition to utilizing image processing techniques. This experimental study will describe the fluid flow and soil deformation characteristics under different boundary conditions. Further, a well-refined coupled semi-analytical model will be developed to capture the physics involved in quantifying the deformation behaviour of geomaterials during fluid injection.
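
For the small-deformation limit mentioned above, the classical elastic solution for a pressurized cylindrical cavity in an infinite medium gives a closed-form first check: the radial displacement decays as 1/r from the cavity wall. The material and loading values below are hypothetical:

import numpy as np

G = 20e6          # shear modulus of the soil (Pa); hypothetical
a = 0.05          # cavity radius (m)
p = 200e3         # injection overpressure at the cavity wall (Pa)

r = np.linspace(a, 10 * a, 50)
u = p * a**2 / (2.0 * G * r)      # radial displacement, plane-strain elastic solution
sigma_r = -p * (a / r)**2         # radial stress change (compression negative)

print(f"cavity wall displacement: {u[0] * 1e3:.3f} mm")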

Keywords: solid-fluid interaction, permeation, poroelasticity, plasticity, continuum model

Procedia PDF Downloads 73
265 Educational Framework for Coaches on Injury Prevention in Adolescent Team Sports

Authors: Chantell Gouws, Lourens Millard, Anne Naude, Jan-Wessel Meyer, Brandon Stuwart Shaw, Ina Shaw

Abstract:

Background: Millions of South African youths participate in team sports, with netball and rugby being two of the largest worldwide. This increased participation and professionalism have resulted in an increase in the number of musculoskeletal injuries. Objective: This study examined the extent to which sport coaching knowledge translates into the injuries, and prevention of injuries, of adolescents participating in netball and rugby. Methods: Thirty-four South African sports coaches participated in the study. Eighteen netball coaches and 16 rugby coaches with varying levels of coaching experience were selected to participate. An adapted version of Nash and Sproule's questionnaire was used to investigate the coaches' knowledge with regard to sport-specific common injuries, injury prevention, fitness/conditioning, individual technique development, training programs, mental training, and preparation of players. The analysis of data was carried out using a number of different techniques outlined by Nash and Sproule (2012); these techniques were determined by the type of data. Descriptive data were used to provide statistical analysis. Quantitative data were used to determine the educational framework and knowledge of sports coaches on injury prevention. Numerical data were obtained through questions on sports injuries, as well as coaches' sports knowledge levels. Participants' knowledge was measured using a standardized scoring system. Results: For 0-4 years of netball coaching experience, 76.4% of the coaches had knowledge and experience and 33.3% appropriate first aid knowledge, while for 9-12 years and 13-16 years, 100% of the coaches had knowledge and experience and first aid knowledge. For 0-4 years of rugby coaching experience, 59.1% had knowledge and experience and 71% the appropriate first aid knowledge; for 17-20 years, 100% had knowledge and experience and first aid knowledge, while for 25 or more years, 45.5% had knowledge and experience. In netball, 90% of injuries were ankle injuries, followed by 70% for knee, 50% for shoulder, 20% for lower leg, and 15% for finger injuries. In rugby, 81% of injuries occurred at the knee, followed by 50% for the shoulder, 40% for the ankle, 31% for the head and neck, and 25% for hamstring injuries. Six hours of training resulted in a 13% chance of injury in netball and a 32% chance in rugby. For 10 hours of training, the injury prevalence was 10% in netball and 17% in rugby, while 15 hours resulted in an injury incidence of 58% in netball players and 25% in rugby players. Conclusion: This study highlights the need for coaches to improve their knowledge of injuries and injury prevention, along with factors that act as preventative measures and promote players' well-being.

Keywords: musculoskeletal injury, sport coaching, sport trauma

Procedia PDF Downloads 159
264 Proposed Design of an Optimized Transient Cavity Picosecond Ultraviolet Laser

Authors: Marilou Cadatal-Raduban, Minh Hong Pham, Duong Van Pham, Tu Nguyen Xuan, Mui Viet Luong, Kohei Yamanoi, Toshihiko Shimizu, Nobuhiko Sarukura, Hung Dai Nguyen

Abstract:

There is a great deal of interest in developing all-solid-state tunable ultrashort pulsed lasers emitting in the ultraviolet (UV) region for applications such as micromachining, investigation of charge carrier relaxation in conductors, and probing of ultrafast chemical processes. However, direct short-pulse generation is not as straightforward in solid-state gain media as it is for near-IR tunable solid-state lasers such as Ti:sapphire, due to the difficulty of obtaining continuous-wave laser operation, which is required for Kerr lens mode-locking schemes utilizing spatial or temporal Kerr-type nonlinearity. In this work, the transient cavity method, which was reported to generate ultrashort laser pulses in dye lasers, is extended to a solid-state gain medium. Ce:LiCAF was chosen among the rare-earth-doped fluoride laser crystals emitting in the UV region because of its broad tunability (from 280 to 325 nm) with enough bandwidth to generate 3-fs pulses, its sufficiently large effective gain cross section (6.0 × 10⁻¹⁸ cm²) favorable for oscillators, and its high saturation fluence (115 mJ/cm²). Numerical simulations are performed to investigate the spectro-temporal evolution of the broadband UV laser emission from Ce:LiCAF, represented as a system of two homogeneously broadened singlet states, by solving the rate equations extended to multiple wavelengths. The goal is to find the appropriate cavity length and Q-factor to achieve the optimal photon cavity decay time and pumping energy for resonator transients that will lead to picosecond UV laser emission from a Ce:LiCAF crystal pumped by the fourth harmonic (266 nm) of a Nd:YAG laser. Results show that a single picosecond pulse can be generated from a 1-mm, 1 mol% Ce³⁺-doped LiCAF crystal using an output coupler with 10% reflectivity (low Q) and an oscillator cavity that is 2 mm long (short cavity). This technique can be extended to other fluoride-based solid-state laser gain media.
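
The resonator-transient idea can be sketched with a minimal single-wavelength rate equation pair for the inversion and photon densities, using a short photon cavity decay time to mimic the low-Q, short cavity. Only the gain cross section is taken from the abstract; the lifetime, pump rate, seed and cavity decay time are hypothetical placeholders, and the sketch omits the paper's multi-wavelength treatment:

import numpy as np
from scipy.integrate import solve_ivp

sigma = 6.0e-18     # effective gain cross section (cm^2), from the abstract
tau = 25e-9         # upper-state lifetime (s); hypothetical
tau_c = 6e-12       # photon cavity decay time (s); short low-Q cavity, hypothetical
c = 3e10            # speed of light (cm/s)
Rp = 1e27           # pump rate density (cm^-3 s^-1) during the pump pulse; hypothetical

def rates(t, y):
    N, phi = y                                 # inversion and photon densities (cm^-3)
    pump = Rp if t < 5e-9 else 0.0             # 5 ns pump window
    gain = c * sigma * N * phi
    return [pump - N / tau - gain, gain - phi / tau_c]

sol = solve_ivp(rates, (0.0, 10e-9), [0.0, 1e3], max_step=1e-12)
i = int(np.argmax(sol.y[1]))
print(f"peak photon density {sol.y[1, i]:.2e} cm^-3 at t = {sol.t[i] * 1e9:.2f} ns")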

Keywords: rare-earth-doped fluoride gain medium, transient cavity, ultrashort laser, ultraviolet laser

Procedia PDF Downloads 354
263 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer process that occurs in the vehicle underhood region during the vehicle thermal soak phase. The heat retained from the soak period is beneficial to the cold start, with reduced friction loss, for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits in both CO₂ emission reduction and fuel economy. When a vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is a key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. Method development has been investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling approach for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved inherently transiently by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle test data for the key fluid (coolant, oil) and metal temperatures. The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data, and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and embedded within a holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and that the flow development during this stage is vital to accurately predicting the heat transfer coefficients for heat retention modelling. The developed method demonstrates software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfying accuracy and efficient computing time. The CAE method will allow the design of engine encapsulations for improving fuel consumption and reducing CO₂ emissions to be integrated in a timely and robust manner, aiding the development of low-carbon transport technologies.
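
The buoyancy-driven cool-down can be caricatured by a lumped-capacitance model in which the natural-convection coefficient scales with (ΔT)^0.25 and radiation is added. This is a stand-in for the coupled LBM/thermal solver, and all values are hypothetical:

import numpy as np
from scipy.integrate import solve_ivp

m_c, cp = 30.0, 900.0       # component mass (kg), specific heat (J/kg.K); hypothetical
A, C = 0.8, 1.8             # surface area (m^2), natural-convection constant
eps, sb = 0.85, 5.67e-8     # emissivity, Stefan-Boltzmann constant (W/m^2.K^4)
T_amb = 287.0               # ambient temperature (K), roughly the 14 degC test cell

def cooldown(t, T):
    h = C * max(T[0] - T_amb, 0.0) ** 0.25          # buoyancy-driven convection
    q = A * (h * (T[0] - T_amb) + eps * sb * (T[0]**4 - T_amb**4))
    return [-q / (m_c * cp)]

sol = solve_ivp(cooldown, (0.0, 9 * 3600.0), [380.0], max_step=60.0)
print(f"temperature after 9 h soak: {sol.y[0, -1] - 273.15:.1f} degC")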

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 152
262 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling

Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng

Abstract:

This paper developed a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy burden of traditional numerical models of regional weather and air pollution simulation, the lightweight proposed model can provide hourly forecasts from current observations of weather, air pollution and emissions from factories. The observations include wind speed, wind direction, relative humidity, temperature and others, and can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS quantitative industrial factories. This study completed a causal inference engine that gives air pollution forecasts for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km × 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The elaborated procedures to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspended Particulates, TSP) sensors and the forecasted air pollution map, this study also disclosed the converting mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, according to long-term data observation and calibration. These different time-series qualitative and quantitative data successfully enabled a practicable cloud-based causal inference engine for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to suppress their operations and reduce emissions in advance.
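
The particle tracking and random walk step for advection and diffusion can be sketched directly: particles are advected by the wind, given Gaussian random displacements, and binned onto a 1 km grid. The wind components and the eddy diffusivity below are hypothetical placeholders for the observation-driven values:

import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 20_000, 60.0, 180          # particles, time step (s), 3-hour run
u, v = 2.0, 0.5                           # wind components (m/s); hypothetical
K = 50.0                                  # eddy diffusivity (m^2/s); hypothetical
xy = np.zeros((n, 2))                     # all particles released at the stack

for _ in range(steps):
    xy += np.array([u, v]) * dt                           # advection by the wind
    xy += rng.normal(0.0, np.sqrt(2 * K * dt), xy.shape)  # random-walk diffusion

# bin onto a 1 km x 1 km grid to mimic the gridded forecast product
H, xe, ye = np.histogram2d(xy[:, 0], xy[:, 1], bins=40,
                           range=[[-2e3, 4e4], [-1e4, 1e4]])
print(f"peak cell holds {H.max():.0f} of {n} particles")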

Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT

Procedia PDF Downloads 83
261 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them a promising filler in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is not often reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from the van der Waals force. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but brings difficulties in mesh generation and also leads to high computational costs. Homogenized models that smear the fibers into the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained from mixing rules severely overestimates the stiffness of the composite, even when compared with the result of CZM with an artificially very strong interface. A mechanical model that can take into account the interface debonding and achieve accuracy comparable to CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential and polynomial, are considered. These studies indicate that the shape of the CZM constitutive law chosen does not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations; the analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behaviors of CNT/polymer composites.
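
The bilinear traction-separation law assumed in the analytical treatment can be written in a few lines; the peak traction and the two separation parameters below are hypothetical placeholders:

import numpy as np

def bilinear_traction(delta, delta0=1.0e-9, delta_f=1.0e-7, t_max=50e6):
    """Bilinear cohesive law: linear elastic up to (delta0, t_max),
    then linear softening to zero traction at delta_f."""
    delta = np.asarray(delta, dtype=float)
    t = np.where(delta <= delta0,
                 t_max * delta / delta0,
                 t_max * (delta_f - delta) / (delta_f - delta0))
    return np.clip(t, 0.0, None)          # fully debonded beyond delta_f

d = np.linspace(0.0, 1.2e-7, 7)           # separations (m)
print("traction (MPa):", (bilinear_traction(d) / 1e6).round(1))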

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 129
260 Preliminary WRF SFIRE Simulations over Croatia during the Split Wildfire in July 2017

Authors: Ivana Čavlina Tomašević, Višnjica Vučetić, Maja Telišman Prtenjak, Barbara Malečić

Abstract:

The Split wildfire on the mid-Adriatic coast in July 2017 is one of the most severe wildfires in Croatian history, given its size and unexpected fire behavior, and it is used in this research as a case study to run the Weather Research and Forecasting Spread Fire (WRF SFIRE) model. This coupled fire-atmosphere model was successfully run for the first time for a Croatian wildfire case. Verification of the coupled simulations was possible using a detailed reconstruction of the Split wildfire. Specifically, precise information on ignition time and location, together with mapped fire progressions and spotting within the first 30 hours of the wildfire, was used both to initialize the simulations and to evaluate the model's ability to simulate the fire's propagation and final fire scar. The preliminary simulations were obtained using high-resolution vegetation and topography data for the fire area, additionally interpolated to a fire grid spacing of 33.3 m. The results demonstrate that the WRF SFIRE model is able to work with real data from Croatia and produce adequate results for forecasting fire spread. As the model setup allows the energy fluxes between the fire and the atmosphere to be included or excluded, this capability was used to investigate possible fire-atmosphere interactions during the Split wildfire. Finally, the successfully coupled simulations provided the first numerical evidence that a wildfire in the Adriatic coastal region can modify the dynamical structure of the surrounding atmosphere, which agrees with observations from the fire grounds. This study demonstrates that the WRF SFIRE model has the potential for operational application in Croatia, with more accurate fire predictions possible in the future by inserting higher-resolution input data into the model without interpolation. Possible uses for fire management in Croatia include prediction of fire spread and intensity that may vary under changing weather conditions, available fuels and topography; planning effective and safe deployment of ground and aerial firefighting forces; preventing wildland-urban interface fires; effective planning of evacuation routes, etc. In addition, the WRF SFIRE model results from this research demonstrate that the model is valuable for fire weather research and education purposes, helping to better understand this hazardous phenomenon in Croatia.

Keywords: meteorology, agrometeorology, fire weather, wildfires, coupled fire-atmosphere model

Procedia PDF Downloads 88
259 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. The ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by means of a grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based solely on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performances of the SVM- and GMM-based classification showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
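
The two classification stages described above map onto standard tools. The sketch below tunes an RBF-kernel SVM by grid search with 10-fold cross-validation and then fits a GMM whose log-likelihood is thresholded in a Neyman-Pearson spirit; the four-dimensional features are synthetic stand-ins for the ST-T derived features:

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic stand-ins for ST-T features (pre-inflation vs occlusion states)
X_normal = rng.normal(0.0, 1.0, (200, 4))
X_isch = rng.normal(1.5, 1.2, (200, 4))
X = np.vstack([X_normal, X_isch])
y = np.r_[np.zeros(200, int), np.ones(200, int)]

# SVM with RBF kernel, hyperparameters tuned by grid search + 10-fold CV
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=10)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))

# GMM fitted on ischemic-state features only; segments whose log-likelihood
# falls below a threshold chosen for a target false-alarm rate are flagged
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_isch)
threshold = np.quantile(gmm.score_samples(X_isch), 0.05)   # 5% false-alarm target
flags = gmm.score_samples(X_normal) < threshold
print("flagged as outliers:", int(flags.sum()), "of", len(flags))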

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 160
258 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure

Authors: Kai Zhang, Xi Jiang

Abstract:

The effect of fuel variability on the premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. Uncertainties in the concentrations of fuel constituents such as H2, CO and CH4 may lead to unpredictable combustion performance, combustion instabilities and hot spots, which may deteriorate and damage the combustion hardware. Numerical modelling and simulations can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, but evaluating the effects of concentration variabilities is expensive. To be more specific, questions such as 'what is the burning velocity of bio-syngas at a specific equivalence ratio?' have been answered either experimentally or numerically, while questions such as 'what is the likelihood of a burning velocity when the precise concentrations of the bio-syngas compositions are unknown, but the concentration ranges are pre-described?' have not yet been answered. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel compositions. An efficient probabilistic UQ method based on Polynomial Chaos Expansion (PCE) techniques is employed in this study. The method relies on representing random variables (combustion performances) with orthogonal polynomials such as Legendre or Hermite polynomials. The PCE constructed via Galerkin projection provides easy access to global sensitivities such as main, joint and total Sobol indices. In this study, the impacts of fuel composition on the combustion (adiabatic flame temperature and laminar flame speed) of bio-syngas fuel mixtures are presented by invoking this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego mechanism. Guidance on reducing combustion instability from the upstream biomass gasification process is provided by quantifying the significant contributions of composition variations to the variance of the physicochemical properties of bio-syngas combustion. It was found that the flame speed is very sensitive to hydrogen variability in bio-syngas, and that reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of the methane concentration, although thought to be important, has limited impact on laminar flame instabilities, especially for lean combustion. Further studies on the UQ of the percentage concentration of hydrogen in bio-syngas can be conducted to guide the safer use of bio-syngas.
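
A scaled-down version of the PCE machinery can be shown for a single uniform input: project a model response onto Legendre polynomials by least squares and read the mean and variance off the coefficients using orthogonality. The "model" here is a hypothetical stand-in for the combustion code:

import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

def model(h2):
    """Hypothetical stand-in for the combustion solver: flame speed (m/s)
    as a smooth function of the H2 fraction."""
    return 0.4 + 1.8 * h2 + 0.9 * h2**2

# H2 fraction uncertain and uniform on [0.1, 0.3]; map to xi in [-1, 1]
h2 = rng.uniform(0.1, 0.3, 200)
xi = (h2 - 0.2) / 0.1
V = legendre.legvander(xi, 3)                    # Legendre basis up to degree 3
coef, *_ = np.linalg.lstsq(V, model(h2), rcond=None)

# orthogonality: E[P_k^2] = 1/(2k+1) on U(-1, 1), E[P_k] = 0 for k >= 1
norms = 1.0 / (2 * np.arange(4) + 1)
mean = coef[0]
var = np.sum(coef[1:]**2 * norms[1:])
print(f"PCE mean = {mean:.4f} m/s, std = {np.sqrt(var):.4f} m/s")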

Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification

Procedia PDF Downloads 273
257 Variable Renewable Energy Droughts in the Power Sector – A Model-based Analysis and Implications in the European Context

Authors: Martin Kittel, Alexander Roth

Abstract:

The continuous integration of variable renewable energy sources (VRE) into the power sector is required for decarbonizing the European economy. Power sectors thereby become increasingly exposed to weather variability, as the availability of VRE, i.e., mainly wind and solar photovoltaics, is not persistent. Extreme events, e.g., long-lasting periods of scarce VRE availability ('VRE droughts'), challenge the reliability of supply. Properly accounting for the severity of VRE droughts is crucial for designing a resilient renewable European power sector, and energy system modeling is used to identify such a design. Our analysis reveals the sensitivity of the optimal design of the European power sector to VRE droughts. We analyze how VRE droughts impact optimal power sector investments, especially in generation and flexibility capacity. We draw upon work that systematically identifies VRE drought patterns in Europe in terms of frequency, duration, and seasonality, as well as the cross-regional and cross-technological correlation of the most extreme drought periods. Based on that analysis, a selection of relevant historical weather years representing different grades of VRE drought severity is made. These weather years serve as input for the capacity expansion model of the European power sector used in this analysis (DIETER). We additionally conduct robustness checks, varying policy-relevant assumptions on capacity expansion limits, interconnections, and the level of sector coupling. Preliminary results illustrate how an imprudent selection of weather years may lead to underestimating the severity of VRE droughts, flawing modeling insights concerning the need for flexibility; sub-optimal European power sector designs vulnerable to extreme weather can result. Using relevant weather years that appropriately represent extreme weather events, our analysis identifies a resilient design of the European power sector. Although the scope of this work is limited to the European power sector, we are confident that our insights apply to other regions of the world with similar weather patterns. Many energy system studies still rely on one or a limited number of sometimes arbitrarily chosen weather years. We argue that the deliberate selection of relevant weather years is imperative for robust modeling results.
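
Identifying VRE droughts in an availability time series reduces to finding long runs below a threshold. The sketch below applies this to a synthetic AR(1) capacity factor series that mimics weather persistence; the threshold and minimum duration are hypothetical choices:

import numpy as np

rng = np.random.default_rng(3)
# synthetic hourly VRE capacity factors with AR(1) persistence (a stand-in
# for reanalysis-based availability data)
x = np.zeros(8760)
for t in range(1, 8760):
    x[t] = 0.97 * x[t - 1] + rng.normal(0.0, 0.03)
cf = np.clip(0.35 + x, 0.0, 1.0)

def find_droughts(cf, threshold=0.15, min_hours=24):
    """Return (start_hour, duration) of runs below threshold lasting >= min_hours."""
    below = np.r_[0, (cf < threshold).astype(int), 0]
    edges = np.diff(below)
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    return [(s, e - s) for s, e in zip(starts, ends) if e - s >= min_hours]

droughts = find_droughts(cf)
longest = max((d for _, d in droughts), default=0)
print(f"{len(droughts)} VRE droughts found; longest lasts {longest} h")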

Keywords: energy systems, numerical optimization, variable renewable energy sources, energy drought, flexibility

Procedia PDF Downloads 71
256 Numerical Modelling and Experiment of a Composite Single-Lap Joint Reinforced by Multifunctional Thermoplastic Composite Fastener

Authors: Wenhao Li, Shijun Guo

Abstract:

Carbon fibre reinforced composites are progressively replacing metal structures in modern civil aircraft, because composite materials offer a large potential for weight saving compared with metal. However, the weight saving achieved to date in composite structures is far less than the theoretical potential, due to many uncertainties in structural integrity and safety concerns. Unlike conventional metallic structures, composite components are bonded together along joints where structural integrity is a major concern. To ensure safety, metal fasteners are used to reinforce composite bonded joints. One solution for a significant weight saving of composite structures is to develop an effective technology of on-board Structural Health Monitoring (SHM) systems. By monitoring the real-life stress status of composite structures during service, the safety margin set in the structural design can be reduced with confidence. This provides a means of safeguard to minimize the need for programmed inspections and allows maintenance to be need-driven rather than usage-driven. The aim of this paper is to develop a smart composite joint. The key technology is a multifunctional thermoplastic composite fastener (MTCF). The MTCF will replace some of the existing metallic fasteners in the most critical locations distributed over the aircraft composite structures to reinforce the joints and form an on-board SHM network system. Each of the MTCFs will work as a unit of the AU (acousto-ultrasonic) and AE (acoustic emission) technology. The proposed MTCF technology has been patented and developed by Prof. Guo at Cranfield University, UK over the past few years. The manufactured MTCF has been successfully employed in a composite single-lap joint (SLJ). In terms of structural integrity, the hybrid SLJ reinforced by the MTCF achieves a 19.1% improvement in ultimate failure strength in comparison to the bonded SLJ. By increasing the diameter or rearranging the lay-up sequence of the MTCF, the hybrid SLJ reinforced by the MTCF is able to achieve an ultimate strength equivalent to that reinforced by a titanium fastener. The ultimate strength predicted in simulation is in good agreement with the test results. In terms of structural health monitoring, a signal from the MTCF was measured well before the mechanical failure load. This signal provides a warning of an initial crack in the joint which could not be detected by the strain gauge until the final failure.

Keywords: composite single-lap joint, crack propagation, multifunctional composite fastener, structural health monitoring

Procedia PDF Downloads 162
255 3D Simulation of Orthodontic Tooth Movement in the Presence of Horizontal Bone Loss

Authors: Azin Zargham, Gholamreza Rouhi, Allahyar Geramy

Abstract:

One of the most prevalent types of alveolar bone loss is horizontal bone loss (HBL), in which the bone height around teeth is reduced homogeneously. In the presence of HBL, the magnitudes of forces during orthodontic treatment should be adjusted according to the degree of HBL, such that the desired tooth movement can be obtained without further bone loss. In order to investigate the appropriate orthodontic force system in the presence of HBL, a three-dimensional numerical model capable of simulating orthodontic tooth movement was developed. The main goal of this research was to evaluate the effect of different degrees of HBL on long-term orthodontic tooth movement. Moreover, the effect of different force magnitudes on orthodontic tooth movement in the presence of HBL was studied. Five three-dimensional finite element models of a maxillary lateral incisor with 0 mm, 1.5 mm, 3 mm, 4.5 mm, and 6 mm of HBL were constructed. The long-term orthodontic tooth tipping movements over a 4-week period were obtained in an iterative process through external remodeling of the alveolar bone, using strains in the periodontal ligament as the mechanical stimulus for bone remodeling. In each iteration, the strains in the periodontal ligament under a 1-N tipping force were first calculated using finite element analysis; then, bone remodeling and the subsequent tooth movement were computed in post-processing software using a custom-written program. Incisal edge, cervical, and apical area displacements in the models with different alveolar bone heights (0, 1.5, 3, 4.5, 6 mm bone loss) in response to a 1-N tipping force were calculated. The maximum tooth displacement was found to be 2.65 mm at the top of the crown of the model with 6 mm bone loss. The minimum tooth displacement was 0.45 mm at the cervical level of the model with normal bone support. The tipping angles of the models in response to different tipping force magnitudes were also calculated for different degrees of HBL. The degree of tipping movement increased as the force level was increased, and this increase was more prominent in the models with smaller degrees of HBL. Using the finite element method and bone remodeling theories, this study indicated that in the presence of HBL, under the same load, long-term orthodontic tooth movement will increase. The simulation also revealed that even though tooth movement increases with increasing force, this increase is only prominent in the models with smaller degrees of HBL; tooth models with greater degrees of HBL are less affected by the magnitude of an orthodontic force. Based on our results, the applied force magnitude must be reduced in proportion to the degree of HBL.
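The iterative strain-driven remodeling loop can be sketched on a one-degree-of-freedom stand-in for the finite element model; the remodeling law, threshold, and constants below are illustrative assumptions, not the study's calibrated values.

```python
# One-degree-of-freedom sketch of the iterative load -> PDL strain -> bone
# remodeling loop described above. The remodeling law, threshold and constants
# are assumptions for illustration, not the study's calibrated FE model.
import numpy as np

def pdl_strain(force, stiffness):
    # Stand-in for the finite element step: PDL strain under the applied load.
    return force / stiffness

force = 1.0          # N, tipping force
stiffness = 50.0     # assumed effective stiffness of the bone support
threshold = 0.01     # assumed remodeling stimulus threshold
rate = 0.4           # assumed external remodeling rate constant
tipping = 0.0

for week in range(1, 5):                    # 4-week simulated treatment
    eps = pdl_strain(force, stiffness)
    stimulus = max(eps - threshold, 0.0)    # over-threshold strain drives remodeling
    stiffness *= 1.0 - rate * stimulus      # bone support degrades, tooth tips more
    tipping += eps
    print(f"week {week}: strain = {eps:.4f}, cumulative tipping = {tipping:.4f}")
# Starting from a lower stiffness (mimicking greater HBL) yields a larger
# displacement under the same 1-N force, matching the trend reported above.
```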

Keywords: bone remodeling, finite element method, horizontal bone loss, orthodontic tooth movement

Procedia PDF Downloads 341
254 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application

Authors: Jeff Moussodji, Dominique Drouin

Abstract:

The highly complex technology requirements of today’s integrated circuits (ICs) lead to the increased use of several material types, such as metal structures and brittle, porous low-k materials, which are used in both front-end-of-line (FEOL) and back-end-of-line (BEOL) processes for wafer manufacturing. In order to singulate chips from the wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of materials from the dicing street. The combination of laser-grooving and blade dicing reduces the potential risk of induced mechanical defects, such as micro-cracks and chipping, on the wafer top surface where circuitry is located. It seems, therefore, essential to have a fundamental understanding of the physics involved in laser-dicing in order to maximize control of these critical processes and reduce their undesirable effects on process efficiency, quality, and reliability. In this paper, the study was based on the convergence of two approaches, numerical and experimental, which allowed us to investigate the interaction of a nanosecond pulsed laser and BEOL wafer materials. To evaluate this interaction, several laser-grooved samples were compared with finite element modeling, in which three different aspects were considered: phase change, thermo-mechanical behaviour, and optically sensitive parameters. The mathematical model makes it possible to predict the groove profile (depth, width, etc.) of a single pulse or multiple pulses on BEOL wafer material. Moreover, the heat-affected zone and the thermo-mechanical stress can also be predicted as functions of the laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After model validation and calibration, a satisfying correlation between experimental and modeling results has been observed in terms of groove depth, width, and heat-affected zone. The study proposed in this work is a first step toward implementing a quick assessment tool for the design and debug of multiple laser-grooving conditions with limited experiments on hardware in industrial applications. More correlations and validation tests are in progress and will be included in the full paper.
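As a reduced illustration of the thermal part of such a model, the sketch below solves 1D transient heat conduction under a single nanosecond pulse with Beer-Lambert volumetric absorption, using explicit finite differences; all material and pulse parameters are assumed example values, not the calibrated multiphysics FEM of the paper.

```python
# Reduced 1D transient heat-conduction sketch for a single nanosecond pulse
# with Beer-Lambert volumetric absorption (explicit finite differences).
# Material and pulse parameters are assumed example values.
import numpy as np

nx, dx = 200, 20e-9                 # 4 um deep domain, 20 nm cells
k, rho, cp = 1.4, 2200.0, 750.0     # SiO2-like conductivity, density, heat capacity
alpha = k / (rho * cp)
dt = 0.4 * dx ** 2 / alpha          # explicit stability limit (factor < 0.5)

T = np.full(nx, 300.0)              # K
I0, delta = 1e11, 100e-9            # absorbed intensity (W/m^2), penetration depth
pulse = 10e-9                       # 10 ns pulse
x = np.arange(nx) * dx
t, T_peak = 0.0, 300.0
while t < 50e-9:
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    lap[0] = 2 * (T[1] - T[0]) / dx ** 2    # adiabatic surface boundary
    src = np.zeros(nx)
    if t < pulse:                           # volumetric absorption during pulse
        src = (I0 / delta) * np.exp(-x / delta) / (rho * cp)
    T += dt * (alpha * lap + src)
    t += dt
    T_peak = max(T_peak, T[0])
print(f"peak surface temperature ~ {T_peak:.0f} K")
```

A full laser-grooving model would add melting, ablation, and the thermo-mechanical and optical aspects listed above; this sketch only shows the conduction-plus-source core.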

Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling

Procedia PDF Downloads 208
253 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible and suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimation of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form, and, on the other hand, the strong stability it achieves for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
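The second-stage update can be illustrated with a minimal stochastic Ensemble Kalman Filter analysis step in which an ensemble of bed profiles is corrected from noisy free-surface observations; the linear observation operator and noise levels below are assumptions, whereas the actual method couples this update to the shallow water solver.

```python
# Minimal stochastic EnKF analysis step (a sketch under strong assumptions):
# update an ensemble of bed elevations from noisy free-surface observations
# through a *linear* observation operator. The real method couples this
# update to the shallow water solver in an adaptive loop.
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 50, 10, 40                          # bed cells, observations, members

b_true = 0.2 * np.sin(np.linspace(0, np.pi, n))   # 'unknown' topography
H = np.zeros((m, n))                              # observe every 5th cell
H[np.arange(m), np.arange(0, n, 5)] = 1.0
r = 0.01                                          # observation noise std
y = H @ b_true + rng.normal(0, r, m)

B = rng.normal(0.1, 0.05, (N, n))                 # prior ensemble of bed profiles
for _ in range(5):                                # outer assimilation loops
    A = B - B.mean(axis=0)                        # ensemble anomalies
    P = A.T @ A / (N - 1)                         # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r ** 2 * np.eye(m))
    # perturbed-observation update applied to every ensemble member
    B = B + (y + rng.normal(0, r, (N, m)) - B @ H.T) @ K.T
print("RMSE at observed cells:",
      np.sqrt(np.mean((B.mean(axis=0)[::5] - b_true[::5]) ** 2)))
```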

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 128
252 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level

Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown

Abstract:

‘Big data’ is a relatively new concept that describes data so large and complex that they exceed the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smartphone health applications. Health data are viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may reduce the capacity of health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds in initiatives designed to develop and capitalize on big data and on methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million-dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the essential record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
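To illustrate the idea of linking records without names, the sketch below implements one widely used privacy-preserving record linkage technique, Bloom-filter encoding of name bigrams compared via a Dice coefficient; it is a generic illustration, not the Centre for Data Linkage's actual implementation.

```python
# Sketch of one widely used privacy-preserving record-linkage technique:
# Bloom-filter encoding of name bigrams compared with a Dice coefficient.
# A generic illustration of linking without names, not the Centre for Data
# Linkage's actual implementation.
import hashlib

def bigrams(s):
    s = f"_{s.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(s, size=256, hashes=4):
    bits = set()
    for gram in bigrams(s):
        for k in range(hashes):
            h = int(hashlib.sha256(f"{k}:{gram}".encode()).hexdigest(), 16)
            bits.add(h % size)
    return bits

def dice(a, b):
    # Similarity of two encodings; the names themselves never leave the source.
    return 2 * len(a & b) / (len(a) + len(b))

print(dice(bloom("catherine smith"), bloom("katherine smyth")))  # high: likely match
print(dice(bloom("catherine smith"), bloom("john wong")))        # low: non-match
```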

Keywords: data integration, data linkage, health planning, health services research

Procedia PDF Downloads 215
251 Demarcating Wetting States in Pressure-Driven Flows by Poiseuille Number

Authors: Anvesh Gaddam, Amit Agrawal, Suhas Joshi, Mark Thompson

Abstract:

An increase in the surface-area-to-volume ratio with a decrease in the characteristic length scale leads to a rapid increase in the pressure drop across a microchannel. Texturing the microchannel surfaces reduces the effective surface area, thereby decreasing the pressure drop. Surface texturing introduces two wetting states: a metastable Cassie-Baxter state and a stable Wenzel state. Predicting the wetting transition in textured microchannels is essential for identifying the optimal parameters leading to maximum drag reduction. Optical methods allow visualization only in confined areas; therefore, obtaining whole-field information on the wetting transition is challenging. In this work, we propose a non-invasive method to capture wetting transitions in textured microchannels under flow conditions. To this end, we tracked the behavior of the Poiseuille number Po = f.Re (with f the friction factor and Re the Reynolds number) for a range of flow rates (5 < Re < 50), and different wetting states were qualitatively demarcated by observing the inflection points in the f.Re curve. Microchannels with both longitudinal and transverse ribs, with a fixed gas fraction (δ, the ratio of shear-free area to total area) and at different confinement ratios (ε, the ratio of rib height to channel height), were fabricated. The measured pressure drop values for all flow rates across the textured microchannels were converted into the Poiseuille number. The transient behavior of the pressure drop across the textured microchannels revealed the collapse of the liquid-gas interface into the gas cavities. Three wetting states were observed at ε = 0.65 for both longitudinal and transverse ribs, whereas an early transition occurred at Re ~ 35 for longitudinal ribs at ε = 0.5, due to spontaneous flooding of the gas cavities as the liquid-gas interface ruptured at the inlet. In addition, the pressure drop in the Wenzel state was found to be less than that in the Cassie-Baxter state. Three-dimensional numerical simulations confirmed the initiation of the completely wetted Wenzel state in the textured microchannels. Furthermore, laser confocal microscopy was employed to identify the location of the liquid-gas interface in the Cassie-Baxter state. In conclusion, the present method can overcome the limitations posed by existing techniques and conveniently capture the wetting transition in textured microchannels.
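The conversion from a measured pressure drop to the Poiseuille number is straightforward; the sketch below shows it for assumed example channel dimensions, flow rate, and water properties.

```python
# Sketch: converting a measured pressure drop into the Poiseuille number
# Po = f*Re used above to demarcate wetting states. Channel dimensions, flow
# rate and fluid properties are assumed example values.
rho, mu = 998.0, 1.0e-3            # water near 20 C (kg/m^3, Pa.s)
w, h, L = 500e-6, 100e-6, 20e-3    # channel width, height, length (m)
Q = 2.0e-9                         # flow rate (m^3/s)
dP = 900.0                         # measured pressure drop (Pa)

A = w * h                          # cross-section area
Dh = 2 * w * h / (w + h)           # hydraulic diameter of a rectangular section
u = Q / A                          # mean velocity
Re = rho * u * Dh / mu
f = dP * Dh / (L * 0.5 * rho * u ** 2)   # Darcy friction factor
print(f"Re = {Re:.1f}, f = {f:.2f}, Po = f*Re = {f * Re:.1f}")
# Repeating this over a range of Q and looking for inflections in Po vs. Re
# demarcates the Cassie-Baxter and Wenzel states as described above.
```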

Keywords: drag reduction, Poiseuille number, textured surfaces, wetting transition

Procedia PDF Downloads 160
250 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems including (but not limited to) dam-break over an erodible bed, recirculation currents, and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, sediment and fluid are able to transfer between layers. In the current study we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step, the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second step, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than comparable finite volume methods, and it also exhibits good shock capturing. Most entrainment and deposition equations use a bed-level concentration factor, which leads to inaccuracies in both near-bed concentration and total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. An additional advantage of this multilayer approach is the variation (absent from single-layer models) in bottom-layer fluid velocity: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break. In the dam-break simulation, as expected, the number of fluid layers utilised creates variation in the resultant bed profile, with more layers yielding a larger deviation in fluid velocity. These results showed a marked variation in erosion profiles from standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
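The hydrodynamic core of such a solver can be sketched in a few lines: a 1D shallow-water dam break solved with a Rusanov (local Lax-Friedrichs) finite volume flux. The sketch deliberately omits the multi-layer mass exchange and erodible bed that are the paper's contribution.

```python
# Minimal 1D shallow-water dam-break sketch with a Rusanov (local
# Lax-Friedrichs) finite-volume flux, without sediment layers. It shows the
# hydrodynamic core to which the paper adds mass exchange and bed erosion.
import numpy as np

g, nx, L = 9.81, 400, 10.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
h = np.where(x < L / 2, 2.0, 1.0)       # dam-break initial depth (m)
q = np.zeros(nx)                        # discharge h*u

def flux(h, q):
    u = q / h
    return np.array([q, q * u + 0.5 * g * h ** 2])

t, t_end = 0.0, 0.5
while t < t_end:
    c = np.abs(q / h) + np.sqrt(g * h)  # local wave speeds
    dt = 0.4 * dx / c.max()             # CFL condition
    U = np.array([h, q])
    F = flux(h, q)
    # Rusanov flux at the interfaces i+1/2
    a = np.maximum(c[:-1], c[1:])
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    h, q = U
    t += dt
print(f"depth behind the bore: {h[int(0.6 * nx)]:.3f} m")
```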

Keywords: erosion, finite volume method, sediment transport, shallow water equations

Procedia PDF Downloads 216
249 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities

Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun

Abstract:

The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential electric vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. The approach utilizes IoT devices equipped with infrared cameras to collect thermal images, together with household EV charging profiles from the database of the Thai utility, and transmits these data to a cloud database for comprehensive analysis. The methodology relies on advanced deep learning techniques, namely Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) algorithms, complemented by feature-based Gaussian mixture models for EV load profiling and event detection. This comprehensive method aids in identifying unique power consumption patterns among EV owners. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device; it also outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities, ensuring operational safety and efficiency while promoting sustainable energy management.
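A minimal version of the LSTM component can be sketched as a next-step forecaster whose prediction error flags anomalies; the synthetic charging series, network size, and threshold below are assumptions standing in for the utility's data and the paper's tuned models.

```python
# Sketch of the LSTM component: a small next-step forecaster whose prediction
# error flags anomalies. The synthetic charging series, network size and the
# 3x-median threshold are assumptions standing in for the utility's data.
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.arange(0, 200, dtype=torch.float32)
load = torch.sin(2 * torch.pi * t / 24) + 0.05 * torch.randn(200)  # daily cycle
load[150:153] += 2.0                                # injected charging anomaly

def windows(series, w=24):
    X = torch.stack([series[i:i + w] for i in range(len(series) - w)])
    return X.unsqueeze(-1), series[w:]              # (N, w, 1), (N,)

class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(1, 16, batch_first=True)
        self.head = nn.Linear(16, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

X, y = windows(load)
model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                                # full-batch training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = (model(X) - y).abs()
print("flagged steps:", (err > 3 * err.median()).nonzero().flatten().tolist())
```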

Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids

Procedia PDF Downloads 63
248 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a new medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI’s development, spurred by the confluence of factors including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, efficiency, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content, such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing drugs and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are mixed and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, abstract, relates to the idea and content of ethics; the second, practical, concerns its relationship with the law. Both set up models of social behaviour, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of the AI as a primary step towards the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of the AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the top legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 93
247 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations

Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer

Abstract:

In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The damping of these oscillations can be improved by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective solution to improve the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system. It utilizes the semiconductor excitation system connected to the rotor field excitation winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters, whose control structure and tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. This is the reason that advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work is focused on the model reference adaptive control approach. The control signal, which assures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. Adaptive gains are obtained as a combination of a 'proportional' term and an 'integral' term extended with a σ-term, which is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory realizations. Damping of the synchronous generator oscillations across the entire operating range was investigated. The obtained results show improved damping across the entire operating area and an increase in power system stability. The results of the presented work will aid the development of a model reference adaptive power system stabilizer that should be able to replace conventional stabilizers in power systems.
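The adaptive-gain structure (a proportional term plus a σ-extended integral term) can be illustrated on a first-order plant standing in for the synchronous generator; all plant, model, and gain values below are assumptions for illustration, not the stabilizer reported above.

```python
# Minimal MRAC sketch with sigma-modification on a first-order plant (a
# stand-in for the synchronous generator). Adaptive gains combine a
# proportional term with a sigma-extended integral term, as described above.
# All plant, model and gain values are assumptions for illustration.
import numpy as np

dt, T = 1e-3, 10.0
a, b = -1.0, 2.0                 # 'unknown' plant:  y' = a*y + b*u
am, bm = -4.0, 4.0               # reference model:  ym' = am*ym + bm*r
gp, gi, sigma = 5.0, 20.0, 0.1   # proportional/integral adaptation gains, leakage

y = ym = 0.0
ir = iy = 0.0                    # integral parts of the adaptive gains
for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 4 < 2 else -1.0     # square-wave reference
    e = y - ym                                # tracking error
    # adaptive gains: proportional term + sigma-extended integral term
    th_r = -gp * e * r + ir
    th_y = -gp * e * y + iy
    u = th_r * r + th_y * y
    ir += dt * (-gi * e * r - sigma * ir)     # sigma-term avoids gain divergence
    iy += dt * (-gi * e * y - sigma * iy)
    y += dt * (a * y + b * u)
    ym += dt * (am * ym + bm * r)
print(f"final tracking error |y - ym| = {abs(y - ym):.4f}")
```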

Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control

Procedia PDF Downloads 136