Search results for: linear equations
417 Shoreline Variation with Construction of a Pair of Training Walls, Ponnani Inlet, Kerala, India
Authors: Jhoga Parth, T. Nasar, K. V. Anand
Abstract:
An idealized definition of the shoreline is that it is the zone of coincidence of three spheres: the atmosphere, the lithosphere, and the hydrosphere. Despite its apparent simplicity, this definition is in practice a challenge to apply. In reality, the shoreline location deviates continually through time because of various dynamic factors such as wave characteristics, currents, coastal orientation and bathymetry, which make the shoreline volatile. This necessitates monitoring the shoreline on a temporal basis. Even if the shoreline's nature is understood at a particular coastal stretch, the trend need not be the same at another location, though it belongs to the same sea front. Shoreline change is hence a local phenomenon and has to be studied intensively, considering as many of the factors involved as possible. Erosion and accretion of sediment are such behaviours of a shoreline, which need to be quantified by comparison with preceding variations and understood before implementing any coastal projects. In recent years, the advent of the Global Positioning System (GPS) and Geographic Information Systems (GIS) has provided an emerging tool to quantify the intra- and inter-annual rate at which sediment is accreted or deposited, with advantages over conventional methods in terms of time taken and manpower. Remote sensing data, on the other hand, pave the way to acquire historical sets of data at a higher resolution where field data are unavailable. Short-term and long-term shoreline change can be accurately tracked and monitored using software residing in GIS - the Digital Shoreline Analysis System (DSAS) developed by the United States Geological Survey (USGS). In the present study, using DSAS, the End Point Rate (EPR) is calculated to analyze the intra-annual changes, and the Linear Regression Rate (LRR) is adopted to study inter-annual changes of the shoreline. The shoreline changes are quantified for the scenario during the construction of the breakwater in the Ponnani river inlet along the Kerala coast, India. Ponnani is a major fishing and landing center located at 10°47’12.81”N and 75°54’38.62”E in Malappuram district of Kerala, India. The rate of erosion and accretion is explored using satellite and field data. The full paper contains the rate of change of the shoreline, and its analysis provides an understanding of the behavior of the inlet at the study area during the construction of the training walls.
Keywords: DSAS, end point rate, field measurements, geo-informatics, shoreline variation
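As a rough illustration of the two DSAS statistics named above, the sketch below computes an End Point Rate and a Linear Regression Rate from shoreline positions measured along a single transect; the dates and distances are hypothetical sample values, and the actual DSAS tool applies these formulas transect by transect within ArcGIS.

```python
import numpy as np

# Hypothetical shoreline positions along one DSAS transect:
# survey years and signed distance (m) from the baseline.
years = np.array([2004.0, 2007.5, 2010.2, 2013.8, 2016.4])
dist_m = np.array([52.0, 48.5, 47.1, 43.0, 40.2])

# End Point Rate: net movement between the oldest and youngest
# shoreline divided by the elapsed time (m/year).
epr = (dist_m[-1] - dist_m[0]) / (years[-1] - years[0])

# Linear Regression Rate: slope of a least-squares fit through all
# shoreline positions (m/year), less sensitive to any single survey.
lrr, intercept = np.polyfit(years, dist_m, 1)

print(f"EPR = {epr:.2f} m/yr, LRR = {lrr:.2f} m/yr")  # negative => erosion
```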
Procedia PDF Downloads 257
416 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework
Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations, due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security, and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) has been proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model has been constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, thereby further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel path for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of CSoS-STRE are demonstrated through a case study. Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, which are based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.
Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles
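For readers unfamiliar with the resilience measure that such recovery optimization typically maximizes, the sketch below computes a common time-integral form: the normalized area under the network performance curve over an attack-and-recovery horizon. This generic definition and the sample values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def resilience(t: np.ndarray, performance: np.ndarray) -> float:
    """Normalized area under the performance curve P(t)/P(0)
    over the horizon [t0, tN] (1.0 means no degradation at all)."""
    p0 = performance[0]
    return np.trapz(performance / p0, t) / (t[-1] - t[0])

# Illustrative combat-network performance: attack at t=2, staged recovery.
t = np.array([0.0, 2.0, 2.1, 4.0, 6.0, 8.0, 10.0])
perf = np.array([100.0, 100.0, 45.0, 60.0, 80.0, 95.0, 100.0])

print(f"resilience = {resilience(t, perf):.3f}")
# A different recovery sequence changes the curve, and hence this score,
# which is what the recovery optimization searches over.
```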
Procedia PDF Downloads 15
415 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator
Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li
Abstract:
A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which can convert ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is based on the use of linear resonant devices such as a cantilever beam, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency. The limitation of this structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. The energy harvester consists of three thin cantilever beams arranged vertically, with neodymium (NdFeB) magnets at their free ends and a fixed base at the other end. The three cantilevers have different resonant frequencies by design, through different thicknesses, so that a similar advantage of multiple resonant frequencies as in a piezoelectric cantilever array structure is obtained. To achieve broadband energy harvesting, magnetic interaction is used to introduce nonlinear system stiffness to tune the resonant frequency to match the excitation. Since the three cantilever tips are all free and the magnetic force is distance-dependent, the resonant frequencies change in a complex manner with the vertical vibration of the free ends. Both a model and an experiment are presented. The electromechanically coupled lumped-parameter model is given, together with an electromechanical formulation and analytical expressions for the coupled nonlinear vibration response and voltage response. The entire structure is fabricated and mechanically attached to an electromagnetic shaker as a vibrating body via the fixed base, in order to couple the vibrations to the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) materials (model: M8514P2). The size of the cantilevers is 120 × 20 mm², and the thicknesses are 1 mm, 0.8 mm, and 0.6 mm, respectively. The prototype generator has a measured performance of 160.98 mW effective electrical power and 7.93 V DC output voltage at an excitation level of 10 m/s². A 130% increase in the operating bandwidth is achieved. This device is promising to support low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator
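A minimal sketch of the kind of electromechanically coupled lumped-parameter model described above is given below for a single base-excited cantilever; the parameter values, the cubic term standing in for the distance-dependent magnetic force, and the coupling form are assumptions for illustration, not the authors' identified model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed lumped parameters for one piezoelectric cantilever
m, c, k = 0.01, 0.05, 250.0      # mass (kg), damping (N s/m), stiffness (N/m)
k3 = 1.0e6                       # cubic "magnetic" stiffness (N/m^3), assumed
theta = 1.0e-4                   # electromechanical coupling (N/V)
Cp, R = 50e-9, 1.0e5             # piezo capacitance (F), load resistance (ohm)
A, w = 10.0, 2 * np.pi * 25.0    # base excitation: 10 m/s^2 at 25 Hz

def rhs(t, s):
    x, xdot, v = s
    base = A * np.sin(w * t)                   # base acceleration
    # m*x'' + c*x' + k*x + k3*x^3 + theta*v = -m*y''
    xddot = (-c * xdot - k * x - k3 * x**3 - theta * v) / m - base
    vdot = (theta * xdot - v / R) / Cp         # Kirchhoff law on the RC load
    return [xdot, xddot, vdot]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0], max_step=1e-4)
v = sol.y[2]
print(f"steady-state RMS voltage ~ {np.sqrt(np.mean(v[len(v)//2:]**2)):.3f} V")
```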
Procedia PDF Downloads 154
414 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not mask as an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. Analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation. Sensibly, any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
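As a brief illustration of the Chinese restaurant process mentioned above as a Dirichlet-process prior, the sketch below draws cluster (table) assignments sequentially; the concentration value and the number of customers are arbitrary choices for demonstration.

```python
import numpy as np

def crp_assignments(n: int, alpha: float, rng: np.random.Generator) -> list:
    """Seat n customers sequentially: an existing table k is chosen with
    probability n_k / (i + alpha), a new table with alpha / (i + alpha)."""
    tables = []        # tables[k] = number of customers at table k
    labels = []
    for i in range(n):
        probs = np.array(tables + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)   # open a new table (new mixture component)
        else:
            tables[k] += 1
        labels.append(k)
    return labels

rng = np.random.default_rng(0)
labels = crp_assignments(100, alpha=1.0, rng=rng)
print(f"{max(labels) + 1} components emerged without fixing their number")
```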
Procedia PDF Downloads 330
413 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations
Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer
Abstract:
In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The damping of the oscillations can be improved by reconfiguration of the network or replacement of generators, but such a solution is not economically reasonable. The only cost-effective solution to improve the damping of power oscillations is to use power system stabilizers. The power system stabilizer is a part of the synchronous generator control system. It utilizes the semiconductor excitation system connected to the rotor field excitation winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and the tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. This is the reason that advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work is focused on the use of the model reference adaptive control approach. The control signal, which assures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. Adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term. The σ-term is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory realizations. Damping of the synchronous generator oscillations in the entire operating range was investigated. The obtained results show improved damping in the entire operating area and an increase in power system stability. The results of the presented work will help in the development of the model reference power system stabilizer, which should be able to replace conventional stabilizers in power systems.
Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control
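A minimal sketch of the σ-modified adaptive law described above is shown for a first-order plant; the plant, reference model, and gain values are illustrative assumptions, not the stabilizer synthesis itself.

```python
import numpy as np

# Unknown first-order plant  y' = a*y + b*u  (only the sign of b assumed known)
a, b = 1.0, 2.0
am, bm = -4.0, 4.0          # stable reference model  ym' = am*ym + bm*r
gamma, sigma = 5.0, 0.05    # adaptation gain and sigma-leakage coefficient
dt, T = 1e-3, 10.0

y = ym = 0.0
ky = kr = 0.0               # adaptive feedback and feedforward gains
for n in range(int(T / dt)):
    r = 1.0 if (n * dt) % 4.0 < 2.0 else -1.0   # square-wave reference
    u = kr * r + ky * y
    e = y - ym                                   # tracking error
    # sigma-term extended "integral" adaptation (leakage bounds the gains)
    kr += dt * (-gamma * e * r - sigma * kr)
    ky += dt * (-gamma * e * y - sigma * ky)
    y += dt * (a * y + b * u)                    # forward-Euler integration
    ym += dt * (am * ym + bm * r)

print(f"final tracking error e = {e:.4f}")
```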
Procedia PDF Downloads 138
412 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress-tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function. But to our knowledge, this is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
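To make the aggregation step concrete, the sketch below combines stressed sub-module capital charges through a correlation matrix, in the square-root form used by the Solvency II standard formula; the correlation entries and sub-module values here are illustrative placeholders, not the regulatory parameters.

```python
import numpy as np

# Illustrative sub-module SCRs (interest, equity, property, spread, FX)
scr_sub = np.array([30.0, 80.0, 20.0, 25.0, 15.0])   # in currency units

# Placeholder correlation matrix between market-risk sub-modules
corr = np.array([
    [1.00, 0.50, 0.50, 0.50, 0.25],
    [0.50, 1.00, 0.75, 0.75, 0.25],
    [0.50, 0.75, 1.00, 0.50, 0.25],
    [0.50, 0.75, 0.50, 1.00, 0.25],
    [0.25, 0.25, 0.25, 0.25, 1.00],
])

# Standard-formula style aggregation: SCR = sqrt(x^T . Corr . x)
scr_market = float(np.sqrt(scr_sub @ corr @ scr_sub))
print(f"market SCR = {scr_market:.1f}")
# The non-smooth max() operators inside the sub-module stress scenarios are
# what make the full criterion non-convex and non-differentiable.
```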
Procedia PDF Downloads 117
411 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety regulations such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g. pressure, velocities, flow heights, runout lengths, etc.) to be predicted. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid-dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there exist high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations into an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility; hence, the model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations, their assets and drawbacks, potentials for improvement, and their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is drawn to the experts' opinion regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to which degree a simulation result influences decision making for a hazard assessment. A discrepancy could be found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, among others, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the manner of modeling could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.
Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
Procedia PDF Downloads 326
410 Application of Carbon Nanotubes as Cathodic Corrosion Protection of Steel Reinforcement
Authors: M. F. Perez, Ysmael Verde, B. Escobar, R. Barbosa, J. C. Cruz
Abstract:
Reinforced concrete is one of the most important materials in the construction industry. However, in recent years the durability of concrete structures has been a worrying problem, mainly due to corrosion of the reinforcing steel; the consequences of corrosion in all cases lead to a shortening of the life of the structure and a decrease in quality of service. Since the emergence of this problem, different methods and techniques have been implemented to reduce corrosion damage to reinforcing steel in concrete structures, such as the use of polymeric materials as coatings for the steel rod and inhibitors added to the concrete during mixing, among others, each presenting different limitations in its application. For this reason, a method that has proved effective has been used: cathodic protection. Given the properties attributed to carbon nanotubes (CNTs), these could act as cathodic corrosion protection. A three-electrode electrochemical cell was assembled, with carbon steel as the working electrode, a saturated calomel electrode (SCE) as the reference electrode, and a graphite rod as the counter electrode to close the system. The samples were subjected to a cycling process in order to compare the corrosion performance of a CNT-based coating with that of a commercial anticorrosive paint. The samples were tested at room temperature using an electrolyte of NaCl and NaOH simulating the typical pH of concrete, ranging from 12.6 to 13.9. Three test samples of steel rod were prepared: blank, with commercial anticorrosive paint, and with the CNT-based coating, delimiting the working area to a section of 0.71 cm². Cyclic voltammetry, linear voltammetry, and electrochemical impedance spectroscopy tests were performed on each of the three samples, with a potential window of 0.7 to -1.7 V vs SCE at scan rates of 50 mV/s and 100 mV/s. The impedance values were obtained by applying a sine wave of amplitude 50 mV in a frequency range of 100 kHz to 100 MHz. The results obtained in this study show that the CNT-based coating applied to the steel rod considerably decreased the corrosion rate compared to the commercial coating of anticorrosive paint, as the Ecorr increased over the course of the cycling process. The samples in all three cases were observed by light microscopy throughout the cycling process, and micrographic analysis was performed using scanning electron microscopy (SEM). The electrochemical measurements show that the application of the coating containing carbon nanotubes on the surface of the steel rod greatly increases the corrosion resistance, compared to the commercial anticorrosive coating.
Keywords: anticorrosive, carbon nanotubes, corrosion, steel
Procedia PDF Downloads 477
409 Implementation Of Evidence Based Nursing Practice And Associated Factors Among Nurses Working In Jimma Zone Public Hospitals, Southwest Ethiopia
Authors: Dawit Hoyiso, Abinet Arega, Terefe Markos
Abstract:
Background: In spite of the various programs and strategies to promote the use of research findings, there is still a gap between theory and practice. Differences in outcomes, health inequalities, and poorly performing health services continue to present a challenge to all nurses. A number of studies from various countries have reported that nurses' experience of evidence-based practice is low. In Ethiopia, there is an information gap on the extent of evidence-based nursing practice and its associated factors. Objective: The study aims to assess the implementation of evidence-based nursing practice and associated factors among nurses in Jimma zone public hospitals. Method: An institution-based cross-sectional study was conducted from March 1-30, 2015. A total of 333 sampled nurses for the quantitative part and 8 key informants for in-depth interviews were involved in the study. A semi-structured questionnaire was adapted from Funk's BARRIER scale and Friedman's test. Multivariable linear regression was used to determine the significance of associations between dependent and independent variables. A pretest was done on 17 nurses of Bedele hospital. Ethical approval was secured. Result: Of 333 distributed questionnaires, 302 were completed, giving a 90.6% response rate. Of the 302 participants, 245 were involved in EBP activities to different levels (from seldom to often). Forty-five (18.4%) of the respondents had implemented evidence-based practice at a low level (sometimes), one hundred and three (42%) at a medium level, and ninety-seven (39.6%) at a high level (often). The greatest perceived barrier was setting characteristics (mean score = 26.60 ± 7.08). Knowledge about research evidence was positively associated with implementation of evidence-based nursing practice (β = 0.76, P = 0.008). Similarly, the place where the respondent graduated was positively associated with implementation of evidence-based nursing practice (β = 2.270, P = 0.047). The availability of information resources was also positively associated with implementation of evidence-based practice (β = 0.67, P = 0.006). Conclusion: Although a large portion of the nurses in this study were involved in evidence-based practice, only a small number implemented it frequently. Evidence-based nursing practice was positively associated with knowledge of research, the place where respondents graduated, and the availability of information resources. Organizational factors were found to be the greatest perceived barrier. Intervention programs by stakeholders on awareness creation, training, resource provision, and curriculum issues to improve the implementation of evidence-based nursing practice are recommended.
Keywords: evidence based practice, nursing practice, research utilization, Ethiopia
Procedia PDF Downloads 95
408 Riverine Urban Heritage: A Basis for Green Infrastructure
Authors: Ioanna H. Lioliou, Despoina D. Zavraka
Abstract:
The radical reformation that Greek urban space has undergone over the last century, due to socio-historical developments, technological development, and political-geographic factors, has left its imprint on the urban landscape. While the big cities struggle to regain urban landscape balance, small towns are considered to offer high-quality lifescapes, ensuring sustainable development potential. However, their unplanned urbanization process has led to the loss of significant areas of nature, a lack of essential infrastructure, a chaotic built environment, incompatible land uses, and a loss of urban cohesiveness. Natural environment reference points, such as springs, streams, rivers, forests, and suburban greenbelts, seem to be detached from urban space, while the public, open, and green spaces, unequally distributed in the built environment, are no longer able to offer a complete experience of nature in the city. This study focuses on the Greek mainland and the small town of Elassona, and aims to restore spatial coherence between the town's homonymous river and its urban surroundings. The existence of a linear aquatic ecosystem is considered a precious greenway, also referred to as a blueway, able to initiate natural penetrations and the empowerment of ecosystems. The integration of disconnected natural ecosystems forms the basis of a strategic intervention scheme, where the river becomes the urban integration tool/feature, constituting the main urban corridor and an indispensable part of a wider green network that connects open and green spaces, ensuring the function of all the established networks (transportation, commercial, social) of the town. The proposed intervention introduces a green network highlighting the old stone bridge at the 'entrance' of the river into the town and expanding throughout the town with strategic uses and activities, providing accessibility for all users. The methodology used is based on the collection of design tools used in related urban river-design interventions around the world. The reinstallation/reactivation of the balance between natural and urban landscape, besides its environmental benefits, contributes decisively to the projection of an urban green identity and to the re-enhancement of lifescape quality and social interaction.
Keywords: green network, rehabilitation scheme, urban landscape, urban streams
Procedia PDF Downloads 280
407 Transport of Inertial Finite-Size Floating Plastic Pollution by Ocean Surface Waves
Authors: Ross Calvert, Colin Whittaker, Alison Raby, Alistair G. L. Borthwick, Ton S. van den Bremer
Abstract:
Large concentrations of plastic have polluted the seas in the last half century, with harmful effects on marine wildlife and potentially on human health. Plastic pollution will have lasting effects because it is expected to take hundreds or thousands of years for plastic to decay in the ocean. The question arises of how waves transport plastic in the ocean. The predominant motion induced by waves creates ellipsoidal orbits. However, these orbits do not close, resulting in a drift. This is defined as Stokes drift. If a particle is infinitesimally small and of the same density as water, it will behave exactly as the water does, i.e., as a purely Lagrangian tracer. However, as the particle grows in size or changes density, it will behave differently. The particle will then have its own inertia, the fluid will exert drag on the particle because there is relative velocity, and it will rise or sink depending on its density and on whether it is at the free surface. Previously, plastic pollution has always been considered purely Lagrangian. However, the steepness of waves in the ocean is small, normally about α = k₀a = 0.1 (where k₀ is the wavenumber and a is the wave amplitude); this means that the mean drift flows are of the order of ten times smaller than the oscillatory velocities (Stokes drift is proportional to steepness squared, whilst the oscillatory velocities are proportional to the steepness). Thus, the particle motion must include the forces of the full motion, oscillatory and mean flow, as well as a dynamic buoyancy term to account for the free surface, to determine whether inertia is important. Tracking the motion of a floating inertial particle under wave action requires the fluid velocities, which form the forcing, and the full equations of motion of a particle to be solved. Starting from the equation of motion of a sphere in unsteady flow with viscous drag, terms can then be added to better model floating plastic: a dynamic buoyancy term to model a particle floating on the free surface, quadratic drag for larger particles, and a slope-sliding term. Using perturbation methods to order the equation of motion into sequentially solvable parts allows a parametric equation for the transport of inertial finite-size floating particles to be derived. This parametric equation can then be validated using numerical simulations of the equation of motion and flume experiments. This paper presents a parametric equation for the transport of inertial floating finite-size particles by ocean waves. The equation shows an increase in Stokes drift for larger, less dense particles. The equation has been validated using numerical solutions of the equation of motion and laboratory flume experiments. The difference between the particle transport equation and a purely Lagrangian tracer is illustrated using world maps of the induced transport. This parametric transport equation would allow ocean-scale numerical models to include the inertial effects of floating plastic when predicting or tracing the transport of pollutants.
Keywords: perturbation methods, plastic pollution transport, Stokes drift, wave flume experiments, wave-induced mean flow
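For orientation, the sketch below evaluates the classical second-order deep-water Stokes drift for a perfectly Lagrangian surface tracer - the baseline that the inertial, finite-size corrections of this work modify; the wave parameters are arbitrary examples consistent with the steepness quoted above.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def stokes_drift(a: float, k: float, z: float = 0.0) -> float:
    """Deep-water Stokes drift u_s = omega * k * a^2 * exp(2 k z)
    for amplitude a (m), wavenumber k (1/m), and depth z <= 0 (m)."""
    omega = np.sqrt(g * k)          # deep-water dispersion relation
    return omega * k * a**2 * np.exp(2.0 * k * z)

k = 2 * np.pi / 100.0               # 100 m wavelength
a = 0.1 / k                         # steepness k*a = 0.1, as in the abstract
u_orb = np.sqrt(g * k) * a          # leading-order orbital velocity scale

print(f"surface Stokes drift = {stokes_drift(a, k):.3f} m/s")
print(f"orbital velocity     = {u_orb:.3f} m/s (about 10x larger)")
```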
Procedia PDF Downloads 121
406 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears
Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally
Abstract:
Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack. Early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, to analyze the vibration behavior of gears, different approaches have been tried based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that could be used to simulate the effect of tooth cracks on the resulting vibrations and hence to permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only one single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped-parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including variable gear mesh stiffness and the effect of friction, but ignores the lubrication effect. The vibration simulation results of the gearbox were obtained via Matlab and Simulink. The results were found to be consistent with the results from previously published works. The effect of a single crack at different severity levels was studied, and very similar changes in the total mesh stiffness and the vibration response were observed and compared to what has been found in previous studies. The effect of the crack length on various statistical time-domain parameters was considered, and the results show that these parameters were not equally sensitive to the crack percentage. Multiple cracks were then introduced at different locations, and the vibration response and the statistical parameters were obtained.
Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection
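As an example of the statistical time-domain parameters whose crack sensitivity is compared above, the sketch below computes a few indicators commonly applied to gearbox vibration records; the synthetic signal is a stand-in for the simulated acceleration output, not the authors' data.

```python
import numpy as np
from scipy.stats import kurtosis

def time_domain_features(x: np.ndarray) -> dict:
    """Common condition indicators for a vibration record x."""
    rms = np.sqrt(np.mean(x**2))
    return {
        "rms": rms,
        "peak": np.max(np.abs(x)),
        "crest_factor": np.max(np.abs(x)) / rms,
        "kurtosis": kurtosis(x, fisher=False),  # 3.0 for a Gaussian signal
    }

# Stand-in gear-mesh signal: a sinusoid plus periodic impulses mimicking
# a cracked tooth entering mesh once per revolution.
t = np.arange(0, 1.0, 1e-4)                 # 1 s at 10 kHz
x = np.sin(2 * np.pi * 300 * t)
x[::500] += 4.0                             # impulses every 0.05 s
print(time_domain_features(x))
```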
Procedia PDF Downloads 211
405 The Effect of Seated Distance on Muscle Activation and Joint Kinematics during Seated Strengthening in Patients with Stroke with Extensor Synergy Pattern in the Lower Limbs
Authors: Y. H. Chen, P. Y. Chiang, T. Sugiarto, I. Karsuna, Y. J. Lin, C. C. Chang, W. C. Hsu
Abstract:
Task-specific training with intense practice of functional tasks has been emphasized in approaches to motor rehabilitation in patients with hemiplegic stroke. Although reciprocal actions may increase demands on motor control during seated stepping exercise, motor control is not explicitly trained when emphasis and instruction focus on traditional strengthening. Apart from cycling and treadmill exercise, various forms of seated exercisers are becoming available for lower extremity exercise. The benefits of seated exercisers have so far been examined mainly with respect to the cardiopulmonary system. Thus, the aim of the current study was to investigate the effect of seated distance on muscle activation during seated strengthening in patients with stroke with an extensor synergy pattern in the lower extremities. Electrodes were placed on the surface of lower limb muscles, including the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF), and gastrocnemius (GT) of both sides. Maximal voluntary contractions (MVC) of the muscles were obtained to normalize the EMG amplitude obtained during dynamic trials, with the analog raw data digitized at a sampling frequency of 2000 Hz, fully rectified, and linear-enveloped. The movement cycle was separated into two phases: pushing (PP) and return (RP). Integrated EMG (iEMG) was then used to quantify the level of activation during each of the phases. Subjects performed strengthening with moderate resistance at a speed of 60 rpm at two different distances: short (D1) and long (D2). The results showed greater iEMG in RF and smaller iEMG in VL and BF, with an obvious increase in the hip flexion range of motion, in the D1 condition. On the contrary, in the D2 condition, no significant involvement of RF was found during PP, while a greater level of muscular activation in VL and BF was found during RP. In addition, greater hip internal rotation was observed in the D2 condition. In patients with stroke with abnormal tone revealed by extensor synergy in the lower extremities, a shorter seated distance is suggested to facilitate hip flexor muscle activation while avoiding the induction of hyperextensor tone, which may prevent a smooth repetitive motion. Repetitive muscular contraction exercise of the hip flexors may be helpful for further gait training, as it may assist hip flexion during the swing phase of walking.
Keywords: seated strengthening, patients with stroke, electromyography, synergy pattern
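A minimal sketch of the EMG conditioning chain described above (rectification, linear envelope, MVC normalization, and phase-wise iEMG) is given below; the filter cut-off and the synthetic signal are assumptions for illustration, not the study's processing settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                                   # sampling frequency (Hz)

def linear_envelope(raw: np.ndarray, fc: float = 6.0) -> np.ndarray:
    """Full-wave rectify, then zero-lag low-pass filter (linear envelope)."""
    b, a = butter(4, fc / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(raw))

def iemg(envelope_mvc: np.ndarray) -> float:
    """Integrated EMG (%MVC*s) over one phase of the movement cycle."""
    return np.trapz(envelope_mvc, dx=1.0 / fs)

rng = np.random.default_rng(1)
raw = rng.normal(0.0, 0.2, int(fs)) * np.hanning(int(fs))  # synthetic burst
mvc_amplitude = 1.0                                        # from an MVC trial

env = linear_envelope(raw) / mvc_amplitude * 100.0         # %MVC
push, ret = env[: len(env) // 2], env[len(env) // 2 :]     # PP / RP split
print(f"iEMG PP = {iemg(push):.1f}, iEMG RP = {iemg(ret):.1f} (%MVC*s)")
```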
Procedia PDF Downloads 214
404 The Role of Metaheuristic Approaches in Engineering Problems
Authors: Ferzat Anka
Abstract:
Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and cause inefficient use of resources. In particular, different approaches may be required in solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve. Such problems are called Nondeterministic Polynomial time hard (NP-hard) in the literature. The main reasons for recommending different metaheuristic algorithms for various problems are the use of simple concepts, the use of simple mathematical equations and structures, the use of non-derivative mechanisms, the avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, this approach can easily be embedded even in many hardware devices. Accordingly, it can also be used in trend application areas such as IoT, big data, and parallel structures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for solving large-scale optimization problems. This study is focused on a new metaheuristic method that has been merged with a chaotic approach. It is based on chaos theory and helps relevant algorithms to improve the diversity of the population and their convergence speed. This approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced metaheuristic algorithm inspired by nature. This algorithm identifies four types of chimpanzee groups: attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the various intelligence and sexual motivations of chimpanzees. However, ChOA falls short in convergence rate and in escaping local optimum traps when solving high-dimensional problems. Although it and some of its variants use some strategies to overcome these problems, these are observed to be insufficient. Therefore, in this study, a newly expanded variant is described. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of the search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy in multidimensional problems, aiming at success in solving global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of the search agents; 3) it achieves success in solving global, complex, and constrained problems; and 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, an improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems
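The sketch below illustrates the chaotic ingredient described above: a logistic map replaces one pseudo-random coefficient in a simplified ChOA-style position update. The update form is a condensed reading of the published ChOA equations, and all coefficient values are illustrative; it is not the full Ex-ChOA.

```python
import numpy as np

def logistic_map(x: float) -> float:
    """Chaotic logistic map on (0, 1); deterministic but non-repeating."""
    return 4.0 * x * (1.0 - x)

rng = np.random.default_rng(2)
dim = 5
x_prey = np.zeros(dim)                 # assume the optimum sits at the origin
x = rng.uniform(-10, 10, dim)          # one search agent (chimp)
m = 0.7                                # chaotic coefficient, seeded arbitrarily

for t in range(200):
    f = 2.5 * (1 - t / 200)            # driving coefficient decays to 0
    a = 2 * f * rng.random(dim) - f
    c = 2 * rng.random(dim)
    m = logistic_map(m)                # chaotic value feeds the update
    d = np.abs(c * x_prey - m * x)     # distance to the best-known solution
    x = x_prey - a * d                 # ChOA-style position update

print(f"distance to optimum after 200 iterations: {np.linalg.norm(x):.2e}")
```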
Procedia PDF Downloads 77
403 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
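A minimal sketch of the Random Forest side of this workflow is shown below; the feature names and the data file are hypothetical placeholders, since the study's activity-level dataset is not public.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical activity-level dataset; all column names are assumptions.
df = pd.read_csv("activities.csv")
features = ["planned_cost", "scope_change_count", "material_delay_days",
            "crew_size", "activity_duration_days"]
X, y = df[features], df["cost_overrun_pct"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_tr, y_tr)

print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
# Feature importances point at cost drivers such as scope changes or delays.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```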
Procedia PDF Downloads 54
402 Influence of High-Resolution Satellites Attitude Parameters on Image Quality
Authors: Walid Wahballah, Taher Bazan, Fawzy Eltohamy
Abstract:
One of the important functions of the satellite attitude control system is to provide the required pointing accuracy and attitude stability for optical remote sensing satellites to achieve good image quality. Although offering noise reduction and increased sensitivity, the time delay and integration (TDI) charge-coupled devices (CCDs) utilized in high-resolution satellites (HRS) are prone to introduce large amounts of pixel smear due to instability of the line of sight. During on-orbit imaging, as a result of the Earth's rotation and satellite platform instability, the moving direction of the TDI-CCD linear array and the imaging direction of the camera become different. The speed of the image moving on the image plane (focal plane) represents the image motion velocity, whereas the angle between the two directions is known as the drift angle (β). The drift angle occurs due to the rotation of the Earth around its axis during satellite imaging, affecting the geometric accuracy and, consequently, causing image quality degradation. Therefore, the image motion velocity vector and the drift angle are two important factors used in the assessment of the image quality of TDI-CCD based optical remote sensing satellites. A model for estimating the image motion velocity and the drift angle in HRS is derived. The six satellite attitude control parameters represented in the derived model are the roll angle φ, pitch angle θ, yaw angle ψ, roll angular velocity φ̇, pitch angular velocity θ̇, and yaw angular velocity ψ̇. The influence of these attitude parameters on the image quality is analyzed by establishing a relationship between the image motion velocity vector, the drift angle, and the six satellite attitude parameters. The influence of the satellite attitude parameters on the image quality is assessed by the presented model in terms of the modulation transfer function (MTF) in both the cross- and along-track directions. Three different cases representing the effect of pointing accuracy (φ, θ, ψ) bias are considered, using four different sets of typical pointing accuracy values, while the satellite attitude stability parameters are ideal. In the same manner, the influence of satellite attitude stability (φ̇, θ̇, ψ̇) on image quality is analysed for ideal pointing accuracy parameters. The results reveal that cross-track image quality is influenced seriously by the yaw angle bias and the roll angular velocity bias, while along-track image quality is influenced only by the pitch angular velocity bias.
Keywords: high-resolution satellites, pointing accuracy, attitude stability, TDI-CCD, smear, MTF
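To connect the smear discussion to the MTF assessment, the sketch below evaluates the standard linear-smear MTF for a smear extent caused by a residual drift angle over the TDI integration; the stage count and angle are illustrative numbers, not the paper's derived model.

```python
import numpy as np

def smear_mtf(f: np.ndarray, d: float) -> np.ndarray:
    """MTF of linear image smear of extent d (pixels), f in cycles/pixel:
    MTF(f) = |sinc(f * d)|  (np.sinc already includes the factor pi)."""
    return np.abs(np.sinc(f * d))

# Illustrative mismatch: residual cross-track motion from a drift angle
stages = 32                              # TDI stages (pixels integrated)
beta = np.deg2rad(0.2)                   # uncompensated drift angle (rad)
d_cross = stages * np.tan(beta)          # cross-track smear in pixels

f_nyquist = 0.5                          # Nyquist frequency (cycles/pixel)
print(f"smear = {d_cross:.2f} px, "
      f"MTF at Nyquist = {smear_mtf(np.array([f_nyquist]), d_cross)[0]:.3f}")
```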
Procedia PDF Downloads 402
401 The Impact of the Variation of Sky View Factor on Landscape Degree of Enclosure of Urban Blue and Green Belt
Authors: Yi-Chun Huang, Kuan-Yun Chen, Chuang-Hung Lin
Abstract:
The urban green belt and blue belt are part of the city landscape and important constituent elements of the urban environment and appearance. The Hsinchu East Gate Moat, situated in the center of the city, not only has a wealth of historical and cultural resources but also combines green belt and blue belt qualities at the same time. The moat runs more than a thousand meters through the vital green and blue belts of the downtown, with each section presenting different qualities from south to north. The water area and the surrounding green belt spread in a linear, banded pattern. The water body and the rich, diverse river banks form an urban green belt of rich layers. The watercourse with its green belt design lets users connect with the blue belt in different ways; therefore, the integration of the East Gate and the moat has become one of the unique urban landscapes in Taiwan. The study is based on the fact-finding case of the Hsinchu East Gate Moat, situated in northern Taiwan, to research the relationship between the variation of the sky view factor (SVF) of the city and the spatial sequence of the urban green and blue belt landscape, together with visual analysis by constituent cross-sections, and then to compare the influence of different leaf area indices - variable ecological factors - on the degree of enclosure. We surveyed the landscape design of the open space and measured the existing structural features of the plant canopy, which include the height of plants and branches, the crown diameter, and the diameter at breast height, through Geographic Information System (GIS) diagrams and on-the-spot measurement. The north and south districts of the blue and green belt areas are divided into 20-meter units from the East Gate Roundabout as the epicenter, and survey points are set up to measure the SVF above each point; quantitative analysis of the data is then carried out to calculate the open landscape degree of enclosure. The results can serve as a reference for the composition of future river landscapes and for the practical operation of dynamic space planning of blue and green belt landscapes.
Keywords: sky view factor, degree of enclosure, spatial sequence, leaf area indices
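A compact sketch of one common way to compute the sky view factor at a survey point is given below: the hemisphere above the point is divided into zenith-angle rings, and the visible-sky fraction of each ring is weighted by its contribution to the view factor. The ring fractions here are invented sample values; in practice they would come from fisheye imagery or GIS canopy/building models.

```python
import numpy as np

def sky_view_factor(visible_fraction: np.ndarray) -> float:
    """SVF from n equal-width zenith-angle rings; visible_fraction[i] is the
    unobstructed share of ring i (0 = fully blocked ... 1 = fully open)."""
    n = len(visible_fraction)
    dtheta = (np.pi / 2) / n
    theta = (np.arange(n) + 0.5) * dtheta        # ring-center zenith angles
    weights = np.sin(2 * theta) * dtheta         # ring view-factor weights
    return float(np.sum(weights * visible_fraction))  # weights sum to ~1

# Sample point: sky open near the zenith, trees/buildings near the horizon
vis = np.array([1.0, 1.0, 0.9, 0.8, 0.6, 0.5, 0.3, 0.2])
print(f"SVF = {sky_view_factor(vis):.2f}")       # 1.0 would be fully open
```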
Procedia PDF Downloads 556
400 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track
Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink
Abstract:
The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced, speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must not only rely on the right choice of an adequate calculation model for both bridge and train but first and foremost on the use of dynamic model parameters which reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, whereby the dynamic calculations overestimate the actual responses and therefore lead to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause is found in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced concerning the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and lack of knowledge. Using a large-scale test facility, the analysis of the dynamic behaviour of ballasted track has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the ballasted track's dynamic stiffness and damping properties - independent of the bearing structure. Several mechanical models for the ballasted track, consisting of one or more continuous spring-damper elements, were developed based on the knowledge gained. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of the railway bridge with different levels of detail and sufficiently precise characteristic values are available to bridge engineers. Besides that, this contribution also presents another practical application of such a bridge model: based on the bridge model, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a first-time method that makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the apparent deviation between normatively prescribed and in-situ measured damping factors. On the other hand, it also shows that the new approach, which makes it possible to calculate the damping factor, provides results that are close to reality and thus offers potential for minimising the discrepancy between measurement and calculation.
Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges
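Since the contribution centres on Lehr's damping factor, the short sketch below recalls how that factor is identified from a measured free-decay record via the logarithmic decrement - the usual in-situ benchmark against which a calculated damping value would be compared. The decay record here is synthetic; the authors' determination equations themselves are not reproduced.

```python
import numpy as np

def lehr_damping(peaks: np.ndarray) -> float:
    """Lehr's damping factor from successive free-decay peak amplitudes:
    delta = ln(a_i / a_{i+1}),  zeta = delta / sqrt(4 pi^2 + delta^2)."""
    delta = np.mean(np.log(peaks[:-1] / peaks[1:]))
    return delta / np.sqrt(4 * np.pi**2 + delta**2)

# Synthetic free decay of a bridge deck oscillation, zeta = 1.5 % assumed
zeta_true = 0.015
n = np.arange(12)                        # twelve successive peaks
peaks = np.exp(-2 * np.pi * zeta_true * n / np.sqrt(1 - zeta_true**2))

print(f"identified Lehr damping factor = {lehr_damping(peaks):.4f}")
```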
Procedia PDF Downloads 164
399 Enhancement Effect of Superparamagnetic Iron Oxide Nanoparticle-Based MRI Contrast Agent at Different Concentrations and Magnetic Field Strengths
Authors: Bimali Sanjeevani Weerakoon, Toshiaki Osuga, Takehisa Konishi
Abstract:
Magnetic resonance imaging contrast agents (MRI-CM) are significant in clinical and biological imaging, as they have the ability to alter the normal tissue contrast, thereby affecting the signal intensity to enhance the visibility and detectability of images. Superparamagnetic iron oxide (SPIO) nanoparticles, coated with dextran or carboxydextran, are currently available for clinical MR imaging of the liver. Most SPIO contrast agents are T2-shortening agents, and Resovist (Ferucarbotran) is a clinically tested, organ-specific SPIO agent with a low-molecular carboxydextran coating. The enhancement effect of Resovist depends on its relaxivity, which in turn depends on factors like magnetic field strength, concentration, nanoparticle properties, pH, and temperature. Therefore, this study was conducted to investigate the impact of field strength and of different contrast concentrations on the enhancement effects of Resovist. The study explored, by mathematical simulation, the MRI signal intensity of Resovist in the physiological range of plasma from a T2-weighted spin-echo sequence at three magnetic field strengths, 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4, r2=95), and 3 T (r1=3.3, r2=160), over a range of contrast concentrations. The relaxivities r1 and r2 (L mmol⁻¹ s⁻¹) were obtained from a previous study, and the selected concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were simulated using a TR/TE of 2000 ms/100 ms. According to the reference literature, with increasing magnetic field strength the r1 relaxivity tends to decrease, while r2 does not show any systematic relationship with the selected field strengths. In parallel, the present results revealed that the signal intensity of Resovist at lower concentrations tends to be higher than at higher concentrations. The highest signal intensity was observed at the low field strength of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T, and 3 T were found at concentration levels of 0.05, 0.06, and 0.05 mmol/L, respectively. Furthermore, at concentrations higher than these, the signal intensity decreased exponentially. An inverse relationship can be found between the field strength and the T2 relaxation time: as the field strength increased, the T2 relaxation time decreased accordingly. However, the resulting T2 relaxation times were not significantly different between 0.47 T and 1.5 T in this study. Moreover, a linear correlation of the transverse relaxation rates (1/T2, s⁻¹) with the concentration of Resovist can be observed. According to these results, it can be concluded that the concentration of SPIO nanoparticle contrast agents and the field strength of MRI are two important parameters that can affect the signal intensity of a T2-weighted SE sequence. Therefore, both parameters should be considered prudently when performing MR imaging.
Keywords: concentration, Resovist, field strength, relaxivity, signal intensity
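A minimal sketch of the kind of signal simulation described above is given below for the spin-echo sequence, using the 1.5 T relaxivities quoted in the abstract; the baseline plasma relaxation times are assumed placeholder values, so the resulting curve is only qualitative.

```python
import numpy as np

# Resovist relaxivities at 1.5 T from the abstract (L mmol^-1 s^-1)
r1, r2 = 7.4, 95.0
T1_0, T2_0 = 1.4, 0.25        # assumed baseline plasma T1/T2 (s)
TR, TE = 2.0, 0.1             # TR/TE = 2000 ms / 100 ms

def se_signal(conc_mmol_l: np.ndarray) -> np.ndarray:
    """T2-weighted spin-echo signal vs contrast concentration:
    S = (1 - exp(-TR/T1)) * exp(-TE/T2), with relaxation rates
    1/T1 = 1/T1_0 + r1*C and 1/T2 = 1/T2_0 + r2*C."""
    R1 = 1.0 / T1_0 + r1 * conc_mmol_l
    R2 = 1.0 / T2_0 + r2 * conc_mmol_l
    return (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)

c = np.array([0.0, 0.05, 0.06, 0.1, 0.5, 1.0])
for ci, s in zip(c, se_signal(c)):
    print(f"C = {ci:4.2f} mmol/L -> S = {s:.4f}")
# The exp(-TE * r2 * C) term dominates as concentration grows, so the
# signal decays roughly exponentially above the low-concentration maximum,
# consistent with the reported behaviour.
```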
Procedia PDF Downloads 352
398 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
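Complementing the Random Forest sketch given for the companion abstract above, a minimal neural-network counterpart is shown below; input scaling matters for DNNs, hence the pipeline. The column names and data file are again hypothetical placeholders.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical activity-level dataset; all column names are assumptions.
df = pd.read_csv("activities.csv")
X = df[["planned_cost", "scope_change_count", "material_delay_days",
        "crew_size", "activity_duration_days"]]
y = df["cost_overrun_pct"]

# Two hidden layers; inputs standardized so the gradients behave well.
dnn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)

scores = cross_val_score(dnn, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"cross-validated MAE: {-scores.mean():.2f}")
```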
Procedia PDF Downloads 39
397 Walking Cadence to Attain a Minimum of Moderate Aerobic Intensity in People at Risk of Cardiovascular Diseases
Authors: Fagner O. Serrano, Danielle R. Bouchard, Todd A. Duhame
Abstract:
Walking cadence (steps/min) is an effective way to prescribe exercise so that an individual can reach a moderate intensity, which is recommended to optimize health benefits. To our knowledge, there is no study on the walking cadence required to reach a moderate intensity for people who present chronic conditions or risk factors for chronic conditions such as Cardiovascular Diseases (CVD). The objectives of this study were: 1) to identify the walking cadence needed for people at risk of CVD to reach a moderate intensity, and 2) to develop and test an equation using clinical variables to help professionals working with individuals at risk of CVD estimate the walking cadence needed to reach a moderate intensity. Ninety-one people presenting a minimum of two risk factors for CVD completed a medically supervised graded exercise test to assess maximal oxygen consumption at the first visit. At the last visit, walking cadence was recorded using a foot pod Garmin FR-60 and a Polar heart rate monitor, aiming to get participants to reach 40% of their maximal oxygen consumption, measured with a portable metabolic cart on an indoor flat surface. The equation to predict the walking cadence needed to reach a moderate intensity in this sample was developed as follows: the sample was randomly split in half, the equation was developed with one half of the participants, and it was validated using the other half. Body mass index, height, stride length, leg height, body weight, fitness level (VO2max), and self-selected cadence (over 200 meters) were measured objectively. The mean walking cadence to reach a moderate intensity for people aged 64.3 ± 10.3 years at risk of CVD was 115.8 ± 10.3 steps per minute. Body mass index, height, body weight, fitness level, and self-selected cadence were associated with walking cadence at moderate intensity in bivariate analyses (r ranging from 0.22 to 0.52; all P values ≤ 0.05). Using linear regression analysis including all clinical variables associated in the bivariate analyses, body weight was the only significant predictor of the walking cadence needed to reach a moderate intensity (ß=0.24; P=.018), explaining 13% of the variance in walking cadence. The regression model was Y = 134.4 - 0.24 × body weight (kg). Our findings suggest that people presenting two or more risk factors for CVD reach a moderate intensity while walking at a cadence above the one officially recommended for healthy adults (116 steps per minute vs. 100 steps per minute). Keywords: cardiovascular disease, moderate intensity, older adults, walking cadence
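The reported model is simple enough to apply directly; a worked example in Python, where the 80 kg input is ours, for illustration:

```python
def moderate_intensity_cadence(body_weight_kg: float) -> float:
    """Regression model reported in the abstract:
    cadence (steps/min) = 134.4 - 0.24 * body weight (kg)."""
    return 134.4 - 0.24 * body_weight_kg

# An 80 kg individual: 134.4 - 0.24 * 80 = 115.2 steps/min,
# close to the sample mean of 115.8 steps/min.
print(moderate_intensity_cadence(80.0))
```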
Procedia PDF Downloads 443
396 Diet and Exercise Intervention and Bio-Atherogenic Markers for Obesity Classes of Black South Africans with Type 2 Diabetes Mellitus Using Discriminant Analysis
Authors: Oladele V. Adeniyi, B. Longo-Mbenza, Daniel T. Goon
Abstract:
Background: Lipid levels are often low or within normal ranges in Black Africans, and their role in atherogenesis is controversial. The effect of the severity of obesity on some traditional and novel cardiovascular disease risk factors, before and after a diet and exercise maintenance programme, is unclear among obese Black South Africans with type 2 diabetes mellitus (T2DM). Therefore, this study aimed to identify the risk factors that discriminate obesity classes among patients with T2DM before and after a diet and exercise programme. Methods: This interventional cohort of Black South Africans with T2DM went through a very-low-calorie diet and exercise programme in Mthatha between August and November 2013. Gender, age, and the levels of body mass index (BMI), blood pressure, monthly income, daily frequency of meals, blood random plasma glucose (RPG), serum creatinine, total cholesterol (TC), triglycerides (TG), LDL-C, HDL-C, non-HDL, and the ratios TC/HDL, TG/HDL, and LDL/HDL were recorded. Univariate analysis (ANOVA) and multivariate discriminant analysis were performed to separate the obesity classes: normal weight (BMI = 18.5–24.9 kg/m²), overweight (BMI = 25–29.9 kg/m²), obesity class 1 (BMI = 30–34.9 kg/m²), obesity class 2 (BMI = 35–39.9 kg/m²), and obesity class 3 (BMI ≥ 40 kg/m²). Results: At baseline (1st month, September), all 327 patients were overweight or obese: 19.6% overweight, 42.8% obese class 1, 22.3% obese class 2, and 15.3% obese class 3. In the discriminant analysis, only systolic blood pressure (SBP, positive association) and the LDL/HDL ratio (negative association) significantly separated increasing obesity classes. At the post-intervention evaluation (3rd month, November), of all 327 patients, 19.9%, 19.3%, 37.6%, 15%, and 8.3% had normal weight, overweight, obesity class 1, obesity class 2, and obesity class 3, respectively. There was a significant negative association between serum creatinine and increasing BMI. In the discriminant analysis, only age (positive association), SBP (U-shaped relationship), monthly income (inverted U-shaped association), daily frequency of meals (positive association), and the LDL/HDL ratio (positive association) significantly classified increasing obesity classes. Conclusion: There is an epidemic of diabesity (obesity + T2DM) among these Black South Africans, with some weight loss achieved. Further studies are needed to understand the positive or negative linear correlations and the paradoxical curvilinear correlations between these markers and increasing BMI among Black South African T2DM patients. Keywords: atherogenic dyslipidaemia, dietary interventions, obesity, South Africans
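A sketch of the discriminant-analysis step in Python; the file and column names are hypothetical, and the BMI cut-offs follow the classes defined above:

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv("t2dm_cohort.csv")  # hypothetical dataset
features = ["age", "sbp", "monthly_income", "meals_per_day",
            "ldl_hdl_ratio", "serum_creatinine"]

# Obesity classes from the BMI cut-offs used in the study
bins = [18.5, 25, 30, 35, 40, float("inf")]
labels = ["normal", "overweight", "obese_1", "obese_2", "obese_3"]
df["obesity_class"] = pd.cut(df["bmi"], bins=bins, labels=labels)
df = df.dropna(subset=["obesity_class"])

lda = LinearDiscriminantAnalysis()
lda.fit(df[features], df["obesity_class"])

# Coefficient signs indicate how each marker separates the classes
print(pd.DataFrame(lda.coef_, columns=features, index=lda.classes_))
```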
Procedia PDF Downloads 367
395 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures
Authors: Rui Teixeira, Alan O’Connor, Maria Nogal
Abstract:
The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions, and hence the characterization of low-occurrence events, can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows a more refined fit of the statistical distribution by truncating the data at a predefined minimum threshold u. The Generalised Pareto distribution is widely used to approximate the tail of the empirical distribution mathematically, although, for exceedances of significant wave data (H_s), the two-parameter Weibull and the Exponential distribution, the latter a special case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a given threshold u; references that treat the Generalised Pareto distribution as a secondary choice for significant wave data can be found in the literature. In this framework, the current study tackles the discussion of which statistical models to apply when characterizing exceedances of wave data. The Generalised Pareto, the two-parameter Weibull, and the Exponential distribution are compared for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that applying statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other aspects of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events. Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data
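A sketch of the POT comparison in Python; the buoy file, the threshold value, and the use of log-likelihood as the comparison metric are our assumptions for illustration:

```python
import numpy as np
from scipy import stats

hs = np.loadtxt("buoy_hs.txt")        # significant wave heights (m)

u = 4.0                                # predefined threshold (m), illustrative
exc = hs[hs > u] - u                   # exceedances shifted over u

# Candidate tail models: Generalised Pareto, 2-parameter Weibull,
# and Exponential (a Generalised Pareto with zero shape parameter)
fits = {
    "Generalised Pareto": (stats.genpareto, stats.genpareto.fit(exc, floc=0)),
    "Weibull (2-param)": (stats.weibull_min, stats.weibull_min.fit(exc, floc=0)),
    "Exponential": (stats.expon, stats.expon.fit(exc, floc=0)),
}
for name, (dist, params) in fits.items():
    ll = np.sum(dist.logpdf(exc, *params))
    print(f"{name}: log-likelihood = {ll:.1f}")
```

Repeating the fit over a grid of thresholds u shows how the preferred model changes with u, which is the sensitivity the abstract highlights.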
Procedia PDF Downloads 272
394 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach
Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik
Abstract:
Background: Human balance control is often studied using the statokinesigram. In this study, the approach to human postural reaction analysis is based on combining the stabilometry output signal with retroreflective marker data signal processing, analysis, and interpretation. The study also presents another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: Participants maintained quiet bipedal standing for 10 s on a stabilometry platform. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons over a 20 s interval. The vibration stimuli caused the human postural system to settle into a new pseudo-steady state. Vibration frequencies were 20, 60, and 80 Hz. The participants' body segments (head, shoulders, hips, knees, ankles, and little fingers) were marked with 12 retroreflective markers. Marker positions were scanned with the six-camera system BTS SMART DX. Registration of the postural reaction lasted 60 s, with a sampling frequency of 100 Hz. The Method of Developed Statokinesigram Trajectory was used to process the measured data. Regression analysis of the developed statokinesigram trajectory (DST) data and the retroreflective marker developed trajectory (DMT) data was used to determine which marker trajectories correlate most with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients for marker trajectories were identified for all body segments. Head marker trajectories reached the maximal value and ankle marker trajectories the minimal value of the scaling coefficient. Hip, knee, and ankle markers were approximately symmetrical in terms of the scaling coefficient. Notable differences in the scaling coefficient were detected in the head and shoulder marker trajectories, which were not symmetrical. The model of postural system behavior was identified by MDST. Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall response of the human postural system to vibration stimuli, then the marker data represent the particular postural responses, and it can be assumed that the cumulative sum of the particular marker postural responses equals the statokinesigram. Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data
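The scaling-coefficient estimation reduces to a simple linear regression between the two developed trajectories; a sketch, where the file names and the choice of the head marker are illustrative:

```python
import numpy as np
from scipy.stats import linregress

# Estimating the scaling coefficient (lambda) between the developed
# statokinesigram trajectory (DST) and one marker's developed trajectory
# (DMT); 100 Hz sampling over 60 s gives 6000 samples per signal.
dst = np.loadtxt("dst.txt")        # developed CoP trajectory
dmt = np.loadtxt("dmt_head.txt")   # developed head-marker trajectory

fit = linregress(dst, dmt)
print(f"lambda = {fit.slope:.3f}, r = {fit.rvalue:.3f}")
```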
Procedia PDF Downloads 350
393 Investigations into the in situ Enterococcus faecalis Biofilm Removal Efficacies of Passive and Active Sodium Hypochlorite Irrigant Delivered into Lateral Canal of a Simulated Root Canal Model
Authors: Saifalarab A. Mohmmed, Morgana E. Vianna, Jonathan C. Knowles
Abstract:
The issue of apical periodontitis has received considerable critical attention. Bacteria integrate into communities, attach to surfaces, and consequently form biofilms. The biofilm structure provides bacteria with a range of protective mechanisms against antimicrobial agents and enhances pathogenicity (e.g., apical periodontitis). Sodium hypochlorite (NaOCl) has become the irrigant of choice for eliminating bacteria from the root canal system on account of its antimicrobial properties. The aim of the study was to investigate the effect of different agitation techniques on the efficacy of 2.5% NaOCl in eliminating biofilm from the surface of the lateral canal, using residual biofilm and the removal rate of biofilm as outcome measures. The effect of canal complexity (lateral canal) on the efficacy of the irrigation procedure was also assessed. Forty root canal models (n = 10 per group) were manufactured using 3D printing and resin materials. Each model consisted of two halves of an 18 mm long root canal with apical size 30 and taper 0.06, and a lateral canal of 3 mm length and 0.3 mm diameter located 3 mm from the apical terminus. E. faecalis biofilms were grown on the apical 3 mm and the lateral canal of the models for 10 days in Brain Heart Infusion broth. Biofilms were stained with crystal violet for visualisation. The model halves were reassembled, attached to an apparatus, and tested under a fluorescence microscope. A syringe-and-needle irrigation protocol was performed using 9 mL of 2.5% NaOCl irrigant for 60 seconds. The irrigant was either left stagnant in the canal or activated for 30 seconds using manual (gutta-percha), sonic, or ultrasonic methods. Images were then captured every second with an external camera. The percentages of residual biofilm were measured using image analysis software, and the data were analysed using generalised linear mixed models. The greatest removal was associated with the ultrasonic group (66.76%), followed by the sonic (45.49%), manual (43.97%), and passive irrigation (control) (38.67%) groups, respectively. No marked difference in the efficiency of NaOCl biofilm removal was found between the simple and complex anatomy models (p = 0.098). The removal efficacy of NaOCl on the biofilm was limited to the 1 mm level of the lateral canal. Agitation of NaOCl results in better penetration of the irrigant into the lateral canals, and ultrasonic agitation of NaOCl improved the removal of the bacterial biofilm. Keywords: 3D printing, biofilm, root canal irrigation, sodium hypochlorite
Procedia PDF Downloads 229
392 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada
Authors: Bilel Chalghaf, Mathieu Varin
Abstract:
Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (>17 m) at the individual-tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree-type level, the best model was the k-NN using six variables (OA: 100%, Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OAs of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large. Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR
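A sketch of the semi-hierarchical hybrid approach using the best models reported above (k-NN for tree type, one SVM per type for species); the column names and hyperparameters are illustrative assumptions:

```python
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hypothetical table of crown objects with the 16 selected variables
# (WorldView-3 bands and LiDAR metrics) plus labels
df = pd.read_csv("crown_objects.csv")
feature_cols = [c for c in df.columns if c not in ("species", "tree_type")]

# Level 1: broadleaf vs conifer
type_clf = KNeighborsClassifier(n_neighbors=5).fit(
    df[feature_cols], df["tree_type"])

# Level 2: one species classifier per tree type
species_clf = {}
for t in ("broadleaf", "conifer"):
    sub = df[df["tree_type"] == t]
    species_clf[t] = SVC(kernel="rbf", C=10, gamma="scale").fit(
        sub[feature_cols], sub["species"])

def predict_species(x_row):
    """x_row: one-row DataFrame with the same feature columns."""
    t = type_clf.predict(x_row)[0]
    return species_clf[t].predict(x_row)[0]
```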
Procedia PDF Downloads 134
391 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas
Authors: A. Odoom, A. Salama, H. Ibrahim
Abstract:
Hydrogen is a clean source of energy for power production and transportation. The main source of hydrogen in this research is biodiesel. Glycerol, also called glycerine, is a by-product of biodiesel production by transesterification of vegetable oils and methanol; it is a more reliable and environmentally friendly source for hydrogen production than fossil fuels. A typical composition of crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol, and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, using crude glycerol can significantly enhance the sustainability of biodiesel production. Reforming techniques are the main approaches for hydrogen production, chiefly Steam Reforming (SR), Autothermal Reforming (ATR), and Partial Oxidation Reforming (POR). SR produces high hydrogen conversion and yield but is highly endothermic, whereas POR is exothermic; on the downside, POR yields less hydrogen and produces a large number of side reactions. ATR, a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero; it has a relatively high hydrogen yield and selectivity, and it limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model. A numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite-volume numerical study for an in-house lab-scale ATR experiment. Previous numerical studies of this process have used either Comsol or a nodal finite-difference analysis. Since Comsol is a commercial package that is not readily available everywhere, and since the lab-scale experiment can be considered well mixed in the radial direction so that one spatial dimension suffices to capture the essential features of ATR, in this work we develop our own numerical approach using MATLAB. A continuum fixed-bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of a nodal finite-difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. The control-volume method, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. In this work, the species mass balance, Darcy's equation, and the energy equations are solved using an operator-splitting technique: diffusion-like terms are discretized implicitly, while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results. Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model
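A minimal sketch of the splitting idea for a single 1D advection-diffusion species balance (explicit upwind advection, implicit diffusion on a control-volume grid); written here in Python rather than MATLAB, with illustrative parameters rather than the study's reactor data:

```python
import numpy as np

nx, length = 100, 0.1            # cells, reactor length (m)
dx = length / nx
u, D = 0.01, 1e-5                # velocity (m/s), dispersion (m2/s)
dt = 0.4 * dx / u                # CFL-limited step for explicit advection
lam = dt * D / dx**2

c = np.zeros(nx)                 # normalized species concentration
A = np.zeros((nx, nx))           # implicit diffusion operator (I - dt*D*Lap)
for i in range(1, nx - 1):
    A[i, i - 1] = A[i, i + 1] = -lam
    A[i, i] = 1 + 2 * lam
A[0, 0] = 1.0                          # Dirichlet inlet row
A[-1, -2], A[-1, -1] = -lam, 1 + lam   # zero-flux outlet

for _ in range(2000):
    # explicit first-order upwind advection (u > 0, positivity-preserving)
    c[1:] -= u * dt / dx * (c[1:] - c[:-1])
    c[0] = 1.0                   # inlet condition
    # implicit diffusion step (locally conservative control volumes)
    c = np.linalg.solve(A, c)

print(c[::10])                   # coarse concentration profile
```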
Procedia PDF Downloads 140
390 Pooled Analysis of Three School-Based Obesity Interventions in a Metropolitan Area of Brazil
Authors: Rosely Sichieri, Bruna K. Hassan, Michele Sgambato, Barbara S. N. Souza, Rosangela A. Pereira, Edna M. Yokoo, Diana B. Cunha
Abstract:
Obesity is increasing at a fast rate in low- and middle-income countries, where few school-based obesity interventions have been conducted. Results of obesity prevention studies are still inconclusive, mainly due to underestimation of sample size in cluster-randomized trials and overestimation of changes in body mass index (BMI). The pooled analysis in the present study overcomes these design problems by analyzing 4,448 students (mean age 11.7 years) from three randomized behavioral school-based interventions conducted in public schools of the metropolitan area of Rio de Janeiro, Brazil. The three studies focused on encouraging students to change their drinking and eating habits over one school year, with monthly 1-h sessions in the classroom. Folders explaining the intervention program and suggesting family participation, such as reducing the purchase of sodas, were sent home. Classroom activities were delivered by research assistants in the first two interventions and by the regular teachers in the third one, except for a culinary class aimed at developing cooking skills to increase healthy eating choices. The first intervention was conducted in 2005 with 1,140 fourth graders from 22 public schools; the second, with 644 fifth graders from 20 public schools in 2010; and the last one, with 2,743 fifth and sixth graders from 18 public schools in 2016. Despite positive changes in dietary behaviors associated with obesity, the result was a non-significant change in BMI after one school year. Pooled intention-to-treat analysis using linear mixed models was used for the overall analysis and the subgroup analyses by BMI status, sex, and race. The estimated mean BMI changed from 18.93 to 19.22 in the control group and from 18.89 to 19.19 in the intervention group, with a p-value for change over time of 0.94. Control and intervention groups were balanced at baseline. Subgroup analyses were statistically and clinically non-significant, except for the non-overweight/obese group, with a 0.05 reduction in BMI comparing the intervention with the control. In conclusion, this large pooled analysis showed a very small effect on BMI, and only in normal-weight students. The results are in line with many school-based initiatives that have been promising in modifying behaviors associated with obesity but have had no impact on excessive weight gain. Changes in BMI may require large changes in energy balance that are hard to achieve through primary prevention at the school level. Keywords: adolescents, obesity prevention, randomized controlled trials, school-based study
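A sketch of the pooled intention-to-treat mixed-model analysis; the variable names are hypothetical, with schools (the randomization clusters) entering as the grouping factor:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled student-level table with BMI measured at
# baseline and follow-up (time), trial arm, and school identifier
df = pd.read_csv("pooled_students.csv")

md = smf.mixedlm("bmi ~ time * arm", data=df, groups=df["school"])
fit = md.fit()
print(fit.summary())  # the time:arm term estimates the intervention effect
```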
Procedia PDF Downloads 160
389 Bovine Sperm Capacitation Promoters: The Comparison between Serum and Non-Serum Albumin Originated from Fish
Authors: Haris Setiawan, Phongsakorn Chuammitri, Korawan Sringarm, Montira Intanon, Anucha Sathanawongs
Abstract:
Capacitation is a prerequisite for sperm to achieve the competence to penetrate the oocyte; it occurs naturally in vivo throughout the female reproductive tract, involving secretory fluid and epithelial cells. One of the crucial compounds in the oviductal fluid that promotes capacitation is albumin, which is secreted at high concentrations. However, the difficulties of collecting oviductal fluid, and the inconsistency of its composition throughout the estrous cycle, have led to its replacement by serum-based albumins such as bovine serum albumin (BSA). BSA is the primary choice, with evidence of a stabilizing effect that maintains the acrosome intact during the capacitation process, modulates hyperactivation, and elevates the number of sperm bound to the zona pellucida. Despite these benefits, the use of blood-derived products in the culture system is not sustainable and increases the risk of disease transmission, such as Creutzfeldt-Jakob disease (CJD) and bovine spongiform encephalopathy (BSE). Moreover, this substance has been asserted to be an aeroallergen that produces allergies and respiratory problems. In an effort to identify an alternative, sustainable, non-toxic albumin source, the present work evaluated sperm responses to a capacitation medium containing albumin derived from the flesh of the snakehead fish (Channa striata). Before examining the ability of this non-serum albumin to promote capacitation in bovine sperm, albumin was detected using bromocresol purple (BCP) at a level of 25% in the snakehead fish extract. Following SDS-PAGE and densitometric analysis, two major bands at 40 kDa and 47 kDa, comprising 57% and 16% of the total protein loaded, were detected as potential albumin-related bands. Significant differences were observed in all kinematic parameters upon incubation in the capacitation medium. Moreover, consistently higher values were shown for the kinematic parameters related to hyperactivation, such as amplitude of lateral head displacement (ALH), curvilinear velocity (VCL), and linearity (LIN), when sperm were treated with 3 mg/mL of snakehead fish albumin compared with the other treatments. Likewise, a substantially higher proportion of intact acrosomes was present in sperm after incubation with various concentrations of snakehead fish albumin for 90 minutes, indicating that this level of snakehead fish albumin can be used to replace bovine serum albumin. However, further study is required to purify the albumin from the snakehead fish extract for more reliable findings. Keywords: capacitation promoter, snakehead fish, non-serum albumin, bovine sperm
Procedia PDF Downloads 112
388 Evaluation of Nanoparticle Application to Control Formation Damage in Porous Media: Laboratory and Mathematical Modelling
Authors: Gabriel Malgaresi, Sara Borazjani, Hadi Madani, Pavel Bedrikovetsky
Abstract:
Suspension-colloidal flow in porous media occurs in numerous engineering fields, such as industrial water treatment, the disposal of industrial wastes into aquifers with the propagation of contaminants, and low-salinity water injection into petroleum reservoirs. The main effects are particle mobilization and capture by the porous rock, which can cause pore plugging and permeability reduction, known as formation damage. Various factors, such as fluid salinity, pH, temperature, and rock properties, affect particle detachment. Formation damage is particularly unfavorable near injection and production wells. One way to control formation damage is pre-treatment of the rock with nanoparticles: adsorption of nanoparticles on fines and rock surfaces alters the zeta-potential of the surfaces and enhances the attachment force between the rock and fine particles. The main objective of this study is to develop a two-stage mathematical model for (1) flow and adsorption of nanoparticles on the rock during the pre-treatment stage and (2) fines migration and permeability reduction during water production after the pre-treatment. The model accounts for adsorption and desorption of nanoparticles, fines migration, and the kinetics of particle capture. The system of equations allows for an exact solution; the non-self-similar wave-interaction problem was solved by the Method of Characteristics. The analytical model is new in two ways: first, it accounts for the specific boundary and initial conditions describing the injection of nanoparticles and production from the pre-treated porous media; second, it contains the effect of nanoparticle sorption hysteresis. The derived analytical model contains explicit formulae for the concentration fronts along with the pressure drop. The solution is used to determine the optimal injection concentration of nanoparticles to avoid formation damage. The mathematical model was validated via an innovative laboratory program. The laboratory study included two sets of core-flood experiments: (1) production of water without nanoparticle pre-treatment and (2) pre-treatment of a similar core with nanoparticles followed by water production. Positively charged alumina nanoparticles with an average particle size of 100 nm were used for the rock pre-treatment. The core was saturated with the nanoparticles and then flushed with low-salinity water; the pressure drop across the core and the outlet fines concentration were monitored and used for model validation. The results of the analytical modeling showed a significant reduction in the outlet fines concentration and in formation damage, in close agreement with the core-flood data. The exact solution accurately describes fine-particle breakthroughs and quantifies the positive effect of nanoparticles on formation damage. We show that the adsorbed nanoparticle concentration strongly affects the permeability of the porous media: for the laboratory case presented, the reduction of permeability after 1 PVI of production in the pre-treated scenario is 50% lower than in the reference case. The main outcome of this study is a validated mathematical model to evaluate the effect of nanoparticles on formation damage. Keywords: nano-particles, formation damage, permeability, fines migration
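For illustration only (the abstract does not give the model's explicit formulae), a common fines-migration damage law relates permeability to the retained-particle concentration σ through a formation-damage coefficient β, k(σ) = k0/(1 + βσ); a sketch with assumed parameters:

```python
import numpy as np

k0 = 100.0    # initial permeability (mD), assumed
beta = 50.0   # formation-damage coefficient, assumed

def permeability(sigma):
    """Permeability decline with retained-particle concentration sigma
    (vol/vol): k(sigma) = k0 / (1 + beta * sigma)."""
    return k0 / (1.0 + beta * sigma)

sigma = np.linspace(0.0, 0.02, 5)   # retained concentration grid
print(permeability(sigma))          # monotonic permeability decline
```

Under this kind of law, reducing particle straining via nanoparticle pre-treatment lowers σ and thus directly lessens the permeability decline.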
Procedia PDF Downloads 621