Search results for: trial wave function.
155 General Formula for Water Surface Profile over Side Weir in the Combined, Trapezoidal and Exponential, Channels
Authors: Abdulrahman Abdulrahman
Abstract:
A side weir is a hydraulic structure set into the side of a channel. This structure is used for water level control in channels, to divert flow from a main channel into a side channel when the water level in the main channel exceeds a specific limit, and as a storm overflow from urban sewerage systems. Computation of the water surface profile over a side weir is essential to determine its flow rate. Analytical solutions for the water surface profile along a rectangular side weir are available only for the special cases of rectangular and trapezoidal channels under the assumption of constant specific energy. In this paper, a rectangular side weir located in a combined (trapezoidal with exponential) channel was considered. By expanding binomial series of integer and fractional powers and using the reduction formula for cosine function integrals, a general analytical formula was obtained for the water surface profile along a side weir in a combined (trapezoidal with exponential) channel. Since triangular, rectangular, trapezoidal and parabolic cross-sections are special cases of the combined cross-section, the derived formula is applicable to triangular, rectangular and trapezoidal cross-sections as an analytical solution, and to the parabolic cross-section as a semi-analytical solution with a maximum relative error smaller than 0.76%. The proposed solution should be a useful engineering tool for the evaluation and design of side weirs in open channels.
Keywords: Analytical solution, combined channel, exponential channel, side weirs, trapezoidal channel, water surface profile.
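For context, analytical side-weir solutions of this kind typically integrate the spatially varied flow equation under the constant specific energy assumption together with a De Marchi-type lateral outflow law. The equations below are standard background stated as an assumption about the setup; they are not the paper's derived closed-form formula.

```latex
% Standard governing equations for flow over a side weir (background only):
% constant specific energy, spatially varied flow with decreasing discharge,
% and a De Marchi-type lateral outflow law.
\begin{align}
  E &= y + \frac{Q^2}{2 g A^2} = \text{const.}, \\
  \frac{dy}{dx} &= \frac{\dfrac{Q}{g A^2}\left(-\dfrac{dQ}{dx}\right)}
                        {1 - \dfrac{Q^2 T}{g A^3}}, \\
  -\frac{dQ}{dx} &= \frac{2}{3}\, C_M \sqrt{2g}\,(y - p)^{3/2}.
\end{align}
% y: flow depth, A: flow area, T: top width, p: weir crest height,
% C_M: De Marchi discharge coefficient.
```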
154 Study on the Addition of Solar Generating and Energy Storage Units to a Power Distribution System
Authors: T. Costa, D. Narvaez, K. Melo, M. Villalva
Abstract:
Installation of micro-generators based on renewable energy in power distribution systems has increased in recent years, with the main renewable sources being solar and wind. Due to the intermittent nature of renewable energy sources, such micro-generators produce time-varying energy which does not correspond, at certain times of the day, to the peak energy consumption of end users. For this reason, the use of energy storage units close to the grid contributes to the proper leveling of the buses' voltage levels according to Brazilian energy quality standards. In this work, the effect of adding a photovoltaic solar generator and an energy storage unit on the busbar voltages of an electric system is analyzed. The consumption profile is defined as the average hourly use of appliances in a common residence, and the generation profile is defined as a function of the solar irradiation available at a given location. The power summation method is validated against an analytical calculation and is used to calculate the magnitudes and angles of the voltages at the buses of an electrical system based on the IEEE standard, at each hour of the day and with defined load and generation profiles. The results show that bus 5 presents the worst voltage level at the power consumption peaks and stabilizes within the appropriate range with the inclusion of the energy storage during the nighttime period. The solar generator maintains an improved voltage level during the period in which it receives solar irradiation, with production peaking around 12 pm (without exceeding the appropriate maximum voltage levels).
Keywords: Energy storage, power distribution system, solar generator, voltage level.
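As a rough illustration of the power summation idea referenced above, here is a minimal sketch of a backward/forward sweep load flow on a hypothetical three-bus radial feeder. The impedances, loads, and bus count are invented for illustration and do not reproduce the paper's IEEE-based test system or its hourly profiles.

```python
import numpy as np

# Minimal backward/forward sweep ("power summation") load flow for a radial
# feeder fed from a single source. All data below are illustrative only.
Z = np.array([0.01 + 0.03j, 0.02 + 0.04j, 0.03 + 0.05j])        # branch impedances (pu)
S_load = np.array([0.02 + 0.01j, 0.03 + 0.015j, 0.05 + 0.02j])  # complex bus loads (pu)
V_source = 1.0 + 0j

V = np.full(3, V_source, dtype=complex)
for _ in range(20):                                  # fixed-point iteration
    I_load = np.conj(S_load / V)                     # constant-power load currents
    I_branch = np.cumsum(I_load[::-1])[::-1]         # backward sweep: accumulate downstream currents
    V_new = V_source - np.cumsum(Z * I_branch)       # forward sweep: subtract cumulative voltage drops
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print(np.abs(V))   # per-unit voltage magnitudes at buses 1..3
```

Repeating this calculation for each hour with the net load (consumption minus generation minus storage dispatch) gives the kind of hourly voltage profile the study evaluates.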
153 Time Series Forecasting Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the University of California, Irvine (UCI) repository, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131), for most of our single-step and multi-step predictions. The best look-back window size to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
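To make the look-back/horizon setup concrete, the sketch below builds supervised training pairs from an hourly series with a sliding window. The synthetic series and the specific window sizes are illustrative assumptions, not the paper's preprocessing code.

```python
import numpy as np

def make_windows(series: np.ndarray, look_back: int, horizon: int):
    """Build supervised (X, y) pairs: `look_back` past points as input,
    the value `horizon` steps ahead as the target."""
    X, y = [], []
    for t in range(len(series) - look_back - horizon + 1):
        X.append(series[t:t + look_back])
        y.append(series[t + look_back + horizon - 1])
    return np.array(X), np.array(y)

# Illustrative use: 24-hour look-back (one day), predict 1 hour into the future.
hourly = np.sin(np.linspace(0, 50, 5000))      # stand-in for an hourly pollutant series
X, y = make_windows(hourly, look_back=24, horizon=1)
print(X.shape, y.shape)                         # (4976, 24) (4976,)
```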
152 On the AC-Side Interface Filter in Three-Phase Shunt Active Power Filter Systems
Authors: Mihaela Popescu, Alexandru Bitoleanu, Mircea Dobriceanu
Abstract:
The proper selection of the AC-side passive filter interconnecting the voltage source converter to the power supply is essential to obtain satisfactory performance of an active power filter system. The use of the LCL-type filter has the advantage of eliminating the high-frequency switching harmonics in the current injected into the power supply. This paper is mainly focused on analyzing the influence of the interface filter parameters on the active filtering performance. Some design aspects are pointed out. Thus, the design of the AC interface filter starts from transfer functions by imposing the filter performance, which refers to significant attenuation of the switching harmonics in the current without affecting the harmonics to be compensated. A Matlab/Simulink model of the entire active filtering system, including a concrete nonlinear load, has been developed to examine the system performance. It is shown that a gamma LC filter can accomplish the attenuation requirement for the current provided by the converter. Moreover, the existence of an optimal value of the grid-side inductance which minimizes the total harmonic distortion factor of the power supply current is pointed out. Nevertheless, a small converter-side inductance and a damping resistance in series with the filter capacitance are absolutely needed in order to keep the ripple and oscillations of the current at the converter side within acceptable limits. The effect of changes in the LCL-filter parameters is evaluated. It is concluded that good active filtering performance can be achieved with small values of the capacitance and converter-side inductance.
Keywords: Active power filter, LCL filter, Matlab/Simulink modeling, Passive filters, Transfer function.
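As background for the transfer-function-based design mentioned above, the textbook per-phase transfer function of an undamped LCL filter (grid voltage set to zero) from converter voltage to grid-side current is written below. It is quoted as an assumption about the general setup; the paper's exact model, including the damping resistance, is not reproduced.

```latex
% Textbook transfer function of an undamped LCL filter from converter output
% voltage v_c to grid-side current i_g, and its resonance frequency.
\begin{align}
  G(s) = \frac{i_g(s)}{v_c(s)}
       &= \frac{1}{s\left(L_c L_g C_f\, s^2 + L_c + L_g\right)}, \\
  \omega_{res} &= \sqrt{\frac{L_c + L_g}{L_c\, L_g\, C_f}}.
\end{align}
% L_c: converter-side inductance, L_g: grid-side inductance, C_f: filter capacitance.
% Above the resonance the magnitude rolls off at -60 dB/decade, which is what
% attenuates the switching harmonics injected into the supply.
```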
151 Simulation of Dynamic Behavior of Seismic Isolators Using a Parallel Elasto-Plastic Model
Authors: Nicolò Vaiana, Giorgio Serino
Abstract:
In this paper, a one-dimensional (1D) Parallel Elasto-Plastic Model (PEPM), able to simulate the uniaxial dynamic behavior of seismic isolators having a continuously decreasing tangent stiffness with increasing displacement, is presented. The parallel modeling concept is applied to discretize the continuously decreasing tangent stiffness function, thus making it possible to simulate the dynamic behavior of seismic isolation bearings by putting linear elastic and nonlinear elastic-perfectly plastic elements in parallel. The mathematical model has been validated by comparing the experimental force-displacement hysteresis loops, obtained by testing a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted numerically. Good agreement between the simulated and experimental results shows that the proposed model can be an effective numerical tool to predict the force-displacement relationship of seismic isolators within relatively large displacements. Compared to the widely used Bouc-Wen model, the proposed one avoids the numerical solution of a first-order nonlinear ordinary differential equation at each time step of a nonlinear time history analysis, thus reducing the computational effort, and requires the evaluation of only three model parameters from experimental tests, namely the initial tangent stiffness, the asymptotic tangent stiffness, and a parameter defining the transition from the initial to the asymptotic tangent stiffness.
Keywords: Base isolation, earthquake engineering, parallel elasto-plastic model, seismic isolators, softening hysteresis loops.
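To illustrate the parallel modeling concept in general terms, the sketch below puts a linear spring in parallel with a few elastic-perfectly-plastic elements and integrates the force along a displacement history. The element count, stiffnesses, and yield forces are invented; this is an assumed generic form, not the authors' calibrated three-parameter formulation.

```python
import numpy as np

def pepm_force(u_history, k_lin, k_i, f_yi):
    """Force response of a linear spring (k_lin) in parallel with
    elastic-perfectly-plastic elements (stiffness k_i, yield force f_yi)."""
    plastic = np.zeros_like(k_i)                 # plastic slip of each EPP element
    forces = []
    for u in u_history:
        f_epp = k_i * (u - plastic)              # trial elastic force per element
        over = np.abs(f_epp) > f_yi
        # return-map: move the plastic slip so the force sits on the yield surface
        plastic[over] = u - np.sign(f_epp[over]) * f_yi[over] / k_i[over]
        f_epp = np.clip(f_epp, -f_yi, f_yi)
        forces.append(k_lin * u + f_epp.sum())   # parallel assembly: forces add up
    return np.array(forces)

# Illustrative loading/unloading branch (units arbitrary).
u = np.concatenate([np.linspace(0, 0.05, 50), np.linspace(0.05, -0.05, 100)])
f = pepm_force(u, k_lin=200.0, k_i=np.array([5000.0, 2000.0]), f_yi=np.array([10.0, 15.0]))
```

Because each element yields at a different force, the tangent stiffness of the assembly decreases in steps with displacement, which is the softening behavior the model discretizes.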
150 Limiting Fiber Extensibility as Parameter for Damage in Venous Wall
Authors: Lukas Horny, Rudolf Zitny, Hynek Chlup, Tomas Adamek, Michal Sara
Abstract:
An inflation–extension test on a human vena cava inferior was performed with the aim of fitting a material model. The vein was modeled as a thick-walled tube loaded by internal pressure and axial force. The material was assumed to be an incompressible hyperelastic fiber-reinforced continuum. Fibers are supposed to be arranged in two families of anti-symmetric helices. The considered anisotropy corresponds to local orthotropy. The strain energy density function used was based on the concept of limiting fiber extensibility. The pressurization comprised four pre-cycles under physiological venous loading (0 – 4 kPa) and four cycles under nonphysiological loading (0 – 21 kPa). Each overloading cycle was performed with a different value of axial weight. The overloading data were used in a regression analysis to fit the material model. The considered model did not fit the experimental data well; in particular, the predictions of axial force failed. It was hypothesized that, due to the nonphysiological values of loading pressure and the different values of axial weight, the material was not preconditioned enough and some damage occurred inside the wall. The limiting fiber extensibility parameter Jm was assumed to be related to the supposed damage. Each of the overloading cycles was fitted separately with a different value of Jm, while the other parameters were held the same. This approach turned out to be successful. A variable value of Jm can describe changes in the axial force – axial stretch response and satisfy the pressure – radius dependence simultaneously.
Keywords: Constitutive model, damage, fiber reinforced composite, limiting fiber extensibility, preconditioning, vena cava inferior.
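For readers unfamiliar with the limiting extensibility concept, the isotropic Gent form below is one common way it is written. It is given as illustrative background only; the paper uses an anisotropic fiber-reinforced formulation whose exact expression is not reproduced here.

```latex
% Gent-type strain energy with limiting extensibility parameter J_m
% (isotropic illustration, not the paper's fiber-reinforced form):
\begin{equation}
  W(I_1) = -\frac{\mu\, J_m}{2}\,\ln\!\left(1 - \frac{I_1 - 3}{J_m}\right),
  \qquad I_1 - 3 < J_m.
\end{equation}
% mu: shear modulus, I_1: first invariant of the right Cauchy-Green tensor.
% As I_1 - 3 approaches J_m the response stiffens without bound, which is how
% the parameter encodes a limit on extensibility; a damage-dependent J_m shifts
% this locking point from cycle to cycle.
```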
149 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of performing transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have been subsequently encoded and handled to generate chirps at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.
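As a toy illustration of approximating a 4 × 4 block with fewer coefficients, the sketch below fits a low-order 2-D polynomial by least squares. The particular six-term basis and the random test block are assumptions for illustration; they are not the basis sets evaluated in the paper.

```python
import numpy as np

# Approximate a 4x4 pixel block with 6 polynomial coefficients instead of 16.
block = np.random.randint(0, 256, (4, 4)).astype(float)
y_idx, x_idx = np.mgrid[0:4, 0:4]
x, yv = x_idx.ravel(), y_idx.ravel()
A = np.column_stack([np.ones_like(x), x, yv, x * yv, x**2, yv**2])   # design matrix
coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)           # least-squares fit
approx = (A @ coeffs).reshape(4, 4)
psnr = 10 * np.log10(255**2 / np.mean((block - approx) ** 2))        # reconstruction quality
print(coeffs.size, round(psnr, 2))
```

The fitted coefficients are what would then be encoded as chirps and multiplexed in the time-frequency domain.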
148 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
This paper presents the prediction performance of a feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with the extended Kalman filter. Feedforward neural networks and ESNs are powerful neural networks which can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large errors. In this study we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as the initialization of the network and the improvement of prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and of other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential learning and regulation of the feedforward neural networks. This training approach has vital features for training the network when signals have a chaotic or non-stationary sequential pattern. Examination of the results shows that the variance is minimized at each step of the computation and the tracking is smoothed, indicating satisfactory tracking characteristics under certain conditions. In addition, the simulation results confirm satisfactory performance of both neural networks with the modified parameterization in tracking nonlinear signals.
Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
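For reference on where the leaking rate enters, the standard leaky-integrator ESN state update is shown below. This is the common textbook form, stated as an assumption about the setup rather than the paper's exact equations.

```latex
% Standard leaky-integrator ESN state update (textbook form):
\begin{equation}
  \mathbf{x}(t) = (1 - \alpha)\,\mathbf{x}(t-1)
                + \alpha \tanh\!\big(\mathbf{W}_{in}\,\mathbf{u}(t) + \mathbf{W}\,\mathbf{x}(t-1)\big),
  \qquad 0 < \alpha \le 1.
\end{equation}
% alpha: leaking rate, W_in: input weights, W: fixed random reservoir weights,
% u(t): input, x(t): reservoir state. Smaller alpha slows the reservoir dynamics,
% which changes how well it can follow fast chaotic signals.
```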
147 Modeling of Surface Roughness for Flow over a Complex Vegetated Surface
Authors: Wichai Pattanapol, Sarah J. Wakes, Michael J. Hilton, Katharine J.M. Dickinson
Abstract:
Turbulence modeling of large-scale flow over a vegetated surface is complex. Such problems involve large computational domains, while the characteristics of the flow near the surface must also be resolved. In modeling large-scale flow, surface roughness, including vegetation, is generally taken into account by means of roughness parameters in the modified law of the wall. However, the turbulence structure within the canopy region cannot be captured with this method; an alternative is to apply source/sink terms that model the plant drag. Such models have been developed and tested intensively, but only with simple surface geometry. This paper aims to compare the use of roughness parameters and of additional source/sink terms in modeling the effect of plant drag on wind flow over a complex vegetated surface. The RNG k-ε turbulence model with the non-equilibrium wall function was tested in both cases. In addition, the k-ω turbulence model, which is claimed to be computationally stable, was also investigated with the source/sink terms. All numerical results were compared to the experimental results obtained at the study site, Mason Bay, Stewart Island, New Zealand. In the near-surface region, it is found that the results obtained using the source/sink terms are more accurate than those using roughness parameters. The k-ω turbulence model with the source/sink terms is the more appropriate choice, as it is more accurate and more computationally stable than the RNG k-ε turbulence model. Higher above the surface, there is no significant difference among the results obtained from all simulations.
Keywords: CFD, canopy flow, surface roughness, turbulence models.
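The momentum sink most commonly used to represent canopy drag in such simulations is reproduced below as background; the exact coefficients, and any companion source terms in the turbulence transport equations used in the paper, are not restated here.

```latex
% Commonly used canopy drag momentum sink added to the RANS momentum equations
% inside the vegetation region (background form, not necessarily the paper's exact terms):
\begin{equation}
  S_{u_i} = -\,\rho\, C_d\, a\, |\mathbf{U}|\, u_i.
\end{equation}
% rho: air density, C_d: canopy drag coefficient, a: leaf area density (m^2/m^3),
% |U|: local wind speed, u_i: velocity component. Matching source/sink terms are
% usually also added to the k and epsilon (or omega) equations.
```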
146 Novel Use of a Quality Assurance Tool for Integrating Technology to HSE
Authors: Ragi Poyyara, Vivek V., Ashish Khaparde
Abstract:
The product development process (PDP) in the Technology group plays a very important role in the launch of any product. While a manufacturing process encourages the use of certain measures to reduce health, safety and environmental (HSE) risks on the shop floor, the PDP concentrates on the use of Geometric Dimensioning and Tolerancing (GD&T) to develop a flawless design. Furthermore, the PDP distributes and coordinates activities between different departments such as marketing, purchasing, and manufacturing. However, it is seldom realized that the PDP makes a significant contribution to developing a product that reduces HSE risks by encouraging the Technology group to use effective GD&T. GD&T is a precise communication tool that uses a set of symbols, rules, and definitions to mathematically define parts to be manufactured. It is a quality assurance method widely used in the oil and gas sector. Traditionally it is used to ensure the interchangeability of a part without affecting its form, fit, and function. Parts that do not meet these requirements are rejected during quality audits. This paper discusses how the Technology group integrates this quality assurance tool into the PDP and how the tool plays a major role in helping the HSE department toward its goal of eliminating HSE incidents. The PDP involves a thorough risk assessment and establishes a method to address those risks during the design stage. An illustration shows how GD&T helped reduce safety risks by ergonomically improving assembly operations. A brief discussion explains how the tolerances provided on a part help prevent finger injury. This tool has enabled the Technology group to produce fixtures which are used daily in operations as well as in manufacturing. By applying GD&T to create good fits, HSE risks are mitigated for operating personnel. Both customers and service providers benefit from reduced safety risks.
Keywords: HSE, PDP, GD&T, risks.
145 Structural Characteristics of HPDSP Concrete on Beam Column Joints
Authors: Sushil Kumar Swar, Sanjay Kumar Sharma, Hari Krishan Sharma, Sushil Kumar
Abstract:
The serious damage to structures during earthquakes shows the need for and importance of designing reinforced concrete structures with high ductility. Reinforced concrete beam-column joints have an important function in all structures. Under seismic excitation, the beam-column joint region is subjected to horizontal and vertical shear forces whose magnitude is many times higher than those in the adjacent beam and column. The strength and ductility of structures depend mainly on proper detailing of the reinforcement in beam-column joints, and older structures were found to be ductility deficient. DSP materials are obtained by using high quantities of superplasticizers and high volumes of micro silica. In the case of High Performance Densified Small Particle Concrete (HPDSPC), since the concrete is dense even at the micro-structure level, the tensile strain would be much higher than that of conventional SFRC, SIFCON and SIMCON. This in turn will improve the cracking behaviour, ductility and energy absorption capacity of the composites, in addition to durability. The fine fibers used in our mix are 0.3 mm in diameter and 10 mm long and can easily be placed at a high percentage. These fibers easily transfer stresses and act as a composite concrete unit to take up extremely high loads with high compressive strength. HPDSPC placed in beam-column joints helps protect human life because failure is prolonged rather than sudden.
Keywords: High Performance Densified Small Particle Concrete (HPDSPC), Steel Fiber Reinforced Concrete (SFRC), Slurry Infiltrated Concrete (SIFCON), Slurry Infiltrated Mat Concrete (SIMCON).
144 Comparison of Polynomial and Radial Basis Kernel Functions based SVR and MLR in Modeling Mass Transfer by Vertical and Inclined Multiple Plunging Jets
Abstract:
Presently, various computational techniques are used in modeling and analyzing environmental engineering data. In the present study, an intra-comparison of polynomial and radial basis kernel functions based Support Vector Regression and, in turn, an inter-comparison with Multi Linear Regression have been attempted in modeling the mass transfer capacity of vertical (θ = 90°) and inclined (θ = …) multiple plunging jets (varying from 1 to 16 in number). The data set used in this study consists of four input parameters with a total of eighty-eight cases, forty-four each for vertical and inclined multiple plunging jets. For testing, tenfold cross validation was used. Correlation coefficient values of 0.971 and 0.981, along with corresponding root mean square error values of 0.0025 and 0.0020, were achieved by using polynomial and radial basis kernel functions based Support Vector Regression, respectively. The intra-comparison suggests improved performance by the radial basis function in comparison to polynomial kernel based Support Vector Regression. Further, the inter-comparison with Multi Linear Regression (correlation coefficient = 0.973 and root mean square error = 0.0024) reveals that radial basis kernel functions based Support Vector Regression performs better in modeling and estimating mass transfer by multiple plunging jets.
Keywords: Mass transfer, multiple plunging jets, polynomial and radial basis kernel functions, Support Vector Regression.
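A minimal sketch of this kind of intra-/inter-comparison using scikit-learn is given below. The synthetic four-parameter data set, the hyperparameters, and the scoring choices are assumptions for illustration and do not reproduce the paper's plunging-jet data or settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((88, 4))                                    # 4 inputs, 88 cases (synthetic stand-in)
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(88)

models = {
    "SVR (polynomial kernel)": SVR(kernel="poly", degree=2, C=10.0),
    "SVR (RBF kernel)": SVR(kernel="rbf", C=10.0, gamma="scale"),
    "Multi Linear Regression": LinearRegression(),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=10, scoring="r2")   # tenfold cross-validation
    rmse = -cross_val_score(model, X, y, cv=10, scoring="neg_root_mean_squared_error")
    print(f"{name}: R^2 = {r2.mean():.3f}, RMSE = {rmse.mean():.4f}")
```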
143 Government of Ghana’s Budget: Its Functions, Coverage, Classification, and Integration with Chart of Accounts
Authors: Mohammed Sani Abdulai
Abstract:
Government budgets are the primary instruments for formulating and implementing a country’s fiscal policy objectives, development priorities, and the overall socio-economic aspirations of its people. Thus, in this paper, the author examined the Government of Ghana’s budgets with respect to their functions, coverage, classifications, and integration with the country’s chart of accounts. The author did so by amalgamating the research findings of the extant literature with (a) the operational and procedural guidelines underpinning the formulation and execution of the government’s budgets; (b) the recommendations made by various development partners and think tanks on reforming the country’s budgeting processes and procedures; and (c) the lessons Ghana could learn from the budget reform efforts of other countries. By way of research findings, the paper showed that the Government of Ghana’s budgets are, in terms of function, both eclectic and multidimensional. On coverage, the paper showed that the country’s budgets duly cover the revenues and expenditures of the general government (i.e., both the central and sub-national governments). Finally, on classifications, the paper noted with delight the Government of Ghana’s effort in providing classificatory codes to both its national development agenda and such international development goals as the AU’s Agenda 2063 and the UN’s Sustainable Development Goals. However, the paper found some significant lapses, requiring a complete overhaul and restructuring, in the integration of its budget classifications with its chart of accounts. Thus, the paper concluded with a detailed examination of the challenges confronting the country’s current chart of accounts and recommendations for addressing them.
Keywords: Budget, budgetary transactions, budgetary governance, Chart of Accounts, classification, composition, coverage, Public Financial Management.
142 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction
Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz
Abstract:
This paper proposes new enhancements to nonlinear anisotropic diffusion methods to greatly reduce speckle and preserve image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction and of edge and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), the adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.
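For a concrete picture of gradient-weighted diffusivity, the sketch below implements the classical scalar Perona-Malik scheme. It is background only: it does not include the tensor-based EE/CE machinery or the scatterer density weighting that the paper adds on top.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=20.0, dt=0.2):
    """Classical gradient-weighted anisotropic diffusion (Perona-Malik),
    with periodic boundaries for simplicity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # diffusivity shrinks at strong edges
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u            # finite differences in four directions
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The paper's idea, loosely stated, is to let an estimate of local scatterer density modulate this diffusivity in addition to the gradient, so that homogeneous speckle regions are smoothed more aggressively than tissue boundaries.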
141 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules
Authors: Tamanna Siddiqui, M. Afshar Alam
Abstract:
Automated discovery of rules is, due to its applicability, one of the most fundamental and important methods in KDD. It has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality
Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.
140 An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the Magnetocardiography (MCG) signal, which is buried in noise, is difficult and is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, a Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve the minimization problem. However, the traditional TV regularization method tends to cause a staircase (step) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, higher-order TV is applied to reduce the step effect, and the corresponding second-order derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.
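For orientation, the standard 1-D TV denoising objective and its second-order (higher-order TV) variant are written below. This is the common textbook formulation; the paper's adaptive, peak-aware constraint parameters are not reproduced.

```latex
% Standard 1-D total variation denoising objective and its higher-order variant.
% D1 and D2 are the first- and second-order difference matrices, lambda > 0 the
% regularization parameter, y the noisy signal, x the denoised estimate.
\begin{align}
  \hat{\mathbf{x}} &= \arg\min_{\mathbf{x}}\;
      \tfrac{1}{2}\,\lVert \mathbf{y}-\mathbf{x}\rVert_2^2
      + \lambda\,\lVert D_1\mathbf{x}\rVert_1, \\
  \hat{\mathbf{x}}_{HO} &= \arg\min_{\mathbf{x}}\;
      \tfrac{1}{2}\,\lVert \mathbf{y}-\mathbf{x}\rVert_2^2
      + \lambda\,\lVert D_2\mathbf{x}\rVert_1.
\end{align}
% Replacing D1 by D2 penalizes curvature instead of slope, which is what reduces
% the staircase (step) effect mentioned above.
```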
139 Development of Sports Nation on the Way of Health Management
Authors: Beatrix Faragó, Zsolt Szakály, Ágnes Kovácsné Tóth, Csaba Konczos, Norbert Kovács, Zsófia Pápai, Tamás Kertész
Abstract:
The future of a nation is embodied in a healthy society. A key segment of government policy is the development of health and a health-oriented environment. As a result, sport, as an activator of health, is an important area for development. In Hungary, sport is a strategic sector with the aim of developing a sports nation. The function of sport in global society is multifaceted, which is manifested in both social and economic terms. The economic importance of sport is gaining ground in the world, with implications for Central and Eastern Europe. Smaller states, such as Hungary, cannot ignore the economic effects of sport. The relationship between physical activity and health is driven by the health economy towards becoming a factor in the nation's economy. In our research, we analyzed sport as a national strategic sector and its impact on age groups. By presenting the current state of health behavior, we get an idea of the directions in which development opportunities require even more intervention. The foundation of a nation's health is the young age group, whose health behavior will shape the future generation. The participants in our research were university students from the Faculty of Health and Sports Sciences, who will be experts in the field of health in the future. The other group is the elderly, who are a growing social group due to demographic change and are a key segment of the labor market and consumer society. Our study presents the health behavior of the two age groups, and their differences and similarities. The survey also identifies gaps in the development of a health management strategy that national strategies should take into account.
Keywords: Competitiveness, health behavior, health economy, health management, sports nation.
138 The Formation of Mutual Understanding in Conversation: An Embodied Approach
Authors: Haruo Okabayashi
Abstract:
Mutual understanding in conversation is very important for human relations. This study investigates the mental function behind the formation of mutual understanding between two people in conversation using an embodied approach. Forty people participated in this study. They were divided into pairs randomly. Four conversation situations between the two participants (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported “I did not understand my partner” show a collapsed shape, which means the fluctuation of their rhythm is too small to match their partner’s rhythm, and their cross-correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both LLEs tend to resonate, too. In human history, in order for human beings, as weak mammals, to live, they may have stayed with others; that is, they have developed resonating characteristics, which is called self-organization. However, the resonant feature sometimes collapses, depending on the lifestyle that the person formed after birth. It is difficult for people who do not have a lifestyle of mutual gaze to make their biological signal waves resonate with others’. These people show features such as anxiety, fatigue, and a tendency toward confusion. Mutual understanding is thought to be formed as a result of cooperation between the self-organizing features of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. Through this research, it is found that the formation of mutual understanding is expressed by the rhythm of a biological signal showing a nonlinear relationship.
Keywords: Embodied approach, finger plethysmogram, mutual understanding, nonlinear phenomenon.
137 Development of Electrospun Membranes with Defined Collagen and Polyethylene Oxide Architectures Reinforced with Medium and High Intensity Statins
Authors: S. Jaramillo, Y. Montoya, W. Agudelo, J. Bustamante
Abstract:
Cardiovascular diseases (CVD) are disorders of the heart and blood vessels; these include pathologies such as coronary or peripheral arterial disease, caused by narrowing of the vessel wall (atherosclerosis), which is related to the accumulation of Low-Density Lipoproteins (LDL) in the arterial walls and leads to a progressive reduction of the lumen of the vessel and alterations in blood perfusion. Currently, the main therapeutic strategy for this type of alteration is drug treatment with statins, which inhibit the enzyme 3-hydroxy-3-methyl-glutaryl-CoA reductase (HMG-CoA reductase), responsible for modulating the rate of production of cholesterol and other isoprenoids in the mevalonate pathway. Inhibition of this enzyme induces the expression of LDL receptors in the liver, increasing their number on the surface of liver cells and reducing the plasma concentration of cholesterol. On the other hand, when a blood vessel presents stenosis, a surgical procedure with vascular implants is indicated; these are used to restore circulation in the arterial or venous bed. Among the materials used for the development of vascular implants are Dacron® and Teflon®, which serve to re-seal the circulatory circuit but, due to their low biocompatibility, do not have the ability to promote remodeling and tissue regeneration processes. Based on this, the present research proposes the development of a hydrolyzed collagen and polyethylene oxide electrospun membrane reinforced with medium and high-intensity statins, so that in future research its microarchitecture can favor tissue remodeling processes.
Keywords: Atherosclerosis, medium and high-intensity statins, microarchitecture, electrospun membrane.
136 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations
Authors: Satyanadh Gundimada, Vijayan K Asari
Abstract:
A novel feature selection strategy to improve recognition accuracy for faces affected by non-uniform illumination, partial occlusions, and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property makes it possible to achieve lighting-invariant face recognition. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, which is the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
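One standard way to write the between-class to within-class variance ratio used as the criterion above is given below; it is a generic Fisher-type form, and the exact normalization used in the paper may differ.

```latex
% Generic between-class to within-class variance criterion for a candidate
% feature set, computed in the PCA domain. Classes c contain n_c samples,
% with class means \bar{x}_c and overall mean \bar{x}:
\begin{equation}
  J = \frac{\displaystyle\sum_{c} n_c\,\lVert \bar{x}_c - \bar{x} \rVert^2}
           {\displaystyle\sum_{c}\sum_{i \in c} \lVert x_i - \bar{x}_c \rVert^2}.
\end{equation}
% Feature sets are ranked by J in decreasing order; a larger J means the classes
% are better separated relative to their internal scatter.
```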
135 An Autonomous Collaborative Forecasting System Implementation – The First Step towards Successful CPFR System
Authors: Chi-Fang Huang, Yun-Shiow Chen, Yun-Kung Chung
Abstract:
In the past decade, artificial neural networks (ANNs) have been regarded as an instrument for problem-solving and decision-making; indeed, they have already delivered substantial efficiency and effectiveness improvements in industries and businesses. In this paper, Back-Propagation neural Networks (BPNs) will be combined in a modular fashion to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR balances product supply against customer demand in a Supply and Demand Chain (SDC). Several classical standard BPNs will be grouped, made to collaborate, and exploited for the easy implementation of the proposed modular ANN framework based on the topology of an SDC. Each individual BPN is applied as a modular tool to perform the task of forecasting the SKU (Stock-Keeping Unit) levels that are managed and supervised at a POS (point of sale), a wholesaler, and a manufacturer in an SDC. The proposed modular BPN-based CF system will be exemplified and experimentally verified using numerous datasets from the simulated SDC. The experimental results showed that a complex CF problem can be divided into a group of simpler sub-problems based on the individual trading partners distributed over the SDC, and its SKU forecasting accuracy was satisfactory when the system's forecasted values were compared to the original simulated SDC data. The primary task of implementing autonomous CF involves the study of supervised ANN learning methodology, which aims at making "knowledgeable" decisions for the best SKU sales plan and stock management.
Keywords: CPFR, artificial neural networks, global logistics, supply and demand chain.
134 Analysis and Control of Camera Type Weft Straightener
Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae
Abstract:
In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before the heat treatment because it is difficult to revert it once formed. To produce a product of the right shape, camera type weft straighteners have recently been applied to capture and process fabric images quickly. They are more powerful in determining the final textile quality than a photo-sensor. Positioned in front of a stenter machine, the weft straightener helps to spread the fabric evenly and keep the angle between warp and weft constant at a right angle by handling the skew and bow rollers. To carry out this delicate procedure, a structural analysis should be performed in advance, on the basis of which its control technology can be derived. The structural analysis serves to determine the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera type weft straightener to plain weave fabric and found it feasible, along with the specific working conditions of the machine and rollers. In this research, we aimed to explore a further application of the camera type weft straightener, namely whether it can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis is done with the ANSYS software using the Finite Element Analysis method. The control function is demonstrated by experiment. In conclusion, the structural analysis of the weft straightener is done to identify the specific characteristics between roller and fabrics, and the control of the skew and bow rollers is done to decrease the error in the angle between warp and weft. Finally, it is proved that the camera type straightener can also be used for special fabrics.
Keywords: Camera type weft straightener, structure analysis, control, skew and bow roller.
133 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials
Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic
Abstract:
The quality of laser welded-brazed (LWB) joints is strongly dependent on the main process parameters; therefore, the effects of laser power (3.2–4 kW), welding speed (60–80 mm/s) and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization process by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box–Behnken design, implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at the optimized process condition, which exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in coach peel configuration. The high scanning speed could keep the thickness of the intermetallic compound (IMC) layer as thin as 5 µm. The thermal simulations of the joining process were conducted by the Finite Element Method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history showed that the duration of the critical temperature range (700–900 °C) in this high scanning speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of a diffusion-controlled one.
Keywords: Laser welding-brazing, finite element, response surface methodology, multi-response optimization, cross-beam laser.
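The desirability approach referenced above is usually the Derringer-Suich formulation sketched below. It is quoted as the standard textbook form; the weights, bounds, and targets used in the paper are not reproduced.

```latex
% Derringer-Suich desirability (standard form, larger-the-better case) for each
% response y_i with lower bound L_i, target T_i and weight r_i, and the composite
% desirability D maximized over the process parameters:
\begin{align}
  d_i(y_i) &=
  \begin{cases}
    0, & y_i < L_i, \\[2pt]
    \left(\dfrac{y_i - L_i}{T_i - L_i}\right)^{r_i}, & L_i \le y_i \le T_i, \\[6pt]
    1, & y_i > T_i,
  \end{cases} \\
  D &= \left(\prod_{i=1}^{n} d_i\right)^{1/n}.
\end{align}
% Smaller-the-better responses (e.g., surface roughness) use the mirrored form of d_i.
```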
132 Efficient Compact Micro DBD Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems
Authors: Kuvshinov, D., Siswanto, A., Lozano-Parada, J., Zimmerman, W. B.
Abstract:
Ozone is well known as a powerful, fast-reacting oxidant. Ozone-based processes leave no by-product residue, as non-reacted ozone decomposes to molecular oxygen. Therefore, the application of ozone is widely accepted as one of the main approaches to the development of sustainable and clean technologies.
There are a number of technologies which require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of a mini/micro scale ozone generator which can be directly incorporated into production units.
Our report presents a feasibility study of a new micro scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented.
At the selected operating conditions, with a residence time of 0.25 s, the process of ozone generation is not limited by the reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltages starting from 3.5 kV, with an ozone concentration of 5.28×10⁻⁶ mol/L at 5 kV. This is in line with the data presented in a numerical investigation of the MROG. It was shown that, in comparison to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure.
The MROG construction makes it applicable to both submerged and dry systems. With its robust, compact design, the MROG can be used as an integrated module in production lines of high complexity.
Keywords: DBD, micro reactor, ozone, plasma.
131 Improvement of Overall Equipment Effectiveness through Total Productive Maintenance
Abstract:
Frequent machine breakdowns, low plant availability and increased overtime are a great threat to a manufacturing plant, as they increase the operating costs of an industry. The main aim of this study was to improve Overall Equipment Effectiveness (OEE) at a manufacturing company through the implementation of innovative maintenance strategies. A case study approach was used. The paper focuses on improving maintenance in a manufacturing set-up using an innovative mix of maintenance regimes to improve overall equipment effectiveness. Interviews, reviews of documentation and historical records, and direct and participatory observation were used as data collection methods during the research. Production is usually measured by the total kilowatts of motors produced per day. The target at 91% availability is 75 kilowatts a day. Reduced demand and a lack of raw materials, particularly imported items, are adversely affecting the manufacturing operations. The company had to reset its targets from the usual figure of 250 kilowatts per day to a mere 75 per day due to lower availability of machines as a result of breakdowns as well as the lack of raw materials. Price reductions and uncertainties, as well as general machine breakdowns, further lowered production. Some recommendations were given. For instance, employee empowerment in the company will enhance responsibility and authority to improve and totally eliminate the six big losses. If the maintenance department is to realise its proper function in a progressive, innovative industrial society, then its personnel must be continuously trained to meet current needs as well as future requirements. To make the maintenance planning system effective, it is essential to keep track of all corrective maintenance jobs and preventive maintenance inspections. For large processing plants these cannot be handled manually. It was therefore recommended that the company implement a Computerised Maintenance Management System (CMMS).
Keywords: Maintenance, Manufacturing, Overall Equipment Effectiveness
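For reference, OEE as discussed above is conventionally computed as the product of three factors. The definition below is the standard one, stated as background rather than a figure taken from the case study.

```latex
% Standard definition of Overall Equipment Effectiveness (background, not a
% result from the case study):
\begin{equation}
  \mathrm{OEE} =
  \underbrace{\frac{\text{operating time}}{\text{planned production time}}}_{\text{Availability}}
  \times
  \underbrace{\frac{\text{actual output}}{\text{theoretical output}}}_{\text{Performance}}
  \times
  \underbrace{\frac{\text{good units}}{\text{total units}}}_{\text{Quality}}.
\end{equation}
% The "six big losses" cited above (breakdowns, setup and adjustment, idling and
% minor stoppages, reduced speed, startup losses, and defects/rework) each erode
% one of these three factors.
```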
130 The Micro Ecosystem Restoration Mechanism Applied for Feasible Research of Lakes Eutrophication Enhancement
Authors: Ching-Tsan Tsai, Sih-Rong Chen, Chi-Hung Hsieh
Abstract:
The technique of inducing micro-ecosystem restoration is one of the aquatic ecological engineering methods used to remediate polluted water. A batch scale study, a pilot plant study, and a field study were carried out to observe eutrophication using the Inducing Ecology Restorative Symbiosis Agent (IERSA), consisting mainly of products degraded by lactobacillus, saccharomycete, and phycomycete. The results obtained from the batch scale and pilot plant experiments allowed us to develop the parameters for the field study. A pond, 5 m from the outlet of a lake, with an area of 500 m² and a depth of 0.6-1.2 m, containing about 500 tons of water, was selected as a model. After treatment with 10 mg IERSA/L of water twice a week for 70 days, the micro-restoration mechanism consisted of three stages (i.e., restoration, impact maintenance, and an ecology recovery experiment after the impact). The COD, TN, TKN, and chlorophyll a were reduced significantly in the first week. Although unexpected heavy rain and contamination from the sewage system might have slowed the ecological restoration, the self-cleaning function continued and the chlorophyll a was reduced by 50% in one month. In the 4th week, amoeba, paramecium, rotifer, and red wriggler worm reappeared, and the number of fish fry reached up to 1000 fish fry/m³. Those results proved that the induced restorative mechanism can be applied to improve eutrophication and to control the growth of algae in lakes by achieving self-cleaning through the induction of and competition among microbes. Fish growth also reached an excellent level owing to the improvement in water quality.
Keywords: Ecosystem restoration, eutrophication, lake.
129 Development of Circulating Support Environment of Multilingual Medical Communication using Parallel Texts for Foreign Patients
Authors: Mai Miyabe, Taku Fukushima, Takashi Yoshino, Aguri Shigeno
Abstract:
The need for multilingual communication in Japan has increased due to an increase in the number of foreigners in the country. When people communicate in their nonnative language, the differences in language prevent mutual understanding among the communicating individuals. In the medical field, communication between the hospital staff and patients is a serious problem. Currently, medical translators accompany patients to medical care facilities, and the demand for medical translators is increasing. However, medical translators cannot necessarily provide support, especially in cases in which round-the-clock support is required or in case of emergencies. The medical field has high expectations from information technology. Hence, a system that supports accurate multilingual communication is required. Despite recent advances in machine translation technology, it is very difficult to obtain highly accurate translations. We have developed a support system called M3 for multilingual medical reception. M3 provides support functions that aid foreign patients in the following respects: conversation, questionnaires, reception procedures, and hospital navigation; it also has a Q&A function. Users can operate M3 using a touch screen and receive text-based support. In addition, M3 uses accurate translation tools called parallel texts to facilitate reliable communication through conversations between the hospital staff and the patients. However, if there is no parallel text that expresses what users want to communicate, the users cannot communicate. In this study, we have developed a circulating support environment for multilingual medical communication using parallel texts. The proposed environment can circulate necessary parallel texts through the following procedure: (1) a user provides feedback about the necessary parallel texts, following which (2) these parallel texts are created and evaluated.
Keywords: Multilingual medical communication, parallel texts.
128 Advanced Stochastic Models for Partially Developed Speckle
Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije
Abstract:
Speckled images arise when coherent microwave, optical, or acoustic imaging techniques are used to image an object, surface, or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise, as demonstrated by the Central Limit Theorem.
Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound.
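A quick way to see the limiting behavior described above is to simulate speckle as a random phasor sum. The sketch below uses unit-amplitude scatterers with uniform phases and a Poisson-distributed scatterer count per cell; this is an illustrative assumption, not the paper's Gram-Charlier/Laguerre model.

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_intensity(mean_scatterers: float, n_cells: int = 20_000) -> np.ndarray:
    """Intensity of a random phasor sum per resolution cell."""
    counts = rng.poisson(mean_scatterers, size=n_cells)    # scatterer count per cell
    intensities = np.empty(n_cells)
    for i, k in enumerate(counts):
        phases = rng.uniform(0.0, 2 * np.pi, k)            # uniformly random phases
        field = np.sum(np.exp(1j * phases))                 # coherent phasor sum
        intensities[i] = np.abs(field) ** 2
    return intensities

# As the mean scatterer count grows, the intensity law approaches an exponential
# distribution, for which the contrast ratio std/mean tends to 1.
for m in (2, 5, 50):
    I = speckle_intensity(m)
    print(m, round(I.std() / I.mean(), 3))
```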
127 A Refined Application of QFD in SCM, A New Approach
Authors: Nooshin La'l Mohamadi
Abstract:
Because customers in the new century tend to express globally increasing demands, networks of interconnected businesses have been established in societies, and the management of such networks appears to be a major key to gaining competitive advantages. Supply chain management encompasses such managerial activities. Within a supply chain, a critical role is played by quality. QFD is a widely utilized tool which serves the purpose not only of bringing quality to the ultimate provision of products or service packages required by the end customer or the retailer, but also of initiating a satisfactory relationship with the initial customer, that is, the wholesaler. However, the wholesalers' cooperation is considerably based on capabilities that are heavily dependent on their locations and existing circumstances. Therefore, it is undeniable that for all companies each wholesaler possesses a specific importance ratio which can heavily influence the figures calculated in the House of Quality in QFD. Moreover, due to the competitiveness of the marketplace today, it has been widely recognized that consumers' expression of demands has been highly volatile during production periods. Apparently, such instability and proneness to change have been very tangibly noticed, and taking them into account during the analysis of the HOQ is widely influential and doubtlessly required. For a more reliable outcome in such matters, this article demonstrates the viability of applying the Analytic Network Process for considering the wholesalers' reputation, and simultaneously introduces a mortality coefficient for the reliability and stability of the consumers' expressed demands over time. Following this, the paper provides further elaboration on the relevant contributory factors and approaches through the calculation of such coefficients. In the end, the article concludes that an empirical application is needed to achieve broader validity.
Keywords: Analytic Network Process, Quality Function Deployment, QFD flaws, Supply Chain Management.
126 Toward Discovering an Architectural Typology Based on the Theory of Affordance
Authors: Falntina Ahmad Alata, Natheer Abu Obeid
Abstract:
This paper revolves around the concept of affordance. It aims to discover and develop an architectural typology based on the ecological concept of affordance. In order to achieve this aim, an analytical study was conducted taking two sources into account: 1) Gibson's definition of the concept of affordance and 2) the research concerned with affordance categorisation. As a result, this paper concluded 16 typologies of affordance, including the possibilities of mixing them, based on both sources. To clarify these typologies and provide further understanding, a wide range of architectural examples are presented and discussed in the paper. To prove this vocabulary’s capability to diagnose and evaluate the affordance of different environments, an experimental study with two processes was adopted: 1) a diagnostic process, the interpretation of environments with regard to their affordance by using the new vocabulary (the developed typologies); and 2) an evaluating process, the evaluation of the environments that had been interpreted and classified with regard to their affordances, using the measures of emotional experience (the positive affect ‘PA’ and the negative affect ‘NA’) and the architectural evaluation criteria (beauty, economy and function). The experimental study proves that the typologies are capable of reading the affordance within different environments. Additionally, it explains how these different typologies reflect different interactions based on the previous processes. The data obtained from the evaluation measures explain how different typologies of affordance, reflected in different environments, received different evaluations; in fact, some of them are recommended while others are not. In other words, the paper draws a roadmap for designers to diagnose, evaluate and analyse the affordance of different architectural environments, and then guides them toward adopting the best interaction (affordance category) in their proposed designs.
Keywords: Affordance theory, affordance categories, architectural environments, architectural evaluation criteria, emotional experience.