Search results for: Average Mean Distance function

279 Fractal Analysis of 16S rRNA Gene Sequences in Archaea Thermophiles

Authors: T. Holden, G. Tremberger, Jr, E. Cheung, R. Subramaniam, R. Sullivan, N. Gadura, P. Schneider, P. Marchese, A. Flamholz, T. Cheung, D. Lieberman

Abstract:

A nucleotide sequence can be expressed as a numerical sequence when each nucleotide is assigned its proton number. The resulting gene numerical sequence can be investigated for its fractal dimension in terms of evolution and chemical properties for comparative studies. We have investigated such nucleotide fluctuation in the 16S rRNA gene of archaea thermophiles. The studied archaea thermophiles were Archaeoglobus fulgidus, Methanothermobacter thermautotrophicus, Methanocaldococcus jannaschii, Pyrococcus horikoshii, and Thermoplasma acidophilum. These five archaea-euryarchaeota thermophiles have fractal dimension values ranging from 1.93 to 1.97. Computer simulation shows that random sequences would have an average fractal dimension of about 2, with a standard deviation of about 0.015. The fractal dimension was found to correlate negatively with the thermophile's optimal growth temperature, with an R² value of 0.90 (N = 5). The inclusion of two archaea-crenarchaeota thermophiles reduces the R² value to 0.66 (N = 7). Further inclusion of two bacterial thermophiles reduces the R² value to 0.50 (N = 9). The fractal dimension correlates positively with the sequence GC content, with an R² value of 0.89 for the five archaea-euryarchaeota thermophiles (and 0.74 for the entire set of N = 9), although computer simulation shows little correlation. The highest (positive) correlation was found between the fractal dimension and the dinucleotide Shannon entropy. However, Shannon entropy and sequence GC content were observed to correlate with optimal growth temperature, with R² values of 0.8 (negative) and 0.88 (positive), respectively, for the entire set of 9 thermophiles; thus the correlation lacks species specificity. Together with another correlation study of bacterial radiation dosage with RecA repair gene sequence fractal dimension, it is postulated that fractal dimension analysis is a sensitive tool for studying the relationship between genotype and phenotype among closely related sequences.
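
The abstract does not name its fractal dimension estimator, so the sketch below uses one common choice for 1-D series, Higuchi's method (random sequences score near 2, matching the simulation figure quoted above), together with the dinucleotide Shannon entropy. The base-to-number mapping (A = 70, T = 66, G = 78, C = 58, the proton counts of the free nucleobases) is an assumption for illustration.

```python
import numpy as np
from collections import Counter
from math import log2

# Assumed mapping: proton count of each free nucleobase.
PROTONS = {"A": 70, "T": 66, "G": 78, "C": 58}

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D series (Higuchi's method)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length for this offset and step k
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (len(idx) - 1) / k
            lengths.append(lm)
        lk.append(np.mean(lengths))
    # L(k) ~ k^(-D): slope of log L(k) vs log(1/k) estimates D
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

def dinucleotide_entropy(seq):
    """Shannon entropy (bits) of the 16 overlapping dinucleotide frequencies."""
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = sum(pairs.values())
    return -sum((c / total) * log2(c / total) for c in pairs.values())

seq = "".join(np.random.choice(list("ATGC"), 1500))   # stand-in for a 16S rRNA gene
series = [PROTONS[b] for b in seq]
print(higuchi_fd(series), dinucleotide_entropy(seq))  # FD near 2 for random sequences
```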

Keywords: Fractal dimension, archaea thermophiles, Shannon entropy, GC content

278 Study of Equilibrium and Mass Transfer of Co-Extraction of Different Mineral Acids with Iron(III) from Aqueous Solution by Tri-n-Butyl Phosphate Using Liquid Membrane

Authors: Diptendu Das, Vikas Kumar Rahi, V. A. Juvekar, R. Bhattacharya

Abstract:

Extraction of Fe(III) from aqueous solution using Tri-n-butyl Phosphate (TBP) as carrier needs a highly acidic medium (>6 N), as this favours formation of the chelating complex FeCl₃·TBP. Similarly, stripping of iron(III) from the loaded organic solvent requires a neutral-pH or alkaline medium to dissociate the same complex. It is observed that TBP co-extracts acids along with the metal, which reverses the driving force of extraction, and iron(III) is re-extracted back from the strip phase into the feed phase during Liquid Emulsion Membrane (LEM) pertraction. Therefore, the rates of extraction of different mineral acids (HCl, HNO₃, H₂SO₄) using TBP, with and without the metal Fe(III) present, were examined. It is revealed that acid extraction is enhanced in the presence of the metal. The mass transfer coefficients of both acid and metal extraction were determined using a Bulk Liquid Membrane (BLM). The average mass transfer coefficient was obtained by fitting the derived model equation to the experimentally obtained data. The mass transfer coefficients of the mineral acid extraction are in the order kHNO₃ = 3.3×10⁻⁶ m/s > kHCl = 6.05×10⁻⁷ m/s > kH₂SO₄ = 1.85×10⁻⁷ m/s. The distribution equilibria of the above-mentioned acids between the aqueous feed solution and a solution of tri-n-butyl phosphate (TBP) in organic solvents have been investigated. The stoichiometry of acid extraction reveals the formation of TBP·2HCl, HNO₃·2TBP, and TBP·H₂SO₄ complexes. Moreover, extraction of iron(III) by TBP in HCl aqueous solution forms the complex FeCl₃·TBP·2HCl, while in HNO₃ medium it forms the complex 3FeCl₃·TBP·2HNO₃.
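
The derived model equation is not reproduced in the abstract. As a minimal sketch of how an average mass transfer coefficient can be extracted from concentration-time data, the code below fits a first-order approach-to-equilibrium model, c(t) = c_eq + (c₀ − c_eq)·exp(−k·(A/V)·t); the interfacial area-to-volume ratio and the data points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

A_over_V = 12.5  # interfacial area per unit volume, 1/m (assumed geometry)

def acid_decay(t, k, c_eq):
    # first-order interphase transport toward equilibrium
    c0 = 1.0  # initial feed-phase acid concentration, normalized
    return c_eq + (c0 - c_eq) * np.exp(-k * A_over_V * t)

t = np.array([0, 300, 600, 1200, 2400, 4800], dtype=float)   # s (hypothetical)
c = np.array([1.0, 0.86, 0.75, 0.61, 0.47, 0.40])            # feed-phase conc.

(k, c_eq), _ = curve_fit(acid_decay, t, c, p0=[1e-6, 0.4])
print(f"fitted mass transfer coefficient k = {k:.2e} m/s")
```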

Keywords: Bulk Liquid Membrane (BLM) Transport, Iron(III) extraction, Tri-n-butyl Phosphate, Mass Transfer coefficient.

277 General Formula for Water Surface Profile over Side Weir in Combined (Trapezoidal and Exponential) Channels

Authors: Abdulrahman Abdulrahman

Abstract:

A side weir is a hydraulic structure set into the side of a channel. This structure is used for water level control in channels, to divert flow from a main channel into a side channel when the water level in the main channel exceeds a specific limit, and as a storm overflow from urban sewerage systems. Computation of the water surface profile over a side weir is essential to determine its flow rate. Analytical solutions for the water surface profile along a rectangular side weir are available only for the special cases of rectangular and trapezoidal channels, assuming constant specific energy. In this paper, a rectangular side weir located in a combined (trapezoidal with exponential) channel is considered. By expanding binomial series of integer and fractional powers and using the reduction formula for cosine function integrals, a general analytical formula is obtained for the water surface profile along a side weir in a combined (trapezoidal with exponential) channel. Since triangular, rectangular, trapezoidal, and parabolic cross-sections are special cases of the combined cross-section, the derived formula is applicable to triangular, rectangular, and trapezoidal cross-sections as an analytical solution, and to parabolic cross-sections as a semi-analytical solution with a maximum relative error smaller than 0.76%. The proposed solution should be a useful engineering tool for the evaluation and design of side weirs in open channels.
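
The paper's closed-form series solution is not reproduced here, but its setting can be illustrated numerically. The sketch below integrates the classical spatially varied flow equation under De Marchi-type assumptions (constant specific energy; bed slope and friction slope neglected) for the trapezoidal special case; the channel geometry, weir height, and discharge coefficient are assumed values.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81
b, m = 2.0, 1.0          # trapezoid bottom width (m) and side slope (assumed)
p, Cd = 0.8, 0.6         # weir crest height (m) and discharge coefficient (assumed)
Q0, y0 = 4.0, 1.2        # discharge (m^3/s) and depth (m) at the weir start

def rhs(x, s):
    y, Q = s
    A = y * (b + m * y)          # flow area of the trapezoidal section
    T = b + 2 * m * y            # top width
    # lateral outflow over the side weir crest
    dQdx = -(2.0 / 3.0) * Cd * np.sqrt(2 * g) * max(y - p, 0.0) ** 1.5
    # spatially varied flow with decreasing discharge (S0 = Sf = 0 assumed)
    dydx = (-Q * dQdx / (g * A**2)) / (1.0 - Q**2 * T / (g * A**3))
    return [dydx, dQdx]

sol = solve_ivp(rhs, (0.0, 3.0), [y0, Q0], max_step=0.01)
print("depth along the weir:", sol.y[0][::50])
print("diverted discharge:", Q0 - sol.y[1][-1])
```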

Keywords: Analytical solution, combined channel, exponential channel, side weirs, trapezoidal channel, water surface profile.

276 Time Series Forecasting Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the website of the University of California, Irvine (UCI), which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
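
The preprocessing step shared by all five methods is slicing the series into look-back windows. The sketch below builds such windows and scores a persistence baseline (predict the last observed value) with MAE and RMSE; the synthetic series is a stand-in, since the abstract does not include the data pipeline.

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Slice a 1-D series into (X, y) pairs: X holds `look_back` past values,
    y the value `horizon` steps beyond the end of the window."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back + horizon - 1])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
pm25 = rng.gamma(2.0, 40.0, size=5 * 365 * 24)   # stand-in for 5 years of hourly data

# look-back window of one day (24 h), predicting 1 hour into the future
X, y = make_windows(pm25, look_back=24, horizon=1)

# persistence baseline: the next value equals the last observed value
pred = X[:, -1]
mae = np.mean(np.abs(pred - y))
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"baseline MAE={mae:.3f}  RMSE={rmse:.3f}")
```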

Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.

275 On the AC-Side Interface Filter in Three-Phase Shunt Active Power Filter Systems

Authors: Mihaela Popescu, Alexandru Bitoleanu, Mircea Dobriceanu

Abstract:

The proper selection of the AC-side passive filter interconnecting the voltage source converter to the power supply is essential to obtain satisfactory performance of an active power filter system. The use of an LCL-type filter has the advantage of eliminating the high-frequency switching harmonics in the current injected into the power supply. This paper is mainly focused on analyzing the influence of the interface filter parameters on the active filtering performance. Some design aspects are pointed out. Thus, the design of the AC interface filter starts from transfer functions, by imposing the filter performance requirement of significant attenuation of the switching harmonics in the current without affecting the harmonics to be compensated. A Matlab/Simulink model of the entire active filtering system, including a concrete nonlinear load, has been developed to examine the system performance. It is shown that a gamma LC filter can accomplish the attenuation requirement for the current provided by the converter. Moreover, the existence of an optimal value of the grid-side inductance which minimizes the total harmonic distortion factor of the power supply current is pointed out. Nevertheless, a small converter-side inductance and a damping resistance in series with the filter capacitance are absolutely needed in order to keep the ripple and oscillations of the current at the converter side within acceptable limits. The effect of changes in the LCL-filter parameters is evaluated. It is concluded that good active filtering performance can be achieved with small values of the capacitance and the converter-side inductance.
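
The specific transfer functions used in the design are not reproduced in the abstract. As a sketch, the standard converter-voltage-to-grid-current transfer function of an LCL filter with a damping resistor Rd in series with the capacitor is H(s) = (1 + s·Rd·C) / (s³·Lc·Lg·C + s²·Rd·C·(Lc+Lg) + s·(Lc+Lg)); the component values below are assumed for illustration.

```python
import numpy as np

# converter-side L, grid-side L, filter C, damping R (assumed values)
Lc, Lg, C, Rd = 2e-3, 0.5e-3, 10e-6, 1.5

def H(w):
    # grid current / converter voltage, stiff grid assumed
    s = 1j * w
    num = 1 + s * Rd * C
    den = s**3 * Lc * Lg * C + s**2 * Rd * C * (Lc + Lg) + s * (Lc + Lg)
    return num / den

f = np.logspace(1, 5, 400)
mag_db = 20 * np.log10(np.abs(H(2 * np.pi * f)))

# undamped resonance frequency of the LCL network
f_res = np.sqrt((Lc + Lg) / (Lc * Lg * C)) / (2 * np.pi)
print(f"resonance frequency = {f_res:.0f} Hz")
print(f"attenuation at 10 kHz: {mag_db[np.argmin(np.abs(f - 1e4))]:.1f} dB")
```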

Keywords: Active power filter, LCL filter, Matlab/Simulink modeling, Passive filters, Transfer function.

274 Simulation of Dynamic Behavior of Seismic Isolators Using a Parallel Elasto-Plastic Model

Authors: Nicolò Vaiana, Giorgio Serino

Abstract:

In this paper, a one-dimensional (1d) Parallel Elasto-Plastic Model (PEPM), able to simulate the uniaxial dynamic behavior of seismic isolators having a continuously decreasing tangent stiffness with increasing displacement, is presented. The parallel modeling concept is applied to discretize the continuously decreasing tangent stiffness function, thus making it possible to simulate the dynamic behavior of seismic isolation bearings by putting linear elastic and nonlinear elastic-perfectly plastic elements in parallel. The mathematical model has been validated by comparing the experimental force-displacement hysteresis loops, obtained by testing a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted numerically. Good agreement between the simulated and experimental results shows that the proposed model can be an effective numerical tool to predict the force-displacement relationship of seismic isolators within relatively large displacements. Compared to the widely used Bouc-Wen model, the proposed one avoids the numerical solution of a first-order nonlinear ordinary differential equation for each time step of a nonlinear time history analysis, thus reducing the computational effort, and requires the evaluation of only three model parameters from experimental tests, namely the initial tangent stiffness, the asymptotic tangent stiffness, and a parameter defining the transition from the initial to the asymptotic tangent stiffness.
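
A minimal sketch of the parallel idea follows: one linear spring carries the asymptotic stiffness, and a handful of elastic-perfectly-plastic springs with staggered yield displacements supply the decaying part of the tangent stiffness. The discretization rule and all parameter values are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def pepm_force(u_history, k0=10.0, k_inf=1.0, u_t=0.05, n=8):
    """Force response of a 1-D parallel elasto-plastic model: a linear spring
    (asymptotic stiffness k_inf) in parallel with n elastic-perfectly-plastic
    springs that together supply the decaying part (k0 - k_inf) of the initial
    tangent stiffness; u_t spaces the yield displacements (transition)."""
    uy = u_t * np.arange(1, n + 1)          # staggered yield displacements
    k = np.full(n, (k0 - k_inf) / n)        # stiffness of each EPP spring
    up = np.zeros(n)                        # plastic offsets
    out = []
    for u in u_history:
        f_trial = k * (u - up)
        fy = k * uy
        f = np.clip(f_trial, -fy, fy)       # perfectly plastic cap
        up = u - f / k                      # return mapping for yielded springs
        out.append(k_inf * u + f.sum())
    return np.array(out)

# one displacement cycle: softening loading branch, then hysteretic unloading
u = np.concatenate([np.linspace(0, 0.5, 100), np.linspace(0.5, -0.5, 200)])
print(pepm_force(u)[[0, 99, 299]])
```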

Keywords: Base isolation, earthquake engineering, parallel elasto-plastic model, seismic isolators, softening hysteresis loops.

273 Limiting Fiber Extensibility as a Parameter for Damage in the Venous Wall

Authors: Lukas Horny, Rudolf Zitny, Hynek Chlup, Tomas Adamek, Michal Sara

Abstract:

An inflation–extension test with a human vena cava inferior was performed with the aim of fitting a material model. The vein was modeled as a thick-walled tube loaded by internal pressure and axial force. The material was assumed to be an incompressible hyperelastic fiber-reinforced continuum. Fibers are supposed to be arranged in two families of anti-symmetric helices. The considered anisotropy corresponds to local orthotropy. The strain energy density function used was based on the concept of limiting fiber extensibility. The pressurization comprised four pre-cycles under physiological venous loading (0–4 kPa) and four cycles under nonphysiological loading (0–21 kPa). Each overloading cycle was performed with a different axial weight. The overloading data were used in a regression analysis to fit the material model. The considered model did not fit the experimental data well; in particular, the predictions of the axial force failed. It was hypothesized that, due to the nonphysiological loading pressures and the different axial weights, the material was not preconditioned enough and some damage occurred inside the wall. The limiting fiber extensibility parameter Jm was assumed to be related to this supposed damage. Each of the overloading cycles was fitted separately with a different value of Jm, while the other parameters were held the same. This approach turned out to be successful. A variable value of Jm can describe the changes in the axial force–axial stretch response and satisfy the pressure–radius dependence simultaneously.
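
The abstract does not print the strain energy function. A commonly used limiting-extensibility (Gent-type) form for a fiber family, shown here only as an assumed illustration of how a single parameter Jm caps fiber stretch, is:

```latex
% Gent-type fiber strain energy with limiting extensibility parameter J_m
% (assumed illustrative form; the paper's exact function is not reproduced here)
W_{\mathrm{f}}(I_4) = -\frac{\mu\, J_m}{2}\,
\ln\!\left[\,1-\frac{(I_4-1)^{2}}{J_m}\,\right],
\qquad I_4 = \mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0
```

The energy blows up as (I₄ − 1)² approaches Jm, so Jm sets the limiting fiber stretch; letting Jm vary between overloading cycles, as in the paper, is then a natural one-parameter proxy for accumulated damage.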

Keywords: Constitutive model, damage, fiber reinforced composite, limiting fiber extensibility, preconditioning, vena cava inferior.

272 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of performing transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed by a suitable polynomial approximation into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR), and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have been later encoded and handled to generate chirps at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
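
Of the evaluated approximations, the SVD variant is the simplest to sketch: the code below truncates each 4 × 4 block to its rank-1 SVD (keeping 2·4 + 1 = 9 coefficients instead of 16) and reports the PSNR against the original; the random image is a stand-in for real data.

```python
import numpy as np

def block_rank1(img, bs=4):
    """Approximate each bs x bs block by its rank-1 SVD truncation."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            blk = img[i:i + bs, j:j + bs].astype(float)
            U, s, Vt = np.linalg.svd(blk, full_matrices=False)
            out[i:i + bs, j:j + bs] = s[0] * np.outer(U[:, 0], Vt[0])
    return out

def psnr(orig, approx, peak=255.0):
    mse = np.mean((orig.astype(float) - approx) ** 2)
    return 10 * np.log10(peak**2 / mse)

img = np.random.randint(0, 256, (64, 64))   # stand-in for a real grayscale image
approx = block_rank1(img)
print(f"PSNR = {psnr(img, approx):.2f} dB")
```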

Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.

271 Changing Geomorphosites in a Changing Lake: How Environmental Changes in Urmia Lake Have Been Driving the Vanishing or Creation of Geomorphosites

Authors: D. Mokhtari

Abstract:

Any variation in the environmental characteristics of geomorphosites can destabilise their geotouristic values, all around the planet. Urmia Lake, with an area of approximately 5,500 km² and a catchment area of 51,876 km², has for various reasons been in sharp decline over time, especially in the last fifty years, shrinking by about 93% in the two most recent decades. These variations are not only driving significant changes in the morphology and ecology of the present lake landscape, but at the same time are shaping newly formed morphologies, which have erased some valuable geomorphosites or produced smaller geomorphosites of significant scientific and cultural value. This paper analyses and discusses the features and evolution of several representative coastal and island geomorphosites. For this purpose, a total of 23 geomorphosites were studied in two data series (1963 and 2015), and the respective data were compared and analysed. The results showed that the loss of geomorphosite area over half a century amounted to more than 90% of the valuable geomorphosites. Moreover, the comparison between the mean yearly loss of coastal area over the entire period and the yearly average calculated for the shorter period (1998-2014) clearly indicates a pattern of acceleration. This acceleration in the rate of reduction in lake area was seen in most of the southern half of the lake. In the wider region as well, the general fall in water level is not only causing the loss of a significant water resource, with major impacts on regional ecosystems, but is also driving the most marked recent (last-century) changes in the geotouristic landscapes. In fact, the disappearance of geomorphosites means the loss of a tourism phenomenon. In this context, attention must be paid to the question of conservation. The action needed to safeguard geomorphosites includes: 1) preventive action, 2) corrective action, and 3) sharing knowledge.

Keywords: Changing lake, environmental changes, geomorphosite, northwest of Iran, Urmia lake.

270 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Extended Kalman Filter Training

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

This paper presents the prediction performance of a feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with an extended Kalman filter. Feedforward neural networks and ESNs are powerful neural networks which can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets and carries a risk of instability accompanied by large errors. In this study, we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as the initialization of the network and the improvement of prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential learning rate and regulation of the feedforward neural networks. This training approach has vital features for training the network when signals have a chaotic or non-stationary sequential pattern. Minimization of the variance in each step of the computation, and hence smoothing of the tracking, was observed in the results, indicating satisfactory tracking characteristics under certain conditions. In addition, the simulation results confirmed satisfactory performance of both neural networks, with modified parameterization, in tracking nonlinear signals.
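
The leaking rate mentioned above enters the ESN state update x(t+1) = (1 − α)·x(t) + α·tanh(W_in·u(t) + W·x(t)). The sketch below runs such a leaky reservoir and trains a linear readout; for simplicity it uses ridge regression in place of the extended Kalman filter the paper studies, and all sizes and scalings are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, alpha, rho = 200, 0.3, 0.9   # reservoir size, leaking rate, spectral radius

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius rho

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        pre = W_in @ np.array([u]) + W @ x
        x = (1 - alpha) * x + alpha * np.tanh(pre)   # leaky integration
        states.append(x.copy())
    return np.array(states)

# quasi-periodic driving signal and its one-step-ahead target
t = np.arange(2000)
u = np.sin(0.2 * t) * np.cos(0.0331 * t)
X, y = run_reservoir(u[:-1]), u[1:]

# ridge-regression readout (stand-in for the paper's EKF training)
lam = 1e-6
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ w_out - y) ** 2))
```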

Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.

269 A Comparative Study of Indoor Radon Concentrations between Dwellings and Workplaces in the Ko Samui District, Surat Thani Province, Southern Thailand

Authors: Kanokkan Titipornpun, Tripob Bhongsuwan, Jan Gimsa

Abstract:

The Ko Samui district of Surat Thani province is located in an area with high amounts of equivalent uranium in the ground surface, which is the source of radon. Our research in the Ko Samui district aimed at comparing indoor radon concentrations between dwellings and workplaces. Measurements of indoor radon concentrations were carried out in 46 dwellings and 127 workplaces, using CR-39 alpha-track detectors in closed cups. A total of 173 detectors were distributed in 7 sub-districts. The detectors were placed in the bedrooms of dwellings and the workrooms of workplaces. All detectors were exposed to airborne radon for 90 days. After exposure, the alpha tracks were made visible by chemical etching before being manually counted under an optical microscope. The track densities were assumed to be correlated with the radon concentration levels. We found that the radon concentrations could be well described by a log-normal distribution. Most concentrations (37%) were found in the range between 16 and 30 Bq·m⁻³. The radon concentrations in dwellings and workplaces varied from a minimum of 11 Bq·m⁻³ to a maximum of 305 Bq·m⁻³. The minimum (11 Bq·m⁻³) and maximum (305 Bq·m⁻³) values of indoor radon concentration were found in a workplace and a dwelling, respectively. Only for four samples (3%) were the indoor radon concentrations found to be higher than the reference level recommended by the WHO (100 Bq·m⁻³). The overall geometric mean in the surveyed area was 32.6±1.65 Bq·m⁻³, which was lower than the worldwide average (39 Bq·m⁻³). The statistical comparison of the geometric mean indoor radon concentrations showed that the geometric mean in dwellings (46.0±1.55 Bq·m⁻³) was significantly higher than in workplaces (28.8±1.58 Bq·m⁻³) at the 0.05 level. Moreover, our study found that the majority of the bedrooms in dwellings had a closed atmosphere, resulting in poorer ventilation than in most of the workplaces, which had access to air flow through open doors and windows in the daytime. We consider this to be the main reason for the higher geometric mean indoor radon concentration in dwellings compared to workplaces.
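
Because the concentrations are log-normal, the natural summary statistics are the geometric mean (GM) and geometric standard deviation (GSD), and the GM comparison reduces to a t-test on the log-transformed values. The sketch below reproduces that workflow on synthetic samples shaped to the reported GM/GSD figures; the draws are stand-ins, not the survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# synthetic log-normal samples shaped to the reported GM and GSD values
dwell = rng.lognormal(np.log(46.0), np.log(1.55), 46)
work = rng.lognormal(np.log(28.8), np.log(1.58), 127)

def gm_gsd(x):
    # geometric mean and geometric standard deviation
    return np.exp(np.mean(np.log(x))), np.exp(np.std(np.log(x), ddof=1))

print("dwellings GM, GSD:", gm_gsd(dwell))
print("workplaces GM, GSD:", gm_gsd(work))

# compare geometric means: Welch t-test on the log-transformed concentrations
t, p = stats.ttest_ind(np.log(dwell), np.log(work), equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}  (significant at the 0.05 level if p < 0.05)")
```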

Keywords: CR-39 detector, indoor radon, radon in dwelling, radon in workplace.

268 Modeling of Surface Roughness for Flow over a Complex Vegetated Surface

Authors: Wichai Pattanapol, Sarah J. Wakes, Michael J. Hilton, Katharine J.M. Dickinson

Abstract:

Turbulence modeling of large-scale flow over a vegetated surface is complex. Such problems involve large computational domains, while the characteristics of the flow near the surface must also be resolved. In modeling large-scale flow, surface roughness, including vegetation, is generally taken into account by means of roughness parameters in the modified law of the wall. However, the turbulence structure within the canopy region cannot be captured with this method; an alternative method, which applies source/sink terms to model plant drag, can be used instead. These models have been developed and tested intensively, but only with simple surface geometries. This paper aims to compare the use of roughness parameters and of additional source/sink terms in modeling the effect of plant drag on wind flow over a complex vegetated surface. The RNG k-ε turbulence model with the non-equilibrium wall function was tested for both cases. In addition, the k-ω turbulence model, which is claimed to be computationally stable, was also investigated with the source/sink terms. All numerical results were compared to the experimental results obtained at the study site, Mason Bay, Stewart Island, New Zealand. In the near-surface region, it is found that the results obtained by using the source/sink terms are more accurate than those using roughness parameters. The k-ω turbulence model with source/sink terms is more appropriate, as it is more accurate and more computationally stable than the RNG k-ε turbulence model. In the higher region, there is no significant difference among the results obtained from all simulations.
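
The abstract does not give the form of the source/sink terms; the parameterization most commonly used for canopy drag (assumed here for illustration) adds a sink to the i-th momentum equation,

```latex
S_{u_i} = -\,\rho\, C_d\, a\, \lvert \mathbf{U} \rvert\, u_i
```

where C_d is the canopy drag coefficient and a the leaf area density, with corresponding source/sink terms added to the k and ε (or ω) transport equations.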

Keywords: CFD, canopy flow, surface roughness, turbulence models.

267 Novel Use of a Quality Assurance Tool for Integrating Technology to HSE

Authors: Ragi Poyyara, Vivek V., Ashish Khaparde

Abstract:

The product development process (PDP) in the Technology group plays a very important role in the launch of any product. While a manufacturing process encourages the use of certain measures to reduce health, safety and environmental (HSE) risks on the shop floor, the PDP concentrates on the use of Geometric Dimensioning and Tolerancing (GD&T) to develop a flawless design. Furthermore, the PDP distributes and coordinates activities between different departments such as marketing, purchasing, and manufacturing. However, it is seldom realized that the PDP makes a significant contribution to developing a product that reduces HSE risks by encouraging the Technology group to use effective GD&T. GD&T is a precise communication tool that uses a set of symbols, rules, and definitions to mathematically define parts to be manufactured. It is a quality assurance method widely used in the oil and gas sector. Traditionally it is used to ensure the interchangeability of a part without affecting its form, fit, and function. Parts that do not meet these requirements are rejected during quality audits. This paper discusses how the Technology group integrates this quality assurance tool into the PDP and how the tool plays a major role in helping the HSE department in its goal of eliminating HSE incidents. The PDP involves a thorough risk assessment and establishes a method to address those risks during the design stage. An illustration shows how GD&T helped reduce safety risks by ergonomically improving assembly operations. A brief discussion explains how the tolerances provided on a part help prevent finger injury. This tool has equipped Technology to produce fixtures, which are used daily in operations as well as manufacturing. By applying GD&T to create good fits, HSE risks are mitigated for operating personnel. Both customers and service providers benefit from reduced safety risks.

Keywords: HSE, PDP, GD&T, risks.

266 Systematic Analysis of Dynamic Association of Health Outcomes with Computer Usage for Office Staff

Authors: Xiaoshu Lu, Esa-Pekka Takala, Risto Toivonen

Abstract:

This paper systematically investigates the time-dependent health outcomes for office staff during computer work, using the developed mathematical model. The model describes time-dependent health outcomes in multiple body regions associated with computer usage. The association is explicitly represented by a dose-response relationship which is parametrized by body region parameters. Using the developed model, we perform extensive investigations of the health outcomes, both statically and dynamically. We compare the at-risk body regions and provide various severity rankings of the discomfort rate changes with respect to computer-related workload, dynamically, for the study population. Application of the developed model reveals a wide range of findings; such a broad spectrum of investigations in a single report is lacking in the literature. Based upon the model analysis, it is discovered that the highest average severity levels of discomfort exist in the neck, shoulder, eyes, shoulder joint/upper arm, upper back, low back, and head. The biggest weekly changes in discomfort rates are in the eyes, neck, head, shoulder, shoulder joint/upper arm, and upper back. The fastest-growing discomfort rate is found in the neck, followed by the shoulder, eyes, head, shoulder joint/upper arm, and upper back. Most of our findings are consistent with the literature, which demonstrates that the developed model and results are applicable and valuable and can be utilized to assess the correlation between the amount of computer-related workload and health risk.

Keywords: Computer-related workload, health outcomes, dynamic association, dose-response relationship, systematic analysis.

265 Structural Characteristics of HPDSP Concrete in Beam-Column Joints

Authors: Sushil Kumar Swar, Sanjay Kumar Sharma, Hari Krishan Sharma, Sushil Kumar

Abstract:

The serious damage to structures during earthquakes shows the need for and importance of designing reinforced concrete structures with high ductility. Reinforced concrete beam-column joints have an important function in all structures. Under seismic excitation, the beam-column joint region is subjected to horizontal and vertical shear forces whose magnitude is many times higher than in the adjacent beam and column. The strength and ductility of structures depend mainly on proper detailing of the reinforcement in beam-column joints, and older structures have been found to be ductility-deficient. DSP materials are obtained by using high quantities of superplasticizers and high volumes of micro silica. In the case of High Performance Densified Small Particle Concrete (HPDSPC), since the concrete is dense even at the micro-structure level, the tensile strain would be much higher than that of conventional SFRC, SIFCON and SIMCON. This, in turn, improves the cracking behaviour, ductility, and energy absorption capacity of the composite, in addition to its durability. The fine fibers used in our mix are 0.3 mm in diameter and 10 mm long and can easily be placed at a high percentage. These fibers readily transfer stresses and act as a composite concrete unit to take up extremely high loads with high compressive strength. HPDSPC placed in beam-column joints helps protect human life because failure is prolonged.

Keywords: High Performance Densified Small Particle Concrete (HPDSPC), Steel Fiber Reinforced Concrete (SFRC), Slurry Infiltrated Concrete (SIFCON), Slurry Infiltrated Mat Concrete (SIMCON).

264 Comparison of Polynomial and Radial Basis Kernel Functions based SVR and MLR in Modeling Mass Transfer by Vertical and Inclined Multiple Plunging Jets

Authors: S. Deswal, M. Pal

Abstract:

Presently, various computational techniques are used in modeling and analyzing environmental engineering data. In the present study, an intra-comparison of polynomial and radial basis kernel functions based Support Vector Regression and, in turn, an inter-comparison with Multi Linear Regression have been attempted in modeling the mass transfer capacity of vertical (θ = 90°) and inclined multiple plunging jets (varying from 1 to 16 in number). The data set used in this study consists of four input parameters, with a total of eighty-eight cases, forty-four each for vertical and inclined multiple plunging jets. For testing, tenfold cross validation was used. Correlation coefficient values of 0.971 and 0.981, along with corresponding root mean square error values of 0.0025 and 0.0020, were achieved by using polynomial and radial basis kernel functions based Support Vector Regression, respectively. The intra-comparison suggests improved performance by the radial basis function in comparison to the polynomial kernel based Support Vector Regression. Further, the inter-comparison with Multi Linear Regression (correlation coefficient = 0.973 and root mean square error = 0.0024) reveals that radial basis kernel functions based Support Vector Regression performs better in modeling and estimating mass transfer by multiple plunging jets.
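
The modeling setup maps directly onto scikit-learn. The sketch below compares a polynomial and an RBF kernel SVR under 10-fold cross validation on synthetic data of the same shape (88 cases × 4 inputs); the data generator and hyperparameters are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# stand-in for the 88 cases x 4 input parameters
X = rng.uniform(0, 1, (88, 4))
y = 0.3 * X[:, 0] + 0.2 * np.sin(3 * X[:, 1]) + 0.05 * rng.standard_normal(88)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
for kernel, params in [("poly", {"degree": 2, "C": 10}),
                       ("rbf", {"C": 10, "gamma": "scale"})]:
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, **params))
    r = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{kernel}: mean 10-fold R^2 = {r.mean():.3f}")
```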

Keywords: Mass transfer, multiple plunging jets, polynomial and radial basis kernel functions, Support Vector Regression.

263 Government of Ghana’s Budget: Its Functions, Coverage, Classification, and Integration with Chart of Accounts

Authors: Mohammed Sani Abdulai

Abstract:

Government budgets are the primary instruments for formulating and implementing a country's fiscal policy objectives, development priorities, and the overall socio-economic aspirations of its people. Thus, in this paper, the author examined the Government of Ghana's budgets with respect to their functions, coverage, classifications, and integration with the country's chart of accounts. The author did so by amalgamating the research findings of the extant literature with (a) the operational and procedural guidelines underpinning the formulation and execution of the government's budgets; (b) the recommendations made by various development partners and think tanks on reforming the country's budgeting processes and procedures; and (c) the lessons Ghana could learn from the budget reform efforts of other countries. By way of research findings, the paper showed that the Government of Ghana's budgets, in terms of function, are both eclectic and multidimensional. On coverage, the paper showed that the country's budgets duly cover the revenues and expenditures of the general government (i.e., both the central and sub-national governments). Finally, on classifications, the paper noted with delight the Government of Ghana's effort in providing classificatory codes to both its national development agenda and such international development goals as the AU's Agenda 2063 and the UN's Sustainable Development Goals. However, the paper found some significant lapses that require a complete overhaul and restructuring of the integration of its budget classifications with its chart of accounts. Thus, the paper concluded with a detailed examination of the challenges confronting the country's current chart of accounts and recommendations for addressing them.

Keywords: Budget, budgetary transactions, budgetary governance, Chart of Accounts, classification, composition, coverage, Public Financial Management.

262 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction

Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz

Abstract:

This paper proposes new enhancement models for nonlinear anisotropic diffusion methods, to greatly reduce speckle while preserving image features in medical ultrasound images. By incorporating local physical characteristics of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction, edge enhancement, and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts, which use the gradient only to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), adaptive weighted median filtering (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes these enhanced methods ideal preprocessing steps for automatic segmentation in ultrasound imaging.
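
The full tensor-based EE/CE schemes are beyond a short sketch, but the underlying mechanism, diffusion whose strength is throttled by a local image measure, is easy to show. The code below runs a scalar Perona-Malik-style iteration with a gradient-driven diffusivity; the paper's contribution would additionally weight this diffusivity by a scatterer-density estimate, which is omitted here, and all parameters are assumed.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=30, kappa=20.0, dt=0.2):
    """Scalar (Perona-Malik) nonlinear diffusion: a simplified stand-in for
    the tensor-based EE/CE schemes described in the abstract."""
    u = img.astype(float)
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity
    for _ in range(n_iter):
        # one-sided differences toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

speckled = 100 + 20 * np.random.default_rng(0).standard_normal((128, 128))
print(anisotropic_diffusion(speckled).std())   # noise std shrinks markedly
```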

Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.

261 Clustering for Detection of Population Groups at Risk from Anticholinergic Medication

Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca

Abstract:

Anticholinergic medication has been associated with adverse events such as falls, delirium, and cognitive impairment in older patients. To assess this further, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to the anticholinergic burden scores of the multiple medicines prescribed to them, in order to facilitate clinical decision-making. To do so, the anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients' prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient, based on the prescription of multiple anticholinergic drugs. This study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medication. Our findings show that this group of patients is located within more deprived areas of London compared to the population of the other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed towards female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system that is well-equipped with tools implementing appropriate techniques of artificial intelligence.
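
A minimal sketch of the pipeline, summing literature burden scores per patient and clustering the resulting one-dimensional scores with mean shift, is shown below; the drug scores, prescription lists, and bandwidth are hypothetical stand-ins for the provider's data.

```python
import numpy as np
from sklearn.cluster import MeanShift

# hypothetical per-drug anticholinergic burden scores on the 1-3 literature scale
BURDEN = {"amitriptyline": 3, "paroxetine": 3, "olanzapine": 2,
          "loperamide": 1, "ranitidine": 1}

rng = np.random.default_rng(5)
drugs = list(BURDEN)
# synthetic prescription lists standing in for the 300,000 patient records
patients = [rng.choice(drugs, size=rng.integers(0, 4), replace=False)
            for _ in range(300)]

# weighted risk score: sum of burden scores over each patient's prescriptions
scores = np.array([[sum(BURDEN[d] for d in p)] for p in patients], dtype=float)

labels = MeanShift(bandwidth=1.0).fit_predict(scores)   # clusters = risk groups
for grp in np.unique(labels):
    sel = labels == grp
    print(f"group {grp}: n={sel.sum()}, mean score={scores[sel].mean():.1f}")
```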

Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.

260 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD. It has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision <decision> If <condition> Generality <generality information> Specificity <specificity information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure, using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified by using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy, using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
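
The quantification step rests on Dempster's rule of combination, which fuses independent mass assignments over a common frame of discernment. A self-contained sketch follows; the frame and the mass values are illustrative, not the paper's data.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over a common frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb           # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# two sources of evidence about whether a rule holds (h) or not (~h)
H, NH = frozenset({"h"}), frozenset({"~h"})
THETA = H | NH                            # the whole frame (ignorance)
m1 = {H: 0.6, THETA: 0.4}
m2 = {H: 0.7, NH: 0.1, THETA: 0.2}
print(dempster_combine(m1, m2))           # mass committed to h rises
```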

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

259 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the Magnetocardiography (MCG) signal, which is buried in noise, is difficult, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, a Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and a majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is divided into three parts. First, high-order TV is applied to reduce the step effect, with the corresponding second-derivative matrix used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
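
A compact sketch of the baseline the paper improves on follows: 1-D TV denoising solved by majorization-minimization, with an order=2 switch showing the high-order (second-difference) variant used to suppress the step effect. The value of λ, the iteration count, and the test signal are assumed; the paper's adaptive, peak-aware constraints are not reproduced.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def tv_denoise_mm(y, lam=2.0, n_iter=50, order=1, eps=1e-8):
    """Minimize 0.5*||y - x||^2 + lam*||D x||_1 by majorization-minimization.
    order=1 uses a first-difference D; order=2 substitutes the second
    difference (high-order TV), which reduces the step effect."""
    n = len(y)
    D = sparse.diags([-1, 1], [0, 1], shape=(n - 1, n))
    if order == 2:
        D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))
    I = sparse.identity(n, format="csc")
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)        # reweighting from the majorizer
        W = sparse.diags(w)
        x = spsolve((I + lam * D.T @ W @ D).tocsc(), y)
    return x

t = np.linspace(0, 1, 400)
clean = np.sin(6 * t) + (t > 0.5)              # smooth part plus a step
noisy = clean + 0.15 * np.random.default_rng(2).standard_normal(400)
print(np.abs(tv_denoise_mm(noisy, order=2) - clean).mean())
```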

Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.

258 Development of Sports Nation on the Way of Health Management

Authors: Beatrix Faragó, Zsolt Szakály, Ágnes Kovácsné Tóth, Csaba Konczos, Norbert Kovács, Zsófia Pápai, Tamás Kertész

Abstract:

The future of a nation is embodied in a healthy society. A key segment of government policy is the development of health and a health-oriented environment. As a result, sport, as an activator of health, is an important area for development. In Hungary, sport is a strategic sector, with the aim of developing a sports nation. The function of sport in global society is multifaceted, manifesting itself in both social and economic terms. The economic importance of sport is gaining ground worldwide, with implications for Central and Eastern Europe. Smaller states, such as Hungary, cannot ignore the economic effects of sport. The relationship between physical activity and health makes sport, through the health economy, a factor in the national economy. In our research, we analyzed sport as a national strategic sector and its impact on age groups. By presenting the current state of health behavior, we get an idea of the areas where development opportunities require even more intervention. The foundation of a nation's health is the young age group, whose health formation will shape the future generation. Our research involved university students from the Faculty of Health and Sports Sciences, who will be experts in the field of health in the future. The other group is the elderly, who are a growing social group due to demographic change and are a key segment of the labor market and consumer society. Our study presents the health behavior of the two age groups, and their differences and similarities. The survey also identifies gaps in the development of a health management strategy that national strategies should take into account.

Keywords: Competitiveness, health behavior, health economy, health management, sports nation.

257 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration

Authors: Bryce Benson, Sooin Lee, Ashwin Belle

Abstract:

Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital signs monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps in patient monitoring. Additionally, during deterioration, the body's autonomic nervous system activates compensatory mechanisms, so the vital signs are lagging indicators of the underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool designed to help clinicians identify deteriorating patients early. The lead time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration before an episode of hemodynamic instability (HI) became evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 episodes (85%) were correctly predicted by the AHI system, with a median lead time of 57 minutes and an average of 4 hours (240.5 minutes). Of the 54 episodes not predicted, AHI detected 45 while the episode of HI was ongoing. Of the 9 undetected, 5 were missed by AHI due to either missing or noisy input ECG data during the episode of HI. In total, AHI was able to either predict or detect 98.9% of the episodes of HI in this study (353 of the 357 episodes with usable ECG data). These results suggest that AHI could provide an additional 'pair of eyes' on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.

Keywords: Clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring.

256 The Formation of Mutual Understanding in Conversation: An Embodied Approach

Authors: Haruo Okabayashi

Abstract:

Mutual understanding in conversation is very important for human relations. This study investigates the mental function underlying the formation of mutual understanding between two people in conversation, using an embodied approach. Forty people participated in this study and were randomly divided into pairs. Four conversation situations between the two (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported "I did not understand my partner" show a collapsed shape, which means the fluctuation of their rhythm is too small to match their partner's rhythm, and their cross-correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both largest Lyapunov exponents (LLEs) tend to resonate, too. In human history, for human beings, as weak mammals, to survive, they may have needed to stay with others; that is, they have developed resonating characteristics, which is called self-organization. However, this resonant feature sometimes collapses, depending on the lifestyle the person has formed since birth. It is difficult for people who do not have a lifestyle of mutual gaze to make their biological signal waves resonate with others'. These people show features such as anxiety, fatigue, and a tendency toward confusion. Mutual understanding is thought to be formed as a result of cooperation between the self-organizing features of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. This research finds that the formation of mutual understanding is expressed in the rhythm of a biological signal showing a nonlinear relationship.

Keywords: Embodied approach, finger plethysmogram, mutual understanding, nonlinear phenomenon.

255 Development of Electrospun Membranes with Defined Collagen and Polyethylene Oxide Architectures Reinforced with Medium and High Intensity Statins

Authors: S. Jaramillo, Y. Montoya, W. Agudelo, J. Bustamante

Abstract:

Cardiovascular diseases (CVD) are disorders of the heart and blood vessels. They include pathologies such as coronary and peripheral artery disease, caused by narrowing of the vessel wall (atherosclerosis), which is related to the accumulation of Low-Density Lipoproteins (LDL) in the arterial walls, leading to a progressive reduction of the vessel lumen and alterations in blood perfusion. Currently, the main therapeutic strategy for this type of alteration is drug treatment with statins, which inhibit the enzyme 3-hydroxy-3-methyl-glutaryl-CoA reductase (HMG-CoA reductase), responsible for modulating the rate of production of cholesterol and other isoprenoids in the mevalonate pathway. This enzyme induces the expression of LDL receptors in the liver, increasing their number on the surface of liver cells and reducing the plasma concentration of cholesterol. On the other hand, when a blood vessel presents stenosis, a surgical procedure with vascular implants is indicated; these are used to restore circulation in the arterial or venous bed. Among the materials used for the development of vascular implants are Dacron® and Teflon®, which reseal the circulatory circuit, but due to their low biocompatibility they do not have the ability to promote remodeling and tissue regeneration processes. Based on this, the present research proposes the development of a hydrolyzed collagen and polyethylene oxide electrospun membrane reinforced with medium- and high-intensity statins, so that future research can exploit its microarchitecture to promote tissue remodeling processes.

Keywords: Atherosclerosis, medium and high-intensity statins, microarchitecture, electrospun membrane.

254 Lexical Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommender systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data. The difficulty of this task lies in determining an approach which returns opinionated documents. Generally, two kinds of approaches are used for opinion detection: lexical-based approaches and machine-learning-based approaches. In lexical-based approaches, a dictionary of sentiment words is used, and words are associated with weights; the opinion score of a document is derived from the occurrences of words from this dictionary. In machine learning approaches, a classifier is usually trained on a set of annotated documents containing sentiment, using features such as word n-grams, part-of-speech tags, and logical forms. The majority of these works are based on document text to determine an opinion score, but they do not take into account whether these texts are really trustworthy. Thus, it is interesting to exploit other information to improve opinion detection. In our work, we develop a new way of computing the opinion score. We introduce the notion of a trust score: we determine not only whether documents are opinionated but also whether these opinions are really trustworthy information in relation to the topics. For that, we use the lexicon SentiWordNet to calculate opinion and trust scores, and we compute different features about users (the number of their comments, the number of their useful comments, and the average usefulness of their reviews). After that, we combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TripAdvisor collection. Our experimental results show that combining the opinion score with the trust score improves opinion detection.
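
A minimal sketch of the lexical scoring step with NLTK's SentiWordNet interface is shown below. Averaging the first-sense positive-plus-negative mass per token is one simple scoring rule, assumed here for illustration; the paper's exact scoring and trust-score formulas are not reproduced in the abstract, and NLTK resource names may vary across versions.

```python
import nltk
from nltk.corpus import sentiwordnet as swn

for pkg in ("wordnet", "sentiwordnet", "punkt",
            "averaged_perceptron_tagger", "omw-1.4"):
    nltk.download(pkg, quiet=True)

def opinion_score(text):
    """Average pos+neg SentiWordNet mass per scored token: higher values
    indicate more opinionated (less factual) text."""
    tag_map = {"J": "a", "N": "n", "R": "r", "V": "v"}  # Penn tag -> WordNet POS
    total, scored = 0.0, 0
    for word, tag in nltk.pos_tag(nltk.word_tokenize(text)):
        pos = tag_map.get(tag[0])
        if pos is None:
            continue
        synsets = list(swn.senti_synsets(word.lower(), pos))
        if not synsets:
            continue
        s = synsets[0]                    # first (most frequent) sense
        total += s.pos_score() + s.neg_score()
        scored += 1
    return total / scored if scored else 0.0

print(opinion_score("The room was wonderful and the staff were charming."))
print(opinion_score("The hotel has 120 rooms and was built in 1998."))
```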

Keywords: Tripadvisor, Opinion detection, SentiWordNet, trust score.

253 Insights into Smoothies with High Levels of Fibre and Polyphenols: Factors Influencing Chemical, Rheological and Sensory Properties

Authors: Dongxiao Sun-Waterhouse, Shiji Nair, Reginald Wibisono, Sandhya S. Wadhwa, Carl Massarotto, Duncan I. Hedderley, Jing Zhou, Sara R. Jaeger, Virginia Corrigan

Abstract:

Attempts to add fibre and polyphenols (PPs) to popular beverages present challenges related to the properties of the finished products, such as smoothies. The consumer acceptability, viscosity, and phenolic composition of smoothies containing high levels of fruit fibre (2.5-7.5 g per 300 mL serve) and PPs (250-750 mg per 300 mL serve) were examined. The changes in total extractable PPs, vitamin C content, and colour of selected smoothies over a storage stability trial (4°C, 14 days) were compared. A set of acidic aqueous model beverages was prepared to further examine the effect of two different heat treatments on the stability and extractability of PPs. The results show that the overall consumer acceptability of high-fibre, high-PP smoothies was low, with average hedonic scores ranging from 3.9 to 6.4 (on a 1-9 scale). Flavour, texture, and overall acceptability decreased as fibre and polyphenol contents increased, with fibre content exerting a stronger effect. Higher fibre content resulted in greater viscosity, with an elevated PP content increasing viscosity only slightly. The presence of fibre also aided the stability and extractability of PPs after heating. A reduction in extractable PPs, vitamin C content, and colour intensity of the smoothies was observed after the 14-day storage period at 4°C. Two heat treatments (75°C for 45 min or 85°C for 1 min) that are normally used for beverage production did not cause a significant reduction of total extracted PPs. It is clear that high levels of added fibre and PPs greatly influence the consumer appeal of smoothies, suggesting the need to develop novel formulation and processing methods if a satisfactory functional beverage incorporating these ingredients is to be developed.

Keywords: Apple fibre, apple and blackcurrant polyphenols, consumer acceptability, functional foods, stability.

252 An Autonomous Collaborative Forecasting System Implementation – The First Step towards a Successful CPFR System

Authors: Chi-Fang Huang, Yun-Shiow Chen, Yun-Kung Chung

Abstract:

In the past decade, artificial neural networks (ANNs) have been regarded as an instrument for problem-solving and decision-making; indeed, they have already delivered substantial efficiency and effectiveness improvements in industry and business. In this paper, Back-Propagation neural Networks (BPNs) are modularized to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR balances sufficient product supply against the necessary customer demand in a Supply and Demand Chain (SDC). Several classical standard BPNs are grouped, coordinated, and exploited for easy implementation of the proposed modular ANN framework, based on the topology of an SDC. Each individual BPN is applied as a modular tool to perform the task of forecasting the SKU (Stock-Keeping Unit) levels that are managed and supervised at a POS (point of sale), a wholesaler, and a manufacturer in an SDC. The proposed modular BPN-based CF system is exemplified and experimentally verified using numerous datasets from the simulated SDC. The experimental results showed that a complex CF problem can be divided into a group of simpler sub-problems based on the individual trading partners distributed over the SDC, and its SKU forecasting accuracy was satisfactory when the system's forecast values were compared to the original simulated SDC data. The primary task of implementing autonomous CF involves the study of supervised ANN learning methodology, which aims at making "knowledgeable" decisions for the best SKU sales plan and stock management.
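
The modular idea, one small back-propagation network per trading partner, each forecasting its own SKU series, can be sketched as follows; the synthetic demand series, network sizes, and node names are assumptions, not the paper's simulated SDC.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def make_node_data(n=500, lags=4):
    """Synthetic SKU demand series for one trading partner, windowed into
    (past `lags` values -> next value) training pairs."""
    d = 100 + 10 * np.sin(np.arange(n + lags) / 7.0) + rng.normal(0, 2, n + lags)
    X = np.stack([d[i:i + lags] for i in range(n)])
    return X, d[lags:lags + n]

# one modular back-propagation network per SDC node
nodes = ["POS", "wholesaler", "manufacturer"]
models = {}
for node in nodes:
    X, y = make_node_data()
    models[node] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=0).fit(X, y)
    print(node, "train R^2:", round(models[node].score(X, y), 3))
```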

Keywords: CPFR, artificial neural networks, global logistics, supply and demand chain.

251 Analysis and Control of Camera Type Weft Straightener

Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae

Abstract:

In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before the heat treatment, because it is difficult to revert once the fabric is formed. To produce a product of the right shape, camera type weft straighteners have recently been applied to capture and process fabric images quickly; they are more powerful than photo-sensors in determining the final textile quality. Positioned in front of a stenter machine, the weft straightener helps spread the fabric evenly and keeps the angle between warp and weft constant at a right angle by handling the skew and bow rollers. To manage this tricky procedure, a structural analysis should be carried out in advance, from which the control technology can then be derived. The structural analysis serves to figure out the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera type weft straightener to plain weave fabric and established its feasibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore a further application of the camera type weft straightener, namely, whether it can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis was done in the ANSYS software using the Finite Element Analysis method, and the control function was demonstrated by experiment. In conclusion, the structural analysis of the weft straightener was carried out to identify the specific characteristics between the rollers and the fabrics, the skew and bow rollers were controlled to decrease the error in the angle between warp and weft, and it was proved that the camera type straightener can also be used for special fabrics.

Keywords: Camera type weft straightener, structure analysis, control, skew and bow roller.

250 Automated Method Time Measurement System for Redesigning Dynamic Facility Layout

Authors: Salam Alzubaidi, G. Fantoni, F. Failli, M. Frosolini

Abstract:

The dynamic facility layout problem is a critical issue in the competitive industrial market; solving it requires robust design and effective simulation systems. Sustainable simulation requires reliable and accurate input data. This paper therefore describes an automated system, integrated into the real environment, that measures the durations of the material handling operations, collects the data in real time, and determines the variances between the actual and estimated time schedules of the operations, in order to update the simulation software and redesign the facility layout periodically. The automated method-time measurement system collects the real data using Radio Frequency Identification (RFID) and Internet of Things (IoT) technologies: attaching an RFID antenna reader and RFID tags enables the system to identify the locations of objects and gather the time data. The gathered real durations are then processed by calculating the moving average duration of the material handling operations, choosing the shortest material handling path, and updating the simulation software to redesign the facility layout in accordance with the shortest/real operation schedule. Periodic simulation in real time is more sustainable and reliable than a simulation system relying on the analysis of historical data. The case study for this methodology was carried out in cooperation with a workshop team producing mechanical parts. Although there are some technical limitations, the methodology is promising, and it can be significantly useful in the redesign of manufacturing layouts.
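
A minimal sketch of the timing layer follows: RFID reads (a tag entering or leaving an antenna zone) are turned into per-operation durations and a moving average that can be compared against the estimated schedule. The event names, window size, and timestamps are hypothetical.

```python
from collections import defaultdict, deque

WINDOW = 5  # moving-average window over the most recent durations (assumed)

class OperationTimer:
    """Turns RFID reads (tag enters/leaves an antenna zone) into operation
    durations and keeps a moving average per material handling operation."""
    def __init__(self):
        self.start = {}
        self.recent = defaultdict(lambda: deque(maxlen=WINDOW))

    def on_read(self, op, tag, timestamp, event):
        if event == "enter":
            self.start[(op, tag)] = timestamp
        elif event == "leave" and (op, tag) in self.start:
            self.recent[op].append(timestamp - self.start.pop((op, tag)))

    def moving_average(self, op):
        d = self.recent[op]
        return sum(d) / len(d) if d else None

timer = OperationTimer()
for t0, t1 in [(0, 42), (60, 98), (120, 171)]:
    timer.on_read("move_A_to_B", "tag17", t0, "enter")
    timer.on_read("move_A_to_B", "tag17", t1, "leave")
# variance between this average and the estimated schedule drives the redesign
print(timer.moving_average("move_A_to_B"))
```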

Keywords: Dynamic facility layout problem, internet of things, method time measurement, radio frequency identification, simulation.
