Search results for: multi-party computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 533

173 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors

Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi

Abstract:

In this paper, we propose an approach for detecting the behavior of the viewers of a TV program in a non-controlled environment. The experiment we propose is based on the use of three types of connected objects (a smartphone, a smart watch, and a connected remote control). 23 participants were observed during three phases: before, during, and after watching a TV program. Their behaviors were detected using an approach based on the Dempster-Shafer Theory (DST) in two phases. The first phase dynamically approximates the mass functions using a correlation-coefficient-based approach; the second phase fuses the approximated mass functions. To approximate the mass functions, two approaches were tested: the first was to divide each feature's data space into cells, each with a specific probability distribution over the behaviors; the probability distributions were computed statistically (estimated by the empirical distribution). The second approach was to predict the TV-viewing behaviors with classifier algorithms and add uncertainty to the prediction based on the uncertainty of the model. Results showed that combining the fusion rule with classifier-based computation of the initial approximate mass functions led to overall success rates of 96%, 95%, and 96% for the first, second, and third TV-viewing phases, respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs with everyday connected objects, taking into account the various uncertainties that can be generated.
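
For illustration, the following minimal sketch shows how two approximate mass functions over a small set of viewing behaviors could be fused with Dempster's rule of combination; the behavior labels, mass values, and choice of connected objects here are hypothetical and not taken from the study.

```python
def dempster_combine(m1, m2):
    """Fuse two mass functions (dicts mapping frozenset hypotheses to mass)
    with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalise by the non-conflicting mass
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Illustrative mass functions from two connected objects (hypothetical values)
watching, away, distracted = frozenset({"watching"}), frozenset({"away"}), frozenset({"distracted"})
theta = watching | away | distracted  # frame of discernment

m_smartwatch = {watching: 0.6, distracted: 0.2, theta: 0.2}
m_remote = {watching: 0.5, away: 0.3, theta: 0.2}

print(dempster_combine(m_smartwatch, m_remote))
```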

Keywords: IoT, TV-viewing behaviors identification, automatic classification, unconstrained environment

Procedia PDF Downloads 200
172 Local Differential Privacy-Based Data-Sharing Scheme for Smart Utilities

Authors: Veniamin Boiarkin, Bruno Bogaz Zarpelão, Muttukrishnan Rajarajan

Abstract:

The manufacturing sector is a vital component of most economies, which makes it a target for a large number of cyberattacks on organisations, and disruption of operations may lead to significant economic consequences. Adversaries aim to disrupt the production processes of manufacturing companies, gain financial advantages, and steal intellectual property by getting unauthorised access to sensitive data. Access to sensitive data helps organisations to enhance their production and management processes. However, the majority of existing data-sharing mechanisms are either susceptible to different cyberattacks or heavy in terms of computation overhead. In this paper, a privacy-preserving data-sharing scheme for smart utilities is proposed. First, a customer’s privacy adjustment mechanism is proposed to ensure that end-users have control over their privacy, as required by the latest government regulations, such as the General Data Protection Regulation. Second, a local differential privacy-based mechanism is proposed to preserve the privacy of end-users by hiding real data according to end-user preferences. The proposed scheme may be applied to different industrial control systems; in this study, it is validated for energy utility use cases consisting of smart, intelligent devices. The results show that the proposed scheme can guarantee the required level of privacy with an expected relative error in utility.
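
As a rough illustration of the local perturbation idea, the sketch below applies the Laplace mechanism to individual smart-meter readings before sharing; the privacy budget, reading range, and error metric are assumed for the example and may differ from the scheme proposed in the paper.

```python
import numpy as np

def ldp_perturb(reading, epsilon, lower, upper, rng):
    """Perturb a single reading with Laplace noise calibrated to the
    sensitivity of the bounded value range (local differential privacy)."""
    sensitivity = upper - lower              # worst-case change of one value
    scale = sensitivity / epsilon            # Laplace scale b = sensitivity / epsilon
    noisy = reading + rng.laplace(0.0, scale)
    return float(np.clip(noisy, lower, upper))

rng = np.random.default_rng(0)
true_readings = rng.uniform(0.2, 5.0, size=1000)   # hypothetical kWh readings

# End-user chooses the privacy level (smaller epsilon = stronger privacy)
for epsilon in (0.5, 1.0, 5.0):
    noisy = [ldp_perturb(r, epsilon, 0.0, 5.0, rng) for r in true_readings]
    rel_err = abs(np.mean(noisy) - np.mean(true_readings)) / np.mean(true_readings)
    print(f"epsilon={epsilon}: relative error of the mean = {rel_err:.3f}")
```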

Keywords: data-sharing, local differential privacy, manufacturing, privacy-preserving mechanism, smart utility

Procedia PDF Downloads 48
171 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method

Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat

Abstract:

Nowadays, the amount of available multimedia data is continuously on the rise. Finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, texture, etc. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical way of bridging the semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. First, images are segmented using maximum intra-cluster variance and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.

Keywords: feature extraction, feature selection, image annotation, classification

Procedia PDF Downloads 562
170 Optimized Design, Material Selection, and Improvement of Liners, Mother Plate, and Stone Box of a Direct Charge Transfer Chute in a Sinter Plant: A Computational Approach

Authors: Anamitra Ghosh, Neeladri Paul

Abstract:

The present work aims at investigating material combinations and thereby developing an optimized design of the liner-mother plate arrangement and of the stone box, such that the design has low cost and high weldability, is sufficiently capable of withstanding the increased corrosive shear and bending loads, and has a reduced thermal expansion coefficient at temperatures close to 1000 degrees Celsius. All the above factors have been preliminarily examined using a computational approach via ANSYS thermo-structural computation, commercial software that uses the Finite Element Method to analyze the response of simulated design specimens of the liner-mother plate arrangement and the stone box to varied bending, shear, and thermal loads, as well as to determine the temperature gradients developed across various surfaces of the designs. Finally, the optimized structural designs of the liner-mother plate arrangement and of the stone box, with improved material and better structural and thermal properties, are selected via a trial-and-error method. The final improved design is therefore considered to enhance the overall life and reliability of a Direct Charge Transfer Chute that transfers and segregates the hot sinter onto the cooler in a sinter plant.

Keywords: shear, bending, thermal, sinter, simulated, optimized, charge, transfer, chute, expansion, computational, corrosive, stone box, liner, mother plate, arrangement, material

Procedia PDF Downloads 83
169 Influences of Slope Inclination on the Storage Capacity and Stability of Municipal Solid Waste Landfills

Authors: Feten Chihi, Gabriella Varga

Abstract:

Landfilling is the world's most prevalent waste management strategy. However, it has become more difficult due to a lack of acceptable waste sites. In order to develop larger landfills and extend their lifespan, the purpose of this article is to expand the capacity of the construction by varying the slope's inclination and to examine its effect on the safety factor. The capacity change with tilt is determined mathematically. Using a new probabilistic calculation method that takes into account the heterogeneity of waste layers, the safety factor for various slope angles is examined. To assess the effect of slope variation on the overall safety of landfills, over a hundred computations were performed for each angle. It has been shown that capacity increases significantly with increasing inclination. Passing from a 1:3 to a 2:3 slope and from a 1:3 to a 1:2 slope, the volume of waste that can be deposited increases by 40 percent and 25 percent of the initial volume, respectively. The results for the safety factor indicate that slopes of 1:3 and 1:2 are safe when the standard method (homogeneous waste) is used for computation. Using the new approach, a slope with an inclination of 2:3 can be deemed safe, despite the fact that the calculation does not account for the safety-enhancing effect of daily cover layers. Based on the study reported in this paper, the multi-layered non-homogeneous calculation technique better characterizes the safety factor. As it more closely resembles the actual state of landfills, the employed technique allows for more flexibility in design parameters. This work represents a substantial advance towards landfills that are both safe and economical.

Keywords: landfill, municipal solid waste, slope inclination, capacity, safety factor

Procedia PDF Downloads 164
168 First Earth Size

Authors: Ibrahim M. Metwally

Abstract:

Have you ever thought that Earth was not the same Earth we live on? Was it bigger or smaller? Was it a great continent surrounded by a huge ocean, as Alfred Wegener (1912) claimed? Earth is the most amazing planet in our Milky Way galaxy and maybe in the universe. It is the only deformed planet that has a variable orbit around the sun and the only planet that has water on its surface. How did Earth's deformation take place? What causes Earth to deform? What are the results of Earth's deformation? How does its orbit around the sun change? The first Earth size can be computed only by considering the quantity of iron and nickel that settled into the Earth's core. This paper introduces a new theory, the “Earth Expansion Theory”. The principles of the “Earth Expansion Theory” lead to new approaches and concepts to interpret whole-earth dynamics and its geological and environmental changes. This theory is not an attempt to unify the two divergent dominant theories of continental drift, the plate tectonic theory and the earth expansion theory. The new theory is unique since it has a mathematical derivation, explains all the changes to and around Earth in terms of geological and environmental changes, and answers questions left unanswered by other theories. This paper presents the basics of the introduced theory and discusses the mechanism of Earth expansion, how it took place, and the forces that caused the expansion. The mechanisms by which Earth changed from a spherical shape with a radius of about 3447.6 km to an elliptic shape with a major radius of about 6378.1 km and a minor radius of about 6356.8 km, and how this took place, are introduced and discussed. This article also introduces, in a more realistic explanation, the formation of oceans and seas and the preparation for river formation. It also addresses the role of iron in the Earth size enlargement process within the continuum mechanics framework.

Keywords: earth size, earth expansion, continuum mechanics, continental and ocean formation

Procedia PDF Downloads 429
167 Optimum Design of Hybrid (Metal-Composite) Mechanical Power Transmission System under Uncertainty by Convex Modelling

Authors: Sfiso Radebe

Abstract:

Design models dealing with flawless composite structures, in which the mechanical properties of the composite are assumed to be known a priori, are in abundance. However, if the worst-case scenario is assumed, where material defects combined with processing anomalies in composite structures are expected, a different solution is attained. Furthermore, if the system being designed combines hybrid elements in series, each individually affected by variations in material constants, a different approach needs to be taken. In the body of literature, there is a compendium of research that investigates different modes of failure affecting hybrid metal-composite structures. It covers areas pertaining to the failure of hybrid joints, structural deformation, transverse displacement, and the suppression of vibration and noise. In the present study, a system employing a combination of two or more hybrid power-transmitting elements is explored for the least favourable dynamic loads as well as weight minimization, subject to uncertain material properties. Elastic constants are assumed to be uncertain-but-bounded quantities varying slightly around their nominal values, and the solution is determined using convex models of uncertainty. Convex analysis of the problem leads to the computation of the least favourable solution and ultimately to a robust design. This approach contrasts with a deterministic analysis, where the average values of the elastic constants are employed in the calculations, neglecting the variations in the material properties.

Keywords: convex modelling, hybrid, metal-composite, robust design

Procedia PDF Downloads 188
166 A Mathematical Programming Model for Lot Sizing and Production Planning in Multi-Product Companies: A Case Study of Azar Battery Company

Authors: Farzad Jafarpour Taher, Maghsud Solimanpur

Abstract:

Production planning is one of the complex tasks in multi-product firms that produce a wide range of products. Since resources in mass production companies are limited and different products use common resources, there must be a careful plan so that firms can respond to customer needs efficiently. Azar Battery Company is a firm that provides twenty types of products for its customers. Therefore, careful planning must be performed in this company. In this research, the current conditions of Azar Battery Company were investigated in order to provide a mathematical programming model to determine the optimum production rate of the products in this company. The production system of this company is multi-stage, multi-product, and multi-period. This system is studied over a one-year planning horizon with respect to machine capacity and warehouse space limitations. The problem has been modeled as a linear programming model with deterministic demand in which shortage is not allowed. The objective function of this model is to minimize costs (including raw materials, assembly stage, energy costs, packaging, and holding). Finally, this model has been solved by Lingo software using the branch-and-bound approach. Since the computation time was very long, the solver was interrupted, and the feasible solution obtained was used for comparison. The proposed model's solution costs have been compared to the company’s real data. This non-optimal solution reduces the total production costs of the company by about 35%.
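
The toy sketch below illustrates the kind of cost-minimising, multi-period production plan described above using SciPy's linear programming solver; the two products, three periods, costs, capacities, and demands are invented for illustration and do not reflect the company's data or the full twenty-product model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 2 products, 3 periods. Decision variables: production x[p, t]
# and end-of-period inventory s[p, t], flattened as [x..., s...].
products, periods = 2, 3
prod_cost = np.array([[4, 4, 5], [6, 6, 6]])               # per-unit production cost
hold_cost = np.array([[0.5, 0.5, 0.5], [0.8, 0.8, 0.8]])   # per-unit holding cost
demand = np.array([[30, 40, 50], [20, 25, 30]])
capacity = [80, 80, 80]                                     # shared machine capacity per period

c = np.concatenate([prod_cost.ravel(), hold_cost.ravel()])  # objective: minimise total cost
n = products * periods

# Inventory balance (no shortage allowed): s[p,t-1] + x[p,t] - s[p,t] = demand[p,t]
A_eq, b_eq = [], []
for p in range(products):
    for t in range(periods):
        row = np.zeros(2 * n)
        row[p * periods + t] = 1                   # + x[p,t]
        row[n + p * periods + t] = -1              # - s[p,t]
        if t > 0:
            row[n + p * periods + t - 1] = 1       # + s[p,t-1]
        A_eq.append(row)
        b_eq.append(demand[p, t])

# Capacity: sum over products of x[p,t] <= capacity[t]
A_ub, b_ub = [], []
for t in range(periods):
    row = np.zeros(2 * n)
    for p in range(products):
        row[p * periods + t] = 1
    A_ub.append(row)
    b_ub.append(capacity[t])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x[:n].reshape(products, periods))        # optimal production plan
```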

Keywords: multi-period, multi-product production, multi-stage, production planning

Procedia PDF Downloads 61
165 Effect of Particle Aspect Ratio and Shape Factor on Air Flow inside Pulmonary Region

Authors: Pratibha, Jyoti Kori

Abstract:

Particles in industry, harvesting, coal mines, etc. may not necessarily be spherical in shape. In general, it is difficult to find a perfectly spherical particle. The prediction of the movement and deposition of non-spherical particles in distinct airway generations is much more difficult than for spherical particles. Moreover, there is extensive variability in deposition between ducts of a particular generation and inside every alveolar duct, since particle concentrations can be much larger than the mean acinar concentration. Consequently, a large number of particles fail to be exhaled during expiration. This study presents a mathematical model for the movement and deposition of those non-spherical particles by using the particle aspect ratio and shape factor. We analyse the pulsatile behavior under sinusoidal wall oscillation due to periodic breathing conditions through a non-Darcian porous medium, i.e., inside the pulmonary region. Since the fluid is viscous and Newtonian, the generalized Navier-Stokes equations in a two-dimensional coordinate system (r, z) are used together with boundary-layer theory. Results are obtained for various values of the Reynolds number, Womersley number, Forchheimer number, particle aspect ratio, and shape factor. Numerical computation is done using a finite difference scheme on a very fine mesh in MATLAB. It is found that the overall air velocity is significantly increased by changes in aerodynamic diameter, aspect ratio, alveoli size, Reynolds number, and the pulse rate, while velocity is decreased by increasing the Forchheimer number.

Keywords: deposition, interstitial lung diseases, non-Darcian medium, numerical simulation, shape factor

Procedia PDF Downloads 153
164 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors that are used to correct the single-phase Darcy's law for application to multiphase flow. For effective characterisation of large-scale multiphase flow in hydrocarbon recovery, relative permeability and capillary pressures are used. These parameters are acquired via special core-flooding experiments. The special core analysis (SCAL) module of reservoir simulation is applied by engineers for the evaluation of these parameters. However, core-flooding experiments on shale core samples are expensive and time-consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes it imperative to apply core-flooding simulations, in which various analyses of relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively, and at a reasonable pace. This paper presents a Sendra software simulation of core flooding to estimate relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. In the first step, the basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This research approach saves cost and time and is very reliable in the computation of relative permeability and capillary pressures at steady or unsteady state, and for drainage or imbibition processes, in the oil and gas industry when compared to other methods.
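
As an illustration of the kind of parametric correlation a core-flooding simulator fits, the sketch below evaluates classic Corey-type relative permeability curves; the end-point saturations and exponents are hypothetical and are not the values obtained for the Marcellus sample.

```python
import numpy as np

def corey_relperm(sw, swc, sor, krw_max, kro_max, nw, no):
    """Corey-type water/oil relative permeability curves.
    sw: water saturation array; swc: connate water; sor: residual oil."""
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalised saturation
    krw = krw_max * swn ** nw            # water relative permeability
    kro = kro_max * (1.0 - swn) ** no    # oil relative permeability
    return krw, kro

# Hypothetical parameters, typically obtained by history-matching the flood
sw = np.linspace(0.0, 1.0, 11)
krw, kro = corey_relperm(sw, swc=0.25, sor=0.20, krw_max=0.3, kro_max=0.8, nw=2.0, no=2.0)
for s, w, o in zip(sw, krw, kro):
    print(f"Sw={s:.2f}  krw={w:.3f}  kro={o:.3f}")
```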

Keywords: relative permeability, porosity, 1-D black oil simulator, capillary pressures

Procedia PDF Downloads 414
163 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image

Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche

Abstract:

The main problem in the area of medical imaging has been image denoising. The greatest challenge in image denoising is to preserve data-carrying structures such as surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise in outperforming all traditional approaches. However, these techniques are limited by the need for large training sample sizes and by high computational costs. This research proposes a denoising approach based on the LDTCWT (Lifting Dual Tree Complex Wavelet Transform) using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is described as a type of lifting-based wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain the real and imaginary parts. This allows the transform to achieve approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
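
A simplified sketch of the underlying idea, integrating a Wiener-style shrinkage factor into a soft-thresholding rule applied to wavelet-domain (detail) coefficients, is given below; the threshold rule, noise estimate, and synthetic coefficients are generic illustrations rather than the exact LDTCWT formulation of the paper.

```python
import numpy as np

def hybrid_threshold(coeffs, sigma_noise):
    """Soft-threshold wavelet(-like) coefficients, then apply a Wiener-style
    shrinkage factor based on the estimated signal-to-noise ratio."""
    thr = sigma_noise * np.sqrt(2.0 * np.log(coeffs.size))      # universal threshold
    soft = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
    # Wiener factor: signal variance / (signal variance + noise variance)
    sig_var = np.maximum(soft ** 2 - sigma_noise ** 2, 0.0)
    wiener = sig_var / (sig_var + sigma_noise ** 2 + 1e-12)
    return wiener * soft

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(200), 5.0 * np.ones(56)])       # sparse "detail" band
noisy = clean + rng.normal(0.0, 1.0, clean.size)
denoised = hybrid_threshold(noisy, sigma_noise=1.0)
print("residual energy before:", float(np.sum((noisy - clean) ** 2)))
print("residual energy after: ", float(np.sum((denoised - clean) ** 2)))
```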

Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, wiener filter

Procedia PDF Downloads 137
162 Computational Fluid Dynamics (CFD) Simulation Approach for Developing New Powder Dispensing Device

Authors: Revanth Rallapalli

Abstract:

Manually dispensing solids and powders can be difficult, as it requires gradually pouring the material and checking the dispensed amount on a scale. Current systems are manual and non-continuous in nature, are user-dependent, and make powder dispensation difficult to control. Recurrent dosing of powdered medicines in precise amounts, quickly and accurately, has been an all-time challenge. Various new powder dispensing mechanisms are being designed to overcome these challenges. A battery-operated screw conveyor mechanism is being developed to overcome the above problems. These inventions are numerically evaluated at the concept development level by employing Computational Fluid Dynamics (CFD) of gas-solids multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. Furthermore, this paper describes a simulation of powder dispensation from the trocar’s end in which the powder is treated as a secondary phase in air and simulated using the technique called the Dense Discrete Phase Model incorporated with the Kinetic Theory of Granular Flow (DDPM-KTGF). Considering a powder volume fraction of 50%, the transportation of powder from the inlet side to the trocar’s end is driven by the rotation of the screw conveyor. The performance is then calculated for a 1-second time frame in an unsteady computation. This methodology will help designers develop design concepts that improve dispensation and the effective area within a quick turnaround time.

Keywords: DDPM-KTGF, gas-solids multiphase flow, screw conveyor, unsteady

Procedia PDF Downloads 161
161 Multiparametric Optimization of Water Treatment Process for Thermal Power Plants

Authors: Balgaisha Mukanova, Natalya Glazyrina, Sergey Glazyrin

Abstract:

The optimization of the technological process of water treatment for thermal power plants is considered in this article. The problem is of a multiparametric nature. To optimize the process, namely, to reduce the amount of wastewater, a new technology was developed to reuse such water. A mathematical model of the wastewater reuse technology was developed. Optimization parameters were determined. The model consists of a material balance equation, an equation describing the kinetics of ion exchange for the non-equilibrium case, and an equation for the ion exchange isotherm. The material balance equation includes a nonlinear term that depends on the kinetics of ion exchange. A direct problem of calculating the impurity concentration at the outlet of the water treatment plant was solved numerically. The direct problem was approximated by an implicit point-to-point difference scheme. The inverse problem was formulated as the determination of the parameters of the mathematical model of the water treatment plant operating in non-equilibrium conditions. The formulated inverse problem was solved. From the calculation results, the start time of the filter regeneration process was determined, as well as the duration of the regeneration process and the amount of regeneration and wash water. Multi-parameter optimization of the water treatment process for thermal power plants allowed the amount of wastewater to be decreased by 15%.

Keywords: direct problem, multiparametric optimization, optimization parameters, water treatment

Procedia PDF Downloads 358
160 A TgCNN-Based Surrogate Model for Subsurface Oil-Water Phase Flow under Multi-Well Conditions

Authors: Jian Li

Abstract:

The uncertainty quantification and inversion problems of subsurface oil-water phase flow usually require extensive repeated forward calculations for new runs with changed conditions. To reduce the computational time, various forms of surrogate models have been built. Related research shows that deep learning has emerged as an effective surrogate modelling approach; however, most deep-learning surrogate models are purely data-driven, which often leads to poor robustness and abnormal results. To make the model more consistent with physical laws, a coupled theory-guided convolutional neural network (TgCNN) based surrogate model is built to improve computational efficiency while maintaining satisfactory accuracy. The model is a convolutional neural network based on multi-well reservoir simulation. The core notion of the proposed method is to bridge two separate blocks on top of an overall network; they underlie the TgCNN model in a coupled form, which reflects the coupled nature of pressure and water saturation in the two-phase flow equations. The model is driven not only by labeled data but also by scientific theories, including governing equations, stochastic parameterization, boundary and initial conditions, well conditions, and expert knowledge. The results show that the TgCNN-based surrogate model exhibits satisfactory accuracy and efficiency for subsurface oil-water phase flow under multi-well conditions.
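
A conceptual sketch of a theory-guided training loss, combining a data-mismatch term with a physics-residual term, is shown below in PyTorch; the small network, the placeholder residual function, and the weighting are illustrative stand-ins for the paper's coupled pressure/saturation blocks and the two-phase flow equations.

```python
import torch
import torch.nn as nn

class TinySurrogate(nn.Module):
    """Stand-in for the coupled pressure/saturation network blocks."""
    def __init__(self, n_in=8, n_out=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 64), nn.Tanh(), nn.Linear(64, n_out))

    def forward(self, x):
        return self.net(x)  # columns: [pressure, water saturation]

def physics_residual(pred):
    """Placeholder for the discretised two-phase flow residual; in the real
    model this would evaluate the governing PDEs on the predicted fields."""
    p, sw = pred[:, 0], pred[:, 1]
    return (p - sw).pow(2).mean()  # illustrative only

model = TinySurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 8)            # hypothetical well-condition / parameter inputs
y = torch.randn(32, 2)            # hypothetical labelled simulation outputs
lam = 0.1                         # weight of the theory-guided term

for step in range(100):
    pred = model(x)
    loss = nn.functional.mse_loss(pred, y) + lam * physics_residual(pred)
    opt.zero_grad()
    loss.backward()
    opt.step()
```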

Keywords: coupled theory-guided convolutional neural network, multi-well conditions, surrogate model, subsurface oil-water phase

Procedia PDF Downloads 61
159 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It consists of several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, Visual C++ based software called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures. These measures are the fractal dimension, Jeffrey’s measure, and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can classify whether the signal is fractal or not. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of this software, a set of EEG signals was given as input and the results were computed and plotted. This software is useful not only for fundamental fractal analysis of signals but also for other purposes. For instance, by analyzing the Hurst exponent plot of a given EEG signal in patients with epilepsy, the onset of a seizure can be predicted by noticing sudden changes in the plot.
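
A rough sketch of the sliding-window idea is given below, estimating the Hurst exponent by rescaled-range (R/S) analysis over a window of 10% of the data; this is a standard textbook estimator and not necessarily the exact implementation used in FRATSAN.

```python
import numpy as np

def hurst_rs(x):
    """Estimate the Hurst exponent of a 1-D signal by rescaled-range analysis."""
    n = len(x)
    sizes = np.unique(np.logspace(1, np.log10(n // 2), 8, dtype=int))
    rs = []
    for s in sizes:
        segments = x[: n // s * s].reshape(-1, s)
        dev = np.cumsum(segments - segments.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)           # range of cumulative deviations
        sd = segments.std(axis=1)
        rs.append(np.mean(r[sd > 0] / sd[sd > 0]))      # rescaled range R/S
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(2)
signal = rng.normal(size=5000)                          # stand-in signal (white noise)

window = max(int(0.10 * len(signal)), 100)              # 10% sliding window
hursts = [hurst_rs(signal[i:i + window]) for i in range(0, len(signal) - window, 50)]
print(f"mean windowed Hurst exponent: {np.mean(hursts):.2f}")
```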

Keywords: EEG signals, fractal analysis, fractal dimension, hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 433
158 CFD Simulation Approach for Developing New Powder Dispensing Device

Authors: Revanth Rallapalli

Abstract:

Manually dispensing powders can be difficult, as it requires gradually pouring the material and checking the dispensed amount on a scale. Current systems are manual and non-continuous in nature, are user-dependent, and make powder dispensation difficult to control. Recurrent dosing of powdered medicines in precise amounts, quickly and accurately, has been an all-time challenge. Various new powder dispensing mechanisms are being designed to overcome these challenges. A battery-operated screw conveyor mechanism is being developed to overcome the above problems. These inventions are numerically evaluated at the concept development level by employing Computational Fluid Dynamics (CFD) of gas-solids multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. This paper describes a simulation of powder dispensation from the trocar’s end in which the powder is treated as a secondary phase in the air and simulated using the technique called the Dense Discrete Phase Model incorporated with the Kinetic Theory of Granular Flow (DDPM-KTGF). Considering a powder volume fraction of 50%, the transportation of powder from the inlet side to the trocar’s end is driven by the rotation of the screw conveyor. The performance is calculated for a 1-second time frame in an unsteady computation. This methodology will help designers develop design concepts that improve dispensation and the effective area within a quick turnaround time.

Keywords: multiphase flow, screw conveyor, transient, dense discrete phase model (DDPM), kinetic theory of granular flow (KTGF)

Procedia PDF Downloads 122
157 Simulation of Dynamic Behavior of Seismic Isolators Using a Parallel Elasto-Plastic Model

Authors: Nicolò Vaiana, Giorgio Serino

Abstract:

In this paper, a one-dimensional (1D) Parallel Elasto-Plastic Model (PEPM), able to simulate the uniaxial dynamic behavior of seismic isolators having a continuously decreasing tangent stiffness with increasing displacement, is presented. The parallel modeling concept is applied to discretize the continuously decreasing tangent stiffness function, thus allowing the dynamic behavior of seismic isolation bearings to be simulated by putting linear elastic and nonlinear elastic-perfectly plastic elements in parallel. The mathematical model has been validated by comparing the experimental force-displacement hysteresis loops, obtained by testing a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted numerically. Good agreement between the simulated and experimental results shows that the proposed model can be an effective numerical tool to predict the force-displacement relationship of seismic isolators within relatively large displacements. Compared to the widely used Bouc-Wen model, the proposed one avoids the numerical solution of a first-order nonlinear ordinary differential equation at each time step of a nonlinear time history analysis, thus reducing the computational effort, and requires the evaluation of only three model parameters from experimental tests, namely the initial tangent stiffness, the asymptotic tangent stiffness, and a parameter defining the transition from the initial to the asymptotic tangent stiffness.
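
The parallel-model idea can be sketched numerically as one linear elastic spring in parallel with a few elastic-perfectly plastic elements, which together reproduce a softening hysteresis loop without solving a differential equation at each step; the stiffnesses, yield forces, and displacement history below are arbitrary illustrative values, not the identified isolator parameters.

```python
import numpy as np

def pepm_force(displacements, k_linear, k_plastic, f_yield):
    """Force history of a parallel elasto-plastic model: a linear spring plus
    elastic-perfectly plastic elements, each updated with a return-mapping step."""
    plastic_disp = np.zeros(len(k_plastic))     # plastic displacement of each element
    forces = []
    for u in displacements:
        f = k_linear * u                        # linear elastic contribution
        for j, (k, fy) in enumerate(zip(k_plastic, f_yield)):
            trial = k * (u - plastic_disp[j])   # elastic trial force
            if abs(trial) > fy:                 # yield: slide the plastic element
                plastic_disp[j] = u - np.sign(trial) * fy / k
                trial = np.sign(trial) * fy
            f += trial
        forces.append(f)
    return np.array(forces)

# Sinusoidal displacement history (e.g. imposed on a bearing in a test)
t = np.linspace(0, 4 * np.pi, 800)
u = 0.05 * np.sin(t)                            # metres
f = pepm_force(u, k_linear=200.0, k_plastic=[1500.0, 800.0], f_yield=[20.0, 30.0])
print(f"peak restoring force: {f.max():.1f} kN (illustrative units)")
```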

Keywords: base isolation, earthquake engineering, parallel elasto-plastic model, seismic isolators, softening hysteresis loops

Procedia PDF Downloads 257
156 Virtual Process Hazard Analysis (PHA) of a Nuclear Power Plant (NPP) Using the Failure Mode and Effects Analysis (FMEA) Technique

Authors: Lormaine Anne A. Branzuela, Elysa V. Largo, Monet Concepcion M. Detras, Neil C. Concibido

Abstract:

The electricity demand is still increasing, and currently, the Philippine government is investigating the feasibility of operating the Bataan Nuclear Power Plant (BNPP) to address the country’s energy problem. However, the lack of process safety studies on the BNPP focusing on the effects of hazardous substances on the integrity of structures, equipment, and other components has made the plant's operationalization questionable to the public. The three major nuclear power plant incidents – TMI-2, Chernobyl, and Fukushima – have made many people hesitant to include nuclear energy in the energy matrix. This study focused on the safety evaluation of the possible operation of a nuclear power plant installed with a Pressurized Water Reactor (PWR), which is similar to the BNPP. Failure Mode and Effects Analysis (FMEA) is one of the Process Hazard Analysis (PHA) techniques used for the identification of equipment failure modes and the minimization of their consequences. Using the FMEA technique, this study identified 116 different failure modes in total. Computation and ranking of the risk priority number (RPN) and criticality rating (CR) showed that failure of the reactor coolant pump due to earthquakes is the most critical failure mode. This hazard scenario could lead to a nuclear meltdown and radioactive release, as identified by the FMEA team. Safeguards and recommended risk reduction strategies to lower the RPN and CR were identified such that the effects are minimized, the likelihood of occurrence is reduced, and failure detection is improved.
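
For reference, the sketch below shows how RPN ranking in an FMEA worksheet is typically computed (RPN = severity × occurrence × detection); the failure modes and 1-10 ratings listed are invented placeholders, not the study's actual worksheet entries.

```python
# Hypothetical FMEA rows: (failure mode, severity, occurrence, detection), each rated 1-10
worksheet = [
    ("Reactor coolant pump failure due to earthquake", 10, 4, 7),
    ("Steam generator tube leak",                        8, 3, 5),
    ("Loss of off-site power",                           7, 5, 3),
]

ranked = sorted(
    ((mode, s * o * d) for mode, s, o, d in worksheet),  # RPN = S x O x D
    key=lambda item: item[1],
    reverse=True,
)

for mode, rpn in ranked:
    print(f"RPN={rpn:4d}  {mode}")
```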

Keywords: PHA, FMEA, nuclear power plant, Bataan Nuclear Power Plant

Procedia PDF Downloads 96
155 General Formula for Water Surface Profile over Side Weir in the Combined, Trapezoidal and Exponential, Channels

Authors: Abdulrahman Abdulrahman

Abstract:

A side weir is a hydraulic structure set into the side of a channel. This structure is used for water level control in channels, to divert flow from a main channel into a side channel when the water level in the main channel exceeds a specific limit, and as a storm overflow from urban sewerage systems. Computation of the water surface profile over the side weir is essential to determine its flow rate. Analytical solutions for the water surface profile along a rectangular side weir are available only for the special cases of rectangular and trapezoidal channels, assuming constant specific energy. In this paper, a rectangular side weir located in a combined (trapezoidal with exponential) channel is considered. By expanding binomial series of integer and fractional powers and using the reduction formula for cosine-function integrals, a general analytical formula was obtained for the water surface profile along a side weir in a combined (trapezoidal with exponential) channel. Since triangular, rectangular, trapezoidal, and parabolic cross-sections are special cases of the combined cross-section, the derived formula is applicable to triangular, rectangular, and trapezoidal cross-sections as an analytical solution, and to the parabolic cross-section as a semi-analytical solution with a maximum relative error smaller than 0.76%. The proposed solution should be a useful engineering tool for the evaluation and design of side weirs in open channels.

Keywords: analytical solution, combined channel, exponential channel, side weirs, trapezoidal channel, water surface profile

Procedia PDF Downloads 214
154 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning

Authors: Guang Zou, Kian Banisoleiman, Arturo González

Abstract:

Crack initiation and propagation threaten the structural integrity of welded joints, and inspections are normally assigned based on crack propagation models. However, the approach based on crack propagation models may not be applicable to some high-quality welded joints, because the initial flaws in them may be so small that it may take a long time for the flaws to develop into a detectable size. This raises a concern regarding the inspection planning of high-quality welded joints, as there is no generally accepted approach for modeling the whole fatigue process that includes the crack initiation period. In order to address the issue, this paper reviews treatment methods for the crack initiation period and the initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches: 1) neglecting the crack initiation period and fitting a probabilistic distribution for the initial crack size based on statistical data; 2) extrapolating the crack propagation stage to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) assuming a fixed detectable initial crack size and fitting a probabilistic distribution for the crack initiation time based on specimen tests; and 4) modeling the crack initiation and propagation stages separately using small-crack growth theories and the Paris law or similar models. The conclusion is that, in view of the trade-off between accuracy and computational effort, calibration of a small fictitious initial crack size to S-N curves is the most efficient approach.
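
As an illustration of why the calibrated fictitious initial crack size matters, the sketch below integrates a generic Paris-law propagation model from an assumed initial crack size to a critical size; the material constants, stress range, and geometry factor are placeholder values rather than ones recommended by the paper.

```python
import numpy as np

def cycles_to_failure(a0, a_crit, delta_sigma, C, m, Y=1.0, steps=20000):
    """Numerically integrate the Paris law da/dN = C * (dK)^m from an initial
    crack size a0 to a critical size a_crit (lengths in metres)."""
    a_grid = np.linspace(a0, a_crit, steps)
    delta_k = Y * delta_sigma * np.sqrt(np.pi * a_grid)   # stress intensity range, MPa*sqrt(m)
    dn_da = 1.0 / (C * delta_k ** m)                       # cycles per metre of crack growth
    # Trapezoidal integration of dN/da over the crack-size grid
    return float(np.sum(0.5 * (dn_da[1:] + dn_da[:-1]) * np.diff(a_grid)))

# Placeholder values for a welded steel detail
C, m = 3.0e-12, 3.0          # Paris constants (units consistent with MPa*sqrt(m))
delta_sigma = 80.0           # stress range, MPa
for a0_mm in (0.05, 0.1, 0.5):
    n = cycles_to_failure(a0_mm * 1e-3, a_crit=20e-3, delta_sigma=delta_sigma, C=C, m=m)
    print(f"initial crack {a0_mm:.2f} mm -> {n:.2e} cycles")
```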

Keywords: crack initiation, fatigue reliability, inspection planning, welded joints

Procedia PDF Downloads 332
153 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition systems find many applications in surveillance and human-computer interaction systems. As these applications are of much importance and demand high accuracy, greater robustness is expected of the face recognition system, together with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class scatter and maximize the inter-class scatter. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database, and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
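
A compact sketch of the overall pipeline (Gabor filter bank, dimensionality reduction, k-NN classification) is given below; it uses the standard one-dimensional LDA from scikit-learn rather than the 2D-LDA variants of the paper, and the filter-bank parameters and data are illustrative placeholders.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(img, scales=(3, 5), orientations=4):
    """Convolve a grayscale image with a small Gabor filter bank and
    return the down-sampled responses as one feature vector."""
    feats = []
    for ksize in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = cv2.getGaborKernel((ksize, ksize), 2.0, theta, 4.0, 0.5, 0)
            resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
            feats.append(resp[::4, ::4].ravel())         # reduce dimension
    return np.concatenate(feats)

# Hypothetical data: replace with AT&T / faces94 images and subject labels
rng = np.random.default_rng(3)
images = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = np.repeat(np.arange(8), 5)                       # 8 subjects x 5 images

X = np.array([gabor_features(im) for im in images])
lda = LinearDiscriminantAnalysis(n_components=7).fit(X, labels)   # at most classes-1
knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X), labels)
print("training accuracy:", knn.score(lda.transform(X), labels))
```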

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 449
152 Labour Productivity Measurement and Control Standards for Hotels

Authors: Kristine Joy Simpao

Abstract:

Improving labour productivity is one of the most enthralling and challenging aspects of managing hotel and restaurant businesses. The demand to secure consistent productivity has become an increasingly pivotal role of managers seeking to survive and sustain the business. Besides making the business profitable, they are bound to make every resource productive and effective towards achieving company goals while maximizing the value of the organization. This paper examines what productivity means to the service industry, in particular to the hotel industry. This is underpinned by an investigation of the extent to which respondent hotels practice the labour productivity aspect in the areas of materials management, human resource management, and leadership management, and, in addition, by computing labour productivity ratios using simple hotel productivity ratios in order to find suitable measurement and control standards for hotels, with SBMA, Olongapo City as the locale of the study. The findings show that the hotels' labour productivity ratings are not perfect, with some practices falling far below, particularly in strategic and operational decisions for improving the performance and productivity of their human resources. The findings further show no significant difference in ratings among respondent types in all areas, indicating a similar perception of the weak implementation of some of the indicators of labour productivity practice. Furthermore, the computation of labour productivity efficiency ratios shows that the number of employees and labour productivity practices are inversely related. This study provides potential measurement and control standards for the enhancement of hotel labour productivity. These standards should also be customized for standard hotels in the Subic Bay Freeport Zone to assist hotel owners in increasing labour productivity while meeting company goals and objectives effectively.

Keywords: labour productivity, hotel, measurement and control, standards, efficiency ratios, practices

Procedia PDF Downloads 292
151 Automation of Embodied Energy Calculations for Buildings through Building Information Modelling

Authors: Ahmad Odeh

Abstract:

Researchers are currently more concerned with calculations of energy at the operational stage, mainly due to its larger environmental impact, but the fact remains that embodied energy represents a substantial contributor that is unaccounted for in the overall energy computation method. The calculation of materials’ embodied energy during the construction stage is complicated due to the various factors involved. The equipment used, fuel needed, and electricity required for each type of material vary with location, and thus the embodied energy will differ for each project. Moreover, the methods used in manufacturing, transporting, and putting the materials in place have a significant influence on their embodied energy. This has made it difficult to calculate or even benchmark the usage of such energies. This paper presents a model aimed at calculating embodied energies based on such variabilities. It presents a systematic approach that uses an efficient method of calculation to provide new insight for the selection of construction materials. The model is developed in a BIM environment. The quantification of materials’ energy is determined over the three main stages of their lifecycle: manufacturing, transporting, and placing. The model uses three major databases, each of which contains a set of the construction materials that are most commonly used in building projects. The first dataset holds information about the energy required to manufacture each type of material, the second includes information about the energy required for transporting the materials, and the third stores information about the energy required by machinery to place the materials in their intended locations. Through geospatial data analysis, the model automatically calculates the distances between the suppliers and construction sites and then uses the dataset information for energy computations. The sum of all the energies is computed automatically, and the model then provides designers with a list of usable equipment along with the associated embodied energies.

Keywords: BIM, lifecycle energy assessment, building automation, energy conservation

Procedia PDF Downloads 173
150 Utilization of “Adlai” (Coix lacryma-jobi L) Flour as Wheat Flour Extender in Selected Baked Products in the Philippines

Authors: Rolando B. Llave Jr.

Abstract:

In many countries, wheat flour is used as an essential component in the production/preparation of bread and other baked products considered to have a significant role in man’s diet. Partial replacement of wheat flour with other flours (composite flour) in the preparation of these products is seen as a solution to the scarcity of wheat flour (in non-wheat-producing countries) and as a means of improved nourishment. In composite flour, the other flours may come from cereals, legumes, root crops, and other starch-rich sources. Many countries utilize whatever is locally available. “Adlai” or Job’s tears is a tall cereal plant that belongs to the same grass family as wheat, rice, and corn. In some countries, it is used as an ingredient in many dishes and in alcoholic and non-alcoholic beverages. As part of the Food Staple Self-Sufficiency Program (FSSP) of the Department of Agriculture (DA) in the Philippines, “adlai” is being promoted as an alternative food source for Filipinos. In this study, grits from the seeds of “adlai” were milled into flour. The resulting flour was then used as a partial replacement for wheat flour in selected baked products, namely “pan de sal” (salt bread), cupcakes, and cookies. The supplementation with “adlai” flour ranged from 20% to 45%: 20%-35% for “pan de sal”, 30%-45% for cupcakes, and 25%-40% for cookies. The study was composed of four (4) phases. Phase I covered product formulation studies. Phase II included the acceptability test/sensory evaluation of the baked products, followed by statistical analysis of the data gathered. Phase III was the computation of the theoretical protein content of the most acceptable “pan de sal”, cupcake, and cookie, and lastly, in Phase IV, the cost-benefit was analyzed, specifically in terms of direct material cost.

Keywords: “adlai”, composite flour, supplementation, sensory evaluation

Procedia PDF Downloads 826
149 Hydrological Response of the Glacierised Catchment: Himalayan Perspective

Authors: Sonu Khanal, Mandira Shrestha

Abstract:

Snow and glaciers are the largest dependable reserves of water for the river systems originating in the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of release of water from snow and glaciers are needed for efficient management of water resources. This research assesses the energy exchanges between the snowpack, the air above, and the soil below according to mass and energy balance, which makes it more appropriate than models using a simple temperature index for snow and glacier melt computation. UEBGrid, a distributed energy-based model, is used to calculate the melt, which is then routed by Geo-SFM. The model's robustness is maintained by incorporating the albedo generated from Landsat-7 ETM images on a seasonal basis for the year 2002-2003 and a substrate map derived from TM. The substrate file predominantly includes four major thematic layers, viz. snow, clean ice, glaciers, and barren land. This approach makes use of CPC RFE-2 and MERRA gridded data sets as the source of precipitation and climatic variables. The subsequent model run for the years 2002-2008 shows that a total annual melt of 17.15 m is generated from the Marshyangdi Basin, of which 71% is contributed by glaciers, 18% by rain, and the rest by snowmelt. The albedo file is decisive in governing the melt dynamics, as a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt routed with the land cover and soil variables using Geo-SFM shows a Nash-Sutcliffe efficiency of 0.60 against observed discharge for the study period.
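
For reference, a tiny sketch of the Nash-Sutcliffe efficiency used above to score the routed melt against observed discharge is given below; the discharge values are made up for illustration.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical daily discharge series (m^3/s)
obs = np.array([110.0, 150.0, 240.0, 310.0, 280.0, 190.0, 130.0])
sim = np.array([100.0, 170.0, 220.0, 290.0, 300.0, 180.0, 120.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```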

Keywords: glacier, glacier melt, snowmelt, energy balance

Procedia PDF Downloads 434
148 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters

Authors: Iveta Bryjova

Abstract:

Recording of viscoelastic strain-vs-time curves with the aid of the suction method and a follow-up analysis, resulting in the evaluation of standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma), and reconstructive surgery. For elimination of the skin-thickness contribution, usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to the relative ones (i.e. those expressed as a ratio of two dimensional parameters), like gross elasticity, net elasticity, biological elasticity, or Qu's area parameters, conventionally referred to in the literature and practice as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of parameters R2 and Q1, the remaining ones substantially depend on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental fact that the inflection time comes about 0.1 sec after the suction switch-on/off, which diminishes the credibility of the parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method of improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky–Golay filtering is presented. The method allows computation of derivatives of the smoothed strain-vs-time curve, more exact location of the inflection point, and consequently more reliable values of the aforementioned viscoelastic parameters. The improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study under the new method and by comparing its results with those provided by the methods that have been used so far.
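
A brief sketch of the core idea is given below: smooth the strain-vs-time curve with a Savitzky–Golay filter, obtain its derivatives from the same local polynomial fit, and locate the inflection point from the zero crossing of the second derivative near the point of maximum slope; the synthetic curve and filter settings are illustrative only.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic strain-vs-time curve around the suction switch-on (illustrative only)
dt = 0.002                                                     # s
t = np.arange(0.0, 0.6, dt)
strain = 1.0 / (1.0 + np.exp(-(t - 0.1) / 0.02)) + 0.3 * t     # fast rise + slow creep
strain += np.random.default_rng(4).normal(0.0, 0.003, t.size)  # measurement noise

# Savitzky-Golay smoothing and derivatives from the same local fit
win, order = 31, 3
smooth = savgol_filter(strain, win, order)
d1 = savgol_filter(strain, win, order, deriv=1, delta=dt)
d2 = savgol_filter(strain, win, order, deriv=2, delta=dt)

# Inflection: zero crossing of the second derivative closest to the maximum slope
zero_cross = np.where(np.diff(np.sign(d2)) != 0)[0]
i_inflect = zero_cross[np.argmin(np.abs(zero_cross - np.argmax(d1)))]
print(f"estimated inflection time: {t[i_inflect]:.3f} s")
```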

Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity

Procedia PDF Downloads 275
147 Effect of Built in Polarization on Thermal Properties of InGaN/GaN Heterostructures

Authors: Bijay Kumar Sahoo

Abstract:

An important feature of InₓGa₁-ₓN/GaN heterostructures is the strong built-in polarization (BIP) electric field at the hetero-interface due to spontaneous (sp) and piezoelectric (pz) polarizations. The intensity of this electric field reaches several MV/cm. This field has a profound impact on optical, electrical, and thermal properties. In this work, the effect of the BIP field on the thermal conductivity of the InₓGa₁-ₓN/GaN heterostructure has been investigated theoretically. The interaction between the elastic strain and the built-in electric field induces an additional electric polarization. This additional polarization contributes to the elastic constant of the InₓGa₁-ₓN alloy. This in turn modifies the material parameters of InₓGa₁-ₓN. The BIP mechanism enhances the elastic constant, phonon velocity, and Debye temperature and their bowing constants in the InₓGa₁-ₓN alloy. These enhanced thermal parameters increase the phonon mean free path, which boosts the thermal conduction process. The thermal conductivity (k) of the InₓGa₁-ₓN alloy has been estimated for x=0, 0.1, 0.3, and 0.9. The computation finds that, irrespective of In content, the room-temperature k of the InₓGa₁-ₓN/GaN heterostructure is enhanced by the BIP mechanism. Our analysis shows that at a certain temperature k with and without BIP cross over. Below this temperature, k with the BIP field is lower than k without BIP; above this temperature, however, k with the BIP field is significantly enhanced by the BIP mechanism, so that it becomes higher than k without the BIP field. The crossover temperature is the primary pyroelectric transition temperature. The pyroelectric transition temperature of the InₓGa₁-ₓN alloy has been predicted for different x. This signature of pyroelectric behaviour suggests that thermal conductivity can reveal pyroelectricity in the InₓGa₁-ₓN alloy. The composition-dependent room-temperature k for x=0.1 and 0.3 is in line with prior experimental studies. The results can be used to minimize the self-heating effect in InₓGa₁-ₓN/GaN heterostructures.

Keywords: built-in polarization, phonon relaxation time, thermal properties of InₓGa₁-ₓN/GaN heterostructure, self-heating

Procedia PDF Downloads 384
146 Vulnerability of People to Climate Change: Influence of Methods and Computation Approaches on Assessment Outcomes

Authors: Adandé Belarmain Fandohan

Abstract:

Climate change has become a major concern globally, particularly in rural communities that have to find rapid coping solutions. Several vulnerability assessment approaches have been developed in the last decades. This comes with a higher risk of different methods resulting in different conclusions, thereby making comparisons difficult and decision-making inconsistent across areas. The effect of methods and computational approaches on estimates of people’s vulnerability was assessed using data collected from the Gambia. Twenty-four indicators reflecting the vulnerability components (exposure, sensitivity, and adaptive capacity) were selected for this purpose. Data were collected through household surveys and key informant interviews. One hundred and fifteen respondents were surveyed across six communities and two administrative districts. Results were compared over three computational approaches: maximum-value transformation normalization, z-score transformation normalization, and simple averaging. Regardless of the approach used, communities that have high exposure to climate change and extreme events were the most vulnerable. Furthermore, vulnerability was strongly related to the socio-economic characteristics of farmers. The survey evidenced variability in vulnerability among communities and administrative districts. Comparing output across approaches, people in the study area were overall found to be highly vulnerable using the simple average and maximum-value transformation, whereas they were only moderately vulnerable using the z-score transformation approach. It is suggested that assessment-approach-induced discrepancies be accounted for in international debates, so that assessment approaches are harmonized/standardized and outputs become comparable across regions. This would also likely increase the relevance of decision-making for adaptation policies.
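
The sketch below contrasts the three computational approaches named above (maximum-value transformation, z-score transformation, and simple averaging) on the same hypothetical indicator matrix; the indicator values are invented, and the point is only that the resulting index scales, and hence vulnerability classifications, differ.

```python
import numpy as np

# Hypothetical household-level indicators (rows = households, cols = indicators),
# already oriented so that larger values mean higher vulnerability.
rng = np.random.default_rng(5)
X = rng.uniform(0.0, 10.0, size=(115, 24))

def max_value_index(X):
    """Normalise each indicator by its maximum value, then average."""
    return (X / X.max(axis=0)).mean(axis=1)

def zscore_index(X):
    """Standardise each indicator to zero mean and unit variance, then average."""
    return ((X - X.mean(axis=0)) / X.std(axis=0)).mean(axis=1)

def simple_average_index(X):
    """Average the raw indicator values without any normalisation."""
    return X.mean(axis=1)

for name, idx in [("max-value", max_value_index(X)),
                  ("z-score", zscore_index(X)),
                  ("simple average", simple_average_index(X))]:
    print(f"{name:>14}: min={idx.min():6.2f}  mean={idx.mean():6.2f}  max={idx.max():6.2f}")
```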

Keywords: maximum value transformation, simple averaging, vulnerability assessment, West Africa, z-score transformation

Procedia PDF Downloads 80
145 Window Analysis and Malmquist Index for Assessing Efficiency and Productivity Growth in a Pharmaceutical Industry

Authors: Abbas Al-Refaie, Ruba Najdawi, Nour Bata, Mohammad D. AL-Tahat

Abstract:

The pharmaceutical industry is an important component of health care systems throughout the world. Measurement of a production unit's performance is crucial in determining whether it has achieved its objectives or not. This paper applies data envelopment analysis (DEA) window analysis to assess the efficiencies of two packaging lines, Allfill (new) and DP6, in the penicillin plant of a Jordanian medical company in 2010. The CCR and BCC models are used to estimate technical efficiency, pure technical efficiency, and scale efficiency. Further, the Malmquist productivity index is computed and employed to assess productivity growth relative to a reference technology. Two primary issues are addressed in the computation of Malmquist indices of productivity growth. The first issue is the measurement of productivity change over the period, while the second is to decompose changes in productivity into what are generally referred to as a ‘catching-up’ effect (efficiency change) and a ‘frontier shift’ effect (technological change). Results showed that the DP6 line outperforms Allfill in technical and pure technical efficiency. However, the Allfill line outperforms the DP6 line in scale efficiency. The obtained efficiency values can guide production managers in taking effective decisions related to the operation, management, and size of the plant. Moreover, both machines exhibit clear fluctuations in technological change, which is the main reason for the positive total factor productivity change. That is, installing a new Allfill production line can be of great benefit in increasing productivity. In conclusion, DEA window analysis combined with the Malmquist index is a supportive measure for assessing efficiency and productivity in the pharmaceutical industry.

Keywords: window analysis, malmquist index, efficiency, productivity

Procedia PDF Downloads 587
144 Microwave Single Photon Source Using Landau-Zener Transitions

Authors: Siddhi Khaire, Samarth Hawaldar, Baladitya Suri

Abstract:

As efforts towards quantum communication advance, the need for single photon sources becomes pressing. Due to the extremely low energy of a single microwave photon, efforts to build single photon sources and detectors in the microwave range are relatively recent. We plan to use a Cooper Pair Box (CPB) that has a ‘sweet spot’ where the two energy levels have minimal separation. Moreover, these qubits have a fairly large anharmonicity, making them close to ideal two-level systems. If the external gate voltage of these qubits is varied rapidly while passing through the sweet spot, the qubit can be excited almost deterministically due to the Landau-Zener effect. The rapid change of the gate control voltage through the sweet spot induces a non-adiabatic population transfer from the ground to the excited state. The qubit eventually decays into the emission line, emitting a single photon. The advantage of this setup is that the qubit can be excited without any coherent microwave excitation, thereby effectively increasing the usable source efficiency due to the absence of control-pulse microwave photons. Since the probability of a Landau-Zener transition can be made close to unity by appropriate design of parameters, this source behaves as an on-demand source of single microwave photons. The large anharmonicity of the CPB also ensures that only one excited state is involved in the transition and that multiple-photon output is highly improbable. Such a system has so far not been implemented and would find many applications in the areas of quantum optics, quantum computation, and quantum communication.
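
A short numerical sketch of the standard Landau-Zener expression for the diabatic transition probability, P = exp(-2πΔ²/(ħ|v|)), where 2Δ is the minimum gap at the sweet spot and v is the sweep rate of the diabatic energy difference, is given below; the gap and sweep rates are purely illustrative of the scaling and are not device parameters.

```python
import numpy as np

hbar = 1.054571817e-34          # J*s
h = 6.62607015e-34              # J*s

def landau_zener_p(gap_hz, sweep_rate_hz_per_s):
    """Probability of a diabatic (non-adiabatic) transition,
    P = exp(-2*pi*Delta^2 / (hbar*|v|)), where 2*Delta is the minimum gap
    and v is the sweep rate of the diabatic energy difference."""
    delta = 0.5 * h * gap_hz                    # half the minimum gap, in joules
    v = h * sweep_rate_hz_per_s                 # sweep rate of the energy difference, J/s
    return np.exp(-2.0 * np.pi * delta ** 2 / (hbar * v))

gap_hz = 200e6                                  # hypothetical 200 MHz sweet-spot gap
for rate in (1e17, 1e18, 1e19, 1e20):           # sweep rates in Hz/s (illustrative)
    p = landau_zener_p(gap_hz, rate)
    print(f"sweep rate {rate:.0e} Hz/s -> excitation probability {p:.3f}")
```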

Keywords: quantum computing, quantum communication, quantum optics, superconducting qubits, flux qubit, charge qubit, microwave single photon source, quantum information processing

Procedia PDF Downloads 64