Search results for: multiagent system
769 Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models
Authors: Rohitash Chandra, Christian W. Omlin
Abstract:
We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs) and train it with genetic algorithms to learn and represent dynamical systems. The hybrid architecture is trained on a set of deterministic finite-state automata strings, and its generalization performance is observed on a new set of strings not present in the training data. In this way, we show that the hybrid HMM-RNN system can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm and with different weight initializations to determine which settings train the hybrid architecture best. The results show that the hybrid architecture can learn and represent dynamical systems, with the best training and generalization performance achieved when it is initialized with random real-valued weights in the range -15 to 15.
Keywords: Deterministic finite-state automata, genetic algorithm, hidden Markov models, hybrid systems, recurrent neural networks.
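As an illustration of the evolutionary training idea described above, the sketch below evolves the real-valued weights of a plain Elman-style RNN (not the HMM-inspired hybrid of the paper) with a simple genetic algorithm on a toy DFA-recognisable language, using random initialization in the range -15 to 15 as reported in the abstract. The parity task, population size, and mutation settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DFA task: accept binary strings with an even number of 1s (a 2-state automaton).
def make_data(n=200, length=8):
    X = rng.integers(0, 2, size=(n, length))
    y = (X.sum(axis=1) % 2 == 0).astype(float)
    return X, y

def rnn_accuracy(weights, X, y, hidden=4):
    # Unpack a flat weight vector into simple Elman-RNN matrices.
    w_in = weights[:hidden].reshape(1, hidden)
    w_rec = weights[hidden:hidden + hidden * hidden].reshape(hidden, hidden)
    w_out = weights[hidden + hidden * hidden:].reshape(hidden, 1)
    preds = []
    for seq in X:
        h = np.zeros(hidden)
        for bit in seq:
            h = np.tanh(bit * w_in[0] + h @ w_rec)
        preds.append(1.0 if (h @ w_out)[0] > 0 else 0.0)
    return np.mean(np.array(preds) == y)

def genetic_algorithm(pop_size=30, generations=50, hidden=4, w_range=15.0):
    X, y = make_data()
    n_weights = hidden + hidden * hidden + hidden
    # Random real-valued initialization in [-15, 15], as in the abstract.
    pop = rng.uniform(-w_range, w_range, size=(pop_size, n_weights))
    for _ in range(generations):
        fitness = np.array([rnn_accuracy(ind, X, y, hidden) for ind in pop])
        order = np.argsort(fitness)[::-1]
        parents = pop[order[:pop_size // 2]]
        # One-point crossover plus sparse Gaussian mutation to refill the population.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_weights)
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, 0.5, n_weights) * (rng.random(n_weights) < 0.1)
            children.append(child)
        pop = np.vstack([parents, children])
    fitness = np.array([rnn_accuracy(ind, X, y, hidden) for ind in pop])
    return pop[fitness.argmax()], fitness.max()

best, acc = genetic_algorithm()
print(f"best training accuracy: {acc:.2f}")
```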
768 An Investigation on the Accuracy of Nonlinear Static Procedures for Seismic Evaluation of Buckling-restrained Braced Frames
Authors: An Hong Nguyen, Chatpan Chintanapakdee, Toshiro Hayashikawa
Abstract:
Presented herein is an assessment of current nonlinear static procedures (NSPs) for the seismic evaluation of buckling-restrained braced frames (BRBFs), which have become a favorable lateral-force resisting system for earthquake-resistant buildings. The bias and accuracy of the modal pushover analysis (MPA), improved modal pushover analysis (IMPA), and mass proportional pushover (MPP) procedures are comparatively investigated when applied to BRBF buildings subjected to two sets of strong ground motions. The assessment is based on a comparison of seismic displacement demands such as target roof displacements, peak floor/roof displacements, and inter-story drifts. The NSP estimates are compared to 'exact' results from nonlinear response history analysis (NLRHA). The response statistics presented show that the MPP procedure tends to significantly overestimate the seismic demands of the lower stories of the tall buildings considered in this study, while the MPA and IMPA procedures provide reasonably accurate estimates of the maximum inter-story drift over all stories of the studied BRBF systems.
Keywords: Buckling-restrained braced frames, nonlinear response history analysis, nonlinear static procedure, seismic demands.
767 Face Recognition Using Double Dimension Reduction
Authors: M. A. Anjum, M. Y. Javed, A. Basit
Abstract:
In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient while improving recognition results. In pattern recognition, the discriminative information in an image increases with resolution only up to a point; consequently, face recognition results improve with increasing face image resolution and level off beyond a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that gives the best recognition results. The Discrete Cosine Transform (DCT), chosen for its computational speed and feature extraction potential, is then applied to the decimated image, and a subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off among the decimation factor, the number of retained DCT coefficients, and the recognition rate at minimum computation is obtained. Preprocessing of the image is carried out to increase robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database, and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying pose, intensity, and illumination.
Keywords: Biometrics, DCT, Face Recognition, Feature extraction.
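A minimal illustrative sketch (not from the paper) of the double dimension reduction described above: image decimation followed by retention of a low-to-mid-frequency block of 2-D DCT coefficients. The nearest-neighbour classifier, decimation factor, and coefficient count are assumptions for the example.

```python
import numpy as np
from scipy.fftpack import dct

def dct_features(image, decimation=2, n_coeffs=64):
    """Decimate a 2-D grayscale face image, take the 2-D DCT, and keep a
    square block of low-to-mid frequency coefficients as the feature vector."""
    small = image[::decimation, ::decimation].astype(float)      # image decimation
    coeffs = dct(dct(small.T, norm='ortho').T, norm='ortho')     # 2-D DCT
    k = int(np.sqrt(n_coeffs))
    return coeffs[:k, :k].ravel()                                # low/mid-frequency subset

def nearest_neighbour(train_feats, train_labels, probe_feat):
    """Classify a probe by the closest training feature vector (Euclidean distance)."""
    d = np.linalg.norm(train_feats - probe_feat, axis=1)
    return train_labels[int(np.argmin(d))]

# Toy usage with random "faces"; real use would load e.g. ORL images instead.
rng = np.random.default_rng(1)
faces = rng.random((10, 112, 92))          # 10 images at ORL resolution
labels = np.arange(10)
feats = np.stack([dct_features(f) for f in faces])
print(nearest_neighbour(feats, labels, dct_features(faces[3])))   # -> 3
```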
766 Motion Analysis for Duplicate Frame Removal in Wireless Capsule Endoscope Video
Authors: Min Kook Choi, Hyun Gyu Lee, Ryan You, Byeong-Seok Shin, Sang-Chul Lee
Abstract:
Wireless capsule endoscopy (WCE) has rapidly found wide application in the medical domain over the last ten years thanks to its noninvasiveness for patients and its support for thorough inspection of a patient's entire digestive system, including the small intestine. However, one of the main barriers to an efficient clinical inspection procedure is the large amount of effort required from clinicians to inspect the huge volume of data collected during an examination, i.e., over 55,000 frames of video. In this paper, we propose a method to compute meaningful motion changes in WCE by analyzing the obtained video frames based on regional optical flow estimation. The computed motion vectors are used to remove duplicate video frames caused by the imaging nature of WCE, such as repetitive forward-backward motions from peristaltic movements. The motion vectors are derived by calculating directional component vectors in four local regions. Our experiments are performed on the small intestine area, which is of main interest to clinical experts when using WCE, and the results show significant frame reductions compared with a simple frame-to-frame similarity-based image reduction method.
Keywords: Wireless capsule endoscopy, optical flow, duplicated image, duplicated frame.
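A minimal sketch, assuming OpenCV's Farnebäck dense optical flow, of computing motion vectors in four local regions (quadrants) and flagging near-duplicate frames; the quadrant split, threshold, and duplicate rule are illustrative assumptions rather than the paper's exact criteria.

```python
import numpy as np
import cv2  # OpenCV

def regional_motion(prev_gray, curr_gray):
    """Mean optical-flow vector in four local regions (quadrants) of the frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    regions = [flow[:h // 2, :w // 2], flow[:h // 2, w // 2:],
               flow[h // 2:, :w // 2], flow[h // 2:, w // 2:]]
    return np.array([r.reshape(-1, 2).mean(axis=0) for r in regions])

def is_duplicate(prev_gray, curr_gray, threshold=0.5):
    """Flag the current frame as a near-duplicate when the mean motion
    magnitude over the four regions is below the (illustrative) threshold."""
    vectors = regional_motion(prev_gray, curr_gray)
    return float(np.linalg.norm(vectors, axis=1).mean()) < threshold
```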
765 Applying the Regression Technique for Prediction of the Acute Heart Attack
Authors: Paria Soleimani, Arezoo Neshati
Abstract:
Myocardial infarction is one of the leading causes of death in the world, and some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply, and because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most start slowly with mild pain or discomfort, so early detection and successful treatment of these symptoms are vital. The importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks are therefore obvious. The main purpose of this study is to enable patients to become better informed about their condition and to encourage them to seek professional care at an earlier stage in the appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 clinical attributes that can be reported by patients were studied. Three logistic regression models were built on the basis of these 28 features to predict the risk of heart attack. The best-performing logistic regression model had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, and nausea and vomiting were selected as the main features.
Keywords: Coronary heart disease, acute heart attacks, prediction, logistic regression.
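A minimal sketch (not the paper's code or data) of fitting a logistic regression risk model and reporting the C-index, which for a binary outcome equals the area under the ROC curve, plus accuracy. The synthetic symptom data stand in for the 711-patient records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

# Synthetic stand-in for the 711-patient data set: binary symptom indicators
# (severe chest pain, back pain, cold sweats, shortness of breath, nausea, ...).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(711, 28)).astype(float)
true_w = np.zeros(28)
true_w[:5] = [2.0, 1.2, 1.0, 1.5, 0.8]
p = 1 / (1 + np.exp(-(X @ true_w - 2.0)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# For a binary outcome, the C-index equals the area under the ROC curve.
probs = model.predict_proba(X_te)[:, 1]
print("C-index:", roc_auc_score(y_te, probs))
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```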
764 Microservices-Based Provisioning and Control of Network Services for Heterogeneous Networks
Authors: Shameemraj M. Nadaf, Sipra Behera, Hemant K. Rath, Garima Mishra, Raja Mukhopadhyay, Sumanta Patro
Abstract:
Microservices architecture has been widely embraced for the rapid, frequent, and reliable delivery of complex applications, enabling organizations to evolve their technology stack in various domains. Today, the networking domain is flooded with a plethora of devices and software solutions addressing functionalities ranging from elementary operations, viz., switching, routing, and firewalling, to complex analytics- and insight-based intelligent services. In this paper, we bring the microservices-based approach to the agile and adaptive delivery of network services for any underlying networking technology. We discuss the life-cycle management of each individual microservice and a distributed control approach, with emphasis on dynamic provisioning, management, and orchestration in an automated fashion that can provide seamless operation in large-scale networks. We have validated the system in a lab testbed comprising traditional/legacy and software-defined wireless local area networks.
Keywords: Microservices architecture, software defined wireless networks, traditional wireless networks, automation, orchestration, intelligent networks, network analytics, seamless management, single pane control, fine-grain control.
763 FEM Simulation of Triple Diffusive Magnetohydrodynamics Effect of Nanofluid Flow over a Nonlinear Stretching Sheet
Authors: Rangoli Goyal, Rama Bhargava
Abstract:
The triple diffusive boundary layer flow of a nanofluid under the action of a constant magnetic field over a non-linear stretching sheet is investigated numerically. The model includes the effects of Brownian motion, thermophoresis, and cross-diffusion, the slip mechanisms primarily responsible for the enhancement of the convective features of nanofluids. The governing partial differential equations are transformed into a system of ordinary differential equations using group theory transformations and solved numerically by the variational finite element method. The effects of various controlling parameters, such as the magnetic influence number, thermophoresis parameter, Brownian motion parameter, modified Dufour parameter, and Dufour solutal Lewis number, on the fluid flow as well as on the heat and mass transfer coefficients (of both solute and nanofluid) are presented graphically and discussed quantitatively. The present study has industrial applications in the aerodynamic extrusion of plastic sheets, coating and suspensions, melt spinning, hot rolling, wire drawing, glass-fibre production, and the manufacture of polymer and rubber sheets, where the quality of the desired product depends on the stretching rate as well as external fields, including magnetic effects.
Keywords: FEM, Thermophoresis, Diffusiophoresis, Brownian motion.
762 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation
Authors: Ke He, Wumaier Parezhati, Haruka Yamashita
Abstract:
Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular shopping platforms in Asia. On these shopping websites, consumers can select products from a large number of stores, and they must register their name, age, gender, and other information in advance to access their account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing and applied in many document-classification settings to extract semantic relationships between documents and words; in our setting, documents represent consumers and words represent products. This concept is applicable to representing the relationship between users and items; however, one more factor, the shops, needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis for users and shops with that for users and items in the same feature space, which enables the calculation of similar shops and items for each user. We analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
Keywords: Doc2Vec, marketing, online marketplace, recommendation system.
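A minimal sketch, assuming a gensim-style Doc2Vec model, of one way users, items, and shops could share a feature space: each user's purchase history is a tagged document whose "words" are item and shop tokens, so user vectors live in the document space and item/shop vectors in the word space. The toy purchase data and parameter values are illustrative, not the paper's combined model.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each user's purchase history becomes one "document": the user ID is the tag,
# and every purchased item and the shop it came from are the "words".
purchases = {
    "user_1": [("item_a", "shop_x"), ("item_b", "shop_x")],
    "user_2": [("item_b", "shop_y"), ("item_c", "shop_y")],
    "user_3": [("item_a", "shop_x"), ("item_c", "shop_z")],
}
corpus = [TaggedDocument(words=[t for pair in hist for t in pair], tags=[user])
          for user, hist in purchases.items()]

model = Doc2Vec(corpus, vector_size=16, min_count=1, epochs=100, seed=1)

# Users live in model.dv, items and shops in model.wv, so similarities can be
# computed across all three entity types.
print(model.wv.most_similar("shop_x", topn=2))               # shops/items near shop_x
print(model.dv.most_similar([model.wv["item_a"]], topn=2))   # users close to item_a
```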
761 Optimization of Molasses Desugarization Process Using Steffen Method in Sugar Beet Factories
Authors: Simin Asadollahi, Mohammad Hossein Haddad Khodaparast
Abstract:
Molasses is one of the most important by-products of the sugar industry and contains a large amount of sucrose. The routine way to separate sucrose from molasses is the Steffen method. Since this method is very common in sugar factories, the aim of this research is its optimization. The optimization depends on three factors: reactor alkalinity, reactor temperature, and diluted molasses Brix. Accordingly, three stages were carried out:
- Construction of a pilot plant similar to the actual Steffen system in sugar factories
- Experimentation using the pilot plant
- Laboratory analysis
These experiments comprised 27 treatments in three replications. In each replication, the Brix, polarization, and purity of the saccharate syrup and of the hot and cold waste were measured. The results showed that diluted molasses Brix, reactor alkalinity, and reactor temperature had significant effects on saccharate purity and on the efficiency of molasses desugarization. The research was performed as a randomized complete design and analyzed with Duncan's multiple range test; significant differences between treatments were observed at the α = 5% level. The results indicated that the optimal conditions for molasses desugarization by the Steffen method are a diluted molasses Brix of 10, a reactor alkalinity of 10, and a reactor temperature of 8 °C.
Keywords: Molasses desugarization, Saccharate purity, Steffen process.
760 Flutter Analysis of Slender Beams with Variable Cross Sections Based on Integral Equation Formulation
Authors: Z. El Felsoufi, L. Azrar
Abstract:
This paper presents a mathematical model based on integral equations for the dynamic analysis and numerical investigation of non-uniform or multi-material composite beams. The beam is subjected to a sub-tangential follower force and an elastic foundation, and the boundary conditions are represented by generalized parameterized fixations through linear and rotary springs. A mathematical formulation based on Euler-Bernoulli beam theory is presented for beams with variable cross-sections. The non-uniform section introduces non-uniformity in the rigidity and inertia of the beam and, consequently, a more complicated governing equilibrium equation. Using the boundary element method and radial basis functions, the equation of motion is reduced to an algebro-differential system related to internal and boundary unknowns. Generalized formulas for the deflection, slope, moment, and shear force are presented. The free vibration of non-uniform loaded beams is formulated in a compact matrix form, and all needed matrices are explicitly given. The dynamic stability analysis of slender beams is illustrated numerically based on the coalescence criterion, and a realistic case related to an industrial chimney is investigated.
Keywords: Chimney, BEM and integral equation formulation, non-uniform cross-section, vibration, flutter.
759 Reduction of Power Losses in Distribution Systems
Authors: Y. Al-Mahroqi, I.A. Metwally, A. Al-Hinai, A. Al-Badi
Abstract:
Loss-reduction initiatives in distribution systems have been activated due to the increasing cost of supplying electricity, the shortage of fuel and the ever-increasing cost of producing more power, and global warming concerns. These initiatives have been introduced to the utilities in the shape of incentives and penalties. Recently, the electricity distribution companies in Oman have been incentivized to reduce distribution technical and non-technical losses at an equal annual reduction rate for 6 years. In this paper, different techniques for loss reduction in the Mazoon Electricity Company (MZEC) are addressed. In this company, a high number of substations and feeders were found to be non-compliant with the Distribution System Security Standard (DSSS). Therefore, 33 projects have been suggested to bring the 29 non-complying substations and 28 feeders up to the planned criteria and into compliance with the DSSS. The largest part of MZEC's network (the South Batinah region) was modeled with the ETAP software package, and the model was extended to implement the proposed projects and examine their effect on loss reduction. Simulation results show that the implementation of these projects leads to a significant improvement in the voltage profile and a reduction in the active and reactive power losses. Finally, the economic analysis reveals that implementing the proposed projects in MZEC leads to an annual saving of about US$ 5 million.
Keywords: Losses reduction, technical losses, non-technical losses, cost analysis.
758 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach
Authors: Mukesh Kumar Shah, Tushar Gupta
Abstract:
An upgraded cuckoo search algorithm is proposed here to solve optimization problems, based on improvements to earlier versions of the cuckoo search algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version: solutions are initialized with an improved lambda-iteration relaxation method, a random Gaussian distribution walk improves local search, greedy selection accelerates convergence to the optimized solution, and a "study nearby strategy" improves global search performance by avoiding entrapment in local optima. A crossover operation is further proposed to generate better solutions. The strategies used in the algorithm show superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the superiority of the proposed algorithm over the established ones. The algorithm is also capable of handling larger unit systems.
Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.
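A generic sketch of a cuckoo-search loop with a Gaussian random walk, greedy selection, and re-seeding of abandoned nests near the current best (a "study nearby" flavour); it omits the paper's lambda-iteration initialization and crossover, and the sphere objective stands in for an economic-dispatch cost with ramp and prohibited-zone constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Stand-in objective; an economic-dispatch cost function would replace it.
    return float(np.sum(x ** 2))

def cuckoo_search(obj, dim=6, n_nests=25, iterations=200, pa=0.25, bounds=(-10, 10)):
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    fitness = np.array([obj(n) for n in nests])
    for _ in range(iterations):
        # Gaussian random walk around each nest (local search step).
        step = rng.normal(0, 1, size=nests.shape) * (hi - lo) * 0.01
        candidates = np.clip(nests + step, lo, hi)
        cand_fit = np.array([obj(c) for c in candidates])
        # Greedy selection: keep a candidate only if it improves the nest.
        improved = cand_fit < fitness
        nests[improved], fitness[improved] = candidates[improved], cand_fit[improved]
        # Abandon a fraction pa of the worst nests and re-seed them near the
        # current best solution rather than at random ("study nearby" flavour).
        n_abandon = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_abandon:]
        best = nests[fitness.argmin()]
        nests[worst] = np.clip(best + rng.normal(0, 0.5, size=(n_abandon, dim)), lo, hi)
        fitness[worst] = np.array([obj(n) for n in nests[worst]])
    return nests[fitness.argmin()], fitness.min()

best_x, best_f = cuckoo_search(sphere)
print(best_f)
```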
757 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-
Authors: Nieto Bernal Wilson, Carmona Suarez Edgar
Abstract:
Organizations hold structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies, partly because there is little interest in extracting knowledge from their data sources and partly because of the absence of operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision making and research. This paper presents a structured, simple methodology inspired by agile development models such as Scrum, XP, and AUP. It also draws on object-relational and spatial data models and on the baseline of data modeling under UML and Big Data, thereby seeking to deliver an agile methodology for the development of data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for generating patterns and models derived from the structured fact objects.
Keywords: Data warehouse, data model, big data, object fact, object-relational fact, data warehouse development process.
756 Numerical Simulation for a Shallow Braced Excavation of Campus Building
Authors: Sao-Jeng Chao, Wen-Cheng Chen, Wei-Humg Lu
Abstract:
In order to avoid encountering unpredictable factors, geotechnical engineers always conduct numerical analyses when designing braced excavations. Simulation work carried out in advance can predict the response of the subsequent excavation, and the design can thus increase the safety factor of the construction. The parameters considered include geological conditions, soil properties, soil distributions, loading types, and the analysis and design methods. National Ilan University is located on the LanYang plain, mainly deposited with clayey soil and loose sand, and is thus vulnerable to displacement caused by external influences. National Ilan University carried out a braced excavation with a complete excavation monitoring program. This study takes advantage of the one-dimensional finite element program RIDO to simulate the excavation process, and the predicted results from the numerical simulation are compared with the monitored construction results to explore the differences between them. Numerical simulation of the excavation process can be used to analyze the retaining structures and to understand the relationship between displacement and the supporting system. The resulting deformation and stress distribution of the braced excavation can then be understood in advance, so problems can be prevented prior to construction and all the important influencing factors can be accounted for during design and construction.
Keywords: Excavation, numerical simulation, RIDO, retaining structure.
755 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study
Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed
Abstract:
This paper compares the substructure and direct approaches for soil-structure interaction (SSI) analysis in the time domain. In the substructure approach, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency-dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the coupled soil-structure system. To explore the potential limitations of the substructure modeling process, a two-dimensional (2D) reinforced concrete frame structure is modeled and analyzed using the direct and substructure approaches. The results show a discrepancy between the simulated responses of the direct and substructure models. It is concluded that the main source of discrepancy is likely the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function that considers the presence of the superstructure should be developed and is expected to improve the simulation accuracy of the substructure approach.
Keywords: Direct approach, impedance function, massless rigid foundation, soil-structure interaction, substructure approach.
754 Performance Assessment of Computational Grid on Weather Indices from HOAPS Data
Authors: Madhuri Bhavsar, Anupam K Singh, Shrikant Pradhan
Abstract:
Long-term rainfall analysis and prediction is a challenging task, especially in the modern world where the impact of global warming is creating complications in environmental issues. These data-intensive factors require high-performance computational modeling for accurate prediction. This paper describes a prototype designed and developed in a grid environment using a number of coupled software infrastructural building blocks. This grid-enabled system provides the demanding computational power, efficiency, resources, user-friendly interface, secure job submission, and high throughput. The results obtained using sequential execution and grid-enabled execution show that computational performance improved by 36% to 75% for a decade of climate parameters. The large variation in performance can be attributed to the varying degree of computational resources available for job execution. Grid computing enables the dynamic runtime selection, sharing, and aggregation of distributed and autonomous resources, which plays an important role not only in business but also in scientific applications and social settings. This paper explores grid-enabled computing capabilities on weather indices from HOAPS data for climate impact modeling and change detection.
Keywords: Climate model, computational grid, grid application, heterogeneous grid.
753 Mathieu Stability of Offshore Buoyant Leg Storage and Regasification Platform
Authors: S. Chandrasekaran, P. A. Kiran
Abstract:
Increasing demand for large-sized Floating Storage and Regasification Units (FSRUs) in the oil and gas industries has led to the development of the novel geometric form of the Buoyant Leg Storage and Regasification Platform (BLSRP). The BLSRP consists of a circular deck supported by six buoyant legs placed symmetrically with respect to the wave direction. The circular deck is connected to the buoyant legs using hinged joints, which restrain the transfer of rotational response from the legs to the deck and vice versa. The buoyant legs are connected to the seabed using a taut-moored system with high initial pretension, enabling rigid-body motion in the vertical plane. Encountered environmental loads induce dynamic tether tension variations, which in turn affect the stability of the platform. The present study investigates the Mathieu stability of the BLSRP under postulated tether pullout cases by inducing additional tension in the tethers. The numerical studies carried out show that a postulated tether pullout on any one of the buoyant legs does not result in Mathieu-type instability, even under excessive tether tension, due to the presence of the hinged joints, which are capable of dissipating the unbalanced loads to the other legs. However, under tether pullout of consecutive buoyant legs, Mathieu-type instability is observed.
Keywords: Offshore platforms, stability, postulated failure, dynamic tether tension.
752 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method
Authors: M. T. Tsepav, Y. Adamu, M. A. Umar
Abstract:
A geoelectric survey was carried out in parts of Angwan Gwari, on the outskirts of Lapai Local Government Area of Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity, and protective capacity of the area, from which an aquifer characterisation was made. A G41 resistivity meter was employed to obtain fifteen Schlumberger vertical electrical sounding (VES) data sets along profiles in a square grid network. The data were processed using the Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising the apparent resistivities, overburden thicknesses, and depths. This information was used to evaluate the longitudinal conductance and transmissivities of the layers. The results show generally low resistivities across the survey area, with the average longitudinal conductance varying from 0.0237 Siemens at VES 6 to 0.1261 Siemens at VES 15, and almost the entire area giving values of less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω.m2 at VES 4 to 299070 Ω.m2 at VES 1. All but VES 4 and VES 14 had an average value greater than 400 Ω.m2. These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
Keywords: Geoelectric survey, corrosivity, protective capacity, transmissivity.
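A small sketch of the layer parameters evaluated in the abstract: total longitudinal conductance S (the protective-capacity indicator, in Siemens) and transverse resistance T (the quantity reported in Ω.m2 and used here as a transmissivity indicator). The layer resistivities and thicknesses below are illustrative, not the field data.

```python
# Dar Zarrouk parameters for a layered VES model:
#   longitudinal conductance  S = sum(h_i / rho_i)   [Siemens]
#   transverse resistance     T = sum(h_i * rho_i)   [ohm-m^2]
# Example layer values only; larger S generally implies better protective capacity.
layers = [  # (resistivity in ohm-m, thickness in m)
    (45.0, 2.5),
    (120.0, 6.0),
    (300.0, 15.0),
]

S = sum(h / rho for rho, h in layers)
T = sum(h * rho for rho, h in layers)

print(f"total longitudinal conductance: {S:.4f} S")
print(f"total transverse resistance:    {T:.1f} ohm-m^2")
```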
751 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement
Authors: Rhadinia Tayag-Relanes, Felina C. Young
Abstract:
This study was conducted to analyze the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study utilized the Plan, Do, Check, Act (PDCA) approach and record review to gather data for the calendar year 2019, specifically from August to October, focusing on the noodle products miki, canton, and misua. A causal-comparative research design was employed to establish cause-effect relationships among the variables, using descriptive statistics and correlation to analyze the data gathered. The findings indicate that miki, canton, and misua production have distinct cycle times and production outputs in each of their production processes, as well as varying levels of wastage. The company has not yet established a formal allowable rejection rate for wastage; instead, this paper used a 1% wastage limit. We recommend the following: the machines used in each process of each noodle product must be consistently maintained and monitored; all production operators should be assessed statistically based on their output and machine performance; a root cause analysis must be conducted to identify solutions to production issues; and an improved recording system for the input and output of the production process of each noodle product should be established to eliminate poor recording of data.
Keywords: Production, continuous improvement, process, operations, Plan, Do, Check, Act approach.
750 Analysis of One-Way and Two-Way FSI Approaches to Characterise the Flow Regime and the Mechanical Behaviour during Closing Manoeuvring Operation of a Butterfly Valve
Authors: M. Ezkurra, J. A. Esnaola, M. Martinez-Agirre, U. Etxeberria, U. Lertxundi, L. Colomo, M. Begiristain, I. Zurutuza
Abstract:
Butterfly valves are widely used industrial piping components, serving as on-off and flow-controlling devices. The main challenge in the design process for this type of valve is correct dimensioning to ensure proper mechanical performance as well as to minimise the flow losses that affect the efficiency of the system. Butterfly valves are typically dimensioned in the closed position using mechanical approaches that assume uniform hydrostatic pressure, whereas the flow losses are analysed by means of CFD simulations. The main limitation of these approaches is that they consider neither the influence of the dynamics of the manoeuvring stage nor coupled phenomena. Recent works have included the influence of the flow on the mechanical behaviour at different opening angles by means of a one-way FSI approach. However, these works consider steady-state flow at the selected angles and do not capture the effect of the transient flow evolution during the manoeuvring stage. A two-way FSI modelling approach could overcome such limitations and provide more accurate results; nevertheless, its use is limited by the increase in computational cost. In the present work, the applicability of the one-way and two-way FSI approaches is evaluated for the analysis of butterfly valves, showing that neglecting fluid-structure coupling means failing to capture the most critical situation for the valve disc.
Keywords: Butterfly valves, fluid-structure interaction, one-way approach, two-way approach.
749 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm
Authors: Xiang Jianhong, Wang Cong, Wang Linyu
Abstract:
With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
Keywords: Telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal.
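A minimal sketch of the compressed-sensing pipeline named in the abstract, with a Bernoulli measurement matrix and greedy sparse recovery; a DCT dictionary and scikit-learn's standard orthogonal matching pursuit stand in for the rbio5.5 wavelet basis and the proposed JBMOLS algorithm, and the signal length, measurement count, and sparsity level are illustrative.

```python
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                     # signal length, measurements, sparsity

# Synthetic signal that is k-sparse in a DCT dictionary (a stand-in for the
# rbio5.5 wavelet basis used in the paper).
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
Psi = idct(np.eye(n), norm='ortho', axis=0)   # synthesis dictionary
x = Psi @ coeffs

# Bernoulli (+/-1) measurement matrix and compressed measurements.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x

# Greedy recovery with standard OMP (the paper's JBMOLS adds joint block structure).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi @ Psi, y)
x_hat = Psi @ omp.coef_
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```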
748 Diagnosing Dangerous Arrhythmia of Patients by Automatic Detecting of QRS Complexes in ECG
Authors: Jia-Rong Yeh, Ai-Hsien Li, Jiann-Shing Shieh, Yen-An Su, Chi-Yu Yang
Abstract:
In this paper, an automatic QRS complex detection algorithm is applied to the analysis of ECG recordings, and five criteria for diagnosing dangerous arrhythmia are applied in a protocol-type automatic arrhythmia diagnosis system. The detection algorithm locates the distribution of QRS complexes in ECG recordings and extracts related information, such as heart rate and RR interval. In this investigation, twenty sampled ECG recordings of patients with different pathological conditions were collected for off-line analysis. A combined application of four digital filters for improving the ECG signal and raising the QRS detection rate is proposed as pre-processing; both hardware filters and digital filters were applied to eliminate the different types of noise mixed with the ECG recordings. The automatic detection algorithm was then applied to verify the distribution of QRS complexes. Finally, the quantitative clinical criteria for diagnosing arrhythmia were programmed into a practical application for automatic arrhythmia diagnosis as a post-processor. The diagnoses produced by the automatic system were compared with off-line diagnoses made by experienced clinical physicians, and the comparison showed that the automatic dangerous-arrhythmia diagnosis achieved a matching rate of 95% with the experienced physicians' diagnoses.
Keywords: Signal processing, electrocardiography (ECG), QRS complex, arrhythmia.
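The abstract does not name its specific detector, so the sketch below uses a common Pan-Tompkins-style pipeline (band-pass filter, differentiate, square, moving-window integration, peak picking) as a stand-in; the cut-off frequencies, window length, sampling rate, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs=360):
    """Pan-Tompkins-style QRS detection: band-pass filter, differentiate,
    square, moving-window integrate, then pick peaks; returns sample indices."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, ecg)
    squared = np.diff(filtered) ** 2
    window = int(0.15 * fs)
    integrated = np.convolve(squared, np.ones(window) / window, mode='same')
    threshold = 0.5 * integrated.max()         # crude fixed threshold for the sketch
    peaks, _ = find_peaks(integrated, height=threshold, distance=int(0.25 * fs))
    return peaks

def heart_rate_and_rr(peaks, fs=360):
    """Heart rate (bpm) and RR intervals (s) from detected QRS locations."""
    rr = np.diff(peaks) / fs
    return 60.0 / rr.mean(), rr
```

A real detector would replace the fixed threshold with the adaptive signal/noise thresholds used in practice, but the stages shown are the ones an arrhythmia post-processor would consume (QRS positions, heart rate, RR intervals).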
747 Interoperability Maturity Models for Consideration When Using School Management Systems in South Africa: A Scoping Review
Authors: Keneilwe Maremi, Marlien Herselman, Adele Botha
Abstract:
The main purpose of this paper is to determine which Interoperability Maturity Models to consider when using School Management Systems (SMS). This is important in order to inform and help schools decide which Interoperability Maturity Model is best suited to their SMS. To address this purpose, the paper applies a scoping review to ensure that all aspects are covered. The scoping review includes papers published from 2012 to 2019, and the different types of Interoperability Maturity Models are compared in detail, covering their background information, their levels of interoperability, and the areas of consideration in each Maturity Model. The literature was obtained from the IEEE Xplore and Scopus databases and from the Harzing's and Google Scholar search engines. The topic of the paper was used as a search term for the literature, and the term 'Interoperability Maturity Models' was used as a keyword. The data were analyzed in terms of the definition of interoperability, Interoperability Maturity Models, and levels of interoperability. The results provide a table that shows the focus area of concern for each Maturity Model, based on the scoping review in which only 24 papers out of 740 publications initially identified in the field were found to be suited to the study. This resulted in the most discussed Interoperability Maturity Models being put forward for consideration: the Information Systems Interoperability Maturity Model (ISIMM) and the Organizational Interoperability Maturity Model for C2 (OIM).
Keywords: Interoperability, Interoperability Maturity Model, School Management System, scoping review.
746 Low Resolution Single Neural Network Based Face Recognition
Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum
Abstract:
This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part blurs the original image using an average filter and equalizes its histogram (lighting normalization); a bi-cubic interpolation function is then applied to the equalized image to resize it. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize the familiar faces. A key feature of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results give accuracies of 94.50%, 93.00%, and 90.25% for 20, 30, and 40 subjects respectively, with a time delay of 0.0934 s per image.
Keywords: Average filtering, bicubic interpolation, neurons, vectorization.
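A minimal sketch of the preprocessing chain described above (average filter, histogram equalization, bicubic down-sampling) feeding a single back-propagation network. Scikit-learn's MLPClassifier with SGD, momentum, and an adaptive learning rate stands in for the paper's specific three-layer network with log-sigmoid, tan-sigmoid, and linear transfer functions, and the random images stand in for the ORL data; the target resolution and layer sizes are assumptions.

```python
import numpy as np
import cv2
from sklearn.neural_network import MLPClassifier

def preprocess(face, size=(23, 28)):
    """Average-filter blur, histogram equalization, then bicubic down-sampling
    to a low-resolution image whose pixels form the feature vector."""
    blurred = cv2.blur(face, (3, 3))                      # average (box) filter
    equalized = cv2.equalizeHist(blurred)                 # lighting normalization
    small = cv2.resize(equalized, size, interpolation=cv2.INTER_CUBIC)
    return small.astype(float).ravel() / 255.0

# Toy usage with random 8-bit "faces"; in practice the ORL images would be loaded.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 112, 92), dtype=np.uint8)
labels = np.repeat(np.arange(8), 5)                       # 8 subjects x 5 images each

X = np.stack([preprocess(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(60, 30), solver='sgd', momentum=0.9,
                    learning_rate='adaptive', max_iter=500, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```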
745 Scenarios for a Sustainable Energy Supply: Results of a Case Study for Austria
Authors: Petra Wächter
Abstract:
A comprehensive discussion of feasible strategies for a sustainable energy supply is urgently needed to achieve a turnaround in the current energy situation. The fundamentals required for the development of a long-term energy vision are largely lacking due to the absence of reasonable long-term scenarios that fulfill the requirements of climate protection and sustainable energy use. This study contributes a search for long-run sustainable energy paths for Austria, making use predominantly of secondary data. The measures developed to avoid CO2 emissions and other ecological risk factors vary greatly among economic sectors, as shown by the calculation of CO2 abatement cost curves. The study demonstrates that the most effective technical measures with the lowest CO2 abatement costs yield solutions to the current energy problems. Various scenarios are presented concerning the question of how the technological and environmental options for a sustainable energy system for Austria might look in the long run, showing how sustainable energy can be supplied even with today's technological knowledge and available options. The scenarios developed include an evaluation of economic costs and ecological impacts, and the results are not only applicable to Austria but demonstrate feasible and cost-efficient ways towards a sustainable future.
Keywords: Cost of CO2 Abatement, Energy Economics, Energy Efficiency, Renewable Energy Technologies, Sustainable Energy and Development.
744 A Review on Factors Influencing Implementation of Secure Software Development Practices
Authors: Sri Lakshmi Kanniah, Mohd Naz’ri Mahrin
Abstract:
More and more businesses and services depend on software to run their daily operations and business services. At the same time, cyber-attacks are becoming more covert and sophisticated, posing threats to software. Vulnerabilities exist in software due to the lack of security practices during the phases of software development, and implementing secure software development practices can improve resistance to attacks. Many methods, models, and standards for secure software development have been developed; however, despite these efforts, they still come up against difficulties in their deployment, and the processes are not institutionalized. A set of factors influences the successful deployment of secure software development processes. In this study, the methodology and results of a systematic literature review of factors influencing the implementation of secure software development practices are described. A total of 44 primary studies were analysed, and a list of twenty factors was identified. Some of the factors that affect the implementation of secure software development practices are: involvement of a security expert, integration between the security and development teams, developers' skill and expertise, development time, and communication between stakeholders. The factors were further classified into four categories: institutional context, people and action, project content, and system development process. The results show that it is important to take organizational, technical, and people issues into account in order to implement secure software development initiatives.
Keywords: Secure software development, software development, software security, systematic literature review.
743 Computational Investigation of Air-Gas Venturi Mixer for Powered Bi-Fuel Diesel Engine
Authors: Mofid Gorjibandpy, Mehdi Kazemi Sangsereki
Abstract:
In a bi-fuel diesel engine, the carburetor plays a vital role in switching from fuel-gas to petrol mode operation and vice versa, and it is the most important part of the fuel system of such an engine. All these engines carry variable-venturi mixer carburetors. The basic operation of the carburetor mainly depends on the restriction barrel called the venturi: when air flows through the venturi, its speed increases and its pressure decreases. The main challenge is to design a mixing device which mixes the supplied gas with the incoming air at an optimum ratio. In order to surmount the identified problems, the way fuel gas and air flow in the mixer has to be analyzed; in this case, the computational fluid dynamics (CFD) approach is applied to the design of the prototype mixer. The present work is aimed at a further understanding of the air and fuel flow structure by performing CFD studies using a software code. Several mixers were designed for mixing air and gas under the conditions mentioned above, and the optimum mixer was then selected using CFD. The results indicate that the mixer with 12 holes produces a more homogeneous mixture than the 8-hole and 6-hole mixers. The results also show that if the inlet convergence is smoother than the outlet divergence, the mixture becomes more homogeneous, because turbulence increases in the outlet divergence.
Keywords: Computational fluid dynamics, venturi mixer, air-fuel ratio, turbulence.
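As a back-of-the-envelope illustration of the venturi principle stated above (velocity rises and pressure falls at the throat), the sketch below applies continuity and incompressible Bernoulli; all dimensions and flow values are example numbers, not taken from the CFD study.

```python
import math

# One-dimensional estimate of the suction depression at a venturi throat.
rho_air = 1.2                     # kg/m^3, air density (example value)
Q_air = 0.05                      # m^3/s, volumetric air flow (example value)
d_inlet, d_throat = 0.06, 0.03    # m, example inlet and throat diameters

A_inlet = math.pi * d_inlet ** 2 / 4
A_throat = math.pi * d_throat ** 2 / 4
v_inlet, v_throat = Q_air / A_inlet, Q_air / A_throat   # continuity: v = Q / A

# Bernoulli (incompressible, no losses): p_inlet - p_throat = 0.5*rho*(v_throat^2 - v_inlet^2)
dp = 0.5 * rho_air * (v_throat ** 2 - v_inlet ** 2)
print(f"throat velocity: {v_throat:.1f} m/s, suction depression: {dp:.0f} Pa")
```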
742 Pattern Recognition Based Prosthesis Control for Movement of Forearms Using Surface and Intramuscular EMG Signals
Authors: Anjana Goen, D. C. Tiwari
Abstract:
A myoelectric control system is the fundamental component of modern prostheses; it uses myoelectric signals from an individual's muscles to control the prosthesis movements. The surface electromyogram (sEMG) signal, being noninvasive, has been used as an input to prosthesis controllers for many years. Recent technological advances have led to the development of implantable myoelectric sensors, which enable the internal myoelectric signal (MES) to be used as an input to these controllers. Intramuscular measurement can provide focal recordings from deep muscles of the forearm and independent signals relatively free of crosstalk, thus allowing for more independent control sites. However, little work has been done to compare the two inputs. In this paper, we compare the classification accuracy of six pattern recognition-based myoelectric controllers which use surface myoelectric signals recorded with untargeted (symmetric) surface electrode arrays against the same controllers with multichannel intramuscular myoelectric signals from targeted intramuscular electrodes as inputs. There was no significant enhancement in classification accuracy from using the intramuscular EMG measurement technique compared to the surface EMG measurement technique. Impressive classification accuracy (99%) could be achieved by optimally selecting only five channels of surface EMG.
Keywords: Discriminant Locality Preserving Projections (DLPP), myoelectric signal (MES), Sparse Principal Component Analysis (SPCA), Time Frequency Representations (TFRs).
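A minimal sketch of a generic pattern-recognition myoelectric pipeline (windowed time-domain features plus an LDA classifier) that could be fed either surface or intramuscular channels; it is not the paper's DLPP/SPCA/TFR approach, and the synthetic two-class signals, window length, and channel count are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window):
    """Classic time-domain EMG features for one window of one channel:
    mean absolute value, waveform length, zero crossings, slope-sign changes."""
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(np.diff(np.signbit(window).astype(int)) != 0)
    ssc = np.sum(np.diff(np.sign(np.diff(window))) != 0)
    return [mav, wl, zc, ssc]

def feature_matrix(emg, fs=1000, win=0.2):
    """emg: (n_samples, n_channels); returns one feature row per analysis window."""
    step = int(win * fs)
    rows = []
    for start in range(0, emg.shape[0] - step + 1, step):
        seg = emg[start:start + step]
        rows.append(np.concatenate([td_features(seg[:, ch]) for ch in range(seg.shape[1])]))
    return np.array(rows)

# Toy usage: two synthetic "movement classes" on 5 channels, classified with LDA.
rng = np.random.default_rng(0)
class_a = rng.normal(0, 1.0, size=(4000, 5))
class_b = rng.normal(0, 2.0, size=(4000, 5))
X = np.vstack([feature_matrix(class_a), feature_matrix(class_b)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))
print(LinearDiscriminantAnalysis().fit(X, y).score(X, y))
```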
741 Precision Grinding of Titanium (Ti-6Al-4V) Alloy Using Nanolubrication
Authors: Ahmed A. D. Sarhan, Hong Wan Ping, M. Sayuti
Abstract:
In the current era of competitive machinery production, industries place increasing emphasis on product quality and cost reduction whilst abiding by pollution-prevention policy. In addressing these concerns, industries are aware that the effectiveness of existing lubrication systems must be improved to achieve power-efficient and pollution-preventing machining processes. This research therefore studies a plausible solution for grinding titanium alloy (Ti-6Al-4V) using nanolubrication as an alternative to flood grinding. The aim is to evaluate the optimum conditions for grinding force and surface roughness when a minimum quantity lubrication (MQL) system delivers nano-oil consisting of normal mineral oil mixed with different weight concentrations of silicon dioxide (SiO2). The Taguchi Design of Experiment (DoE) method is applied with a standard L16(4^3) orthogonal array to find the optimal combination of SiO2 weight concentration, nozzle orientation, and MQL pressure. Surface roughness and grinding force are analyzed using the signal-to-noise (S/N) ratio to determine the best level of each factor tested. The best combination of parameters is then tested over a period of time, and the results are compared with the conventional dry and flood grinding methods. The results show a positive performance of MQL nanolubrication.
Keywords: Grinding, MQL, precision grinding, Taguchi optimization, titanium alloy.
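A minimal sketch of the smaller-the-better Taguchi signal-to-noise ratio used to rank factor levels for responses such as surface roughness or grinding force; the SiO2 concentration levels and roughness values below are illustrative examples, not the study's measurements.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for 'smaller-is-better' responses such as
    surface roughness or grinding force: S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Illustrative replicate roughness measurements (um) for three hypothetical
# SiO2 weight-concentration levels.
trials = {
    "0.0 wt%": [0.42, 0.45, 0.44],
    "0.5 wt%": [0.31, 0.33, 0.30],
    "1.0 wt%": [0.36, 0.35, 0.38],
}
for level, ys in trials.items():
    print(level, "S/N =", round(sn_smaller_is_better(ys), 2), "dB")
# The factor level with the highest S/N ratio is taken as the better setting.
```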
740 Locating Cultural Centers in Shiraz (Iran) Applying Geographic Information System (GIS)
Authors: R. Mokhtari Malekabadi, S. Ghaed Rahmati, S. Aram
Abstract:
Optimal cultural site selection is one way to promote citizenship culture while also supporting the health and leisure of city residents. This study examines the social and cultural needs of the community and optimal cultural site allocation and, after identifying the problems and shortcomings, provides a suitable model for finding the locations for these centers that have the greatest impact on the promotion of citizenship culture. Non-scientific methods cause irreversible impacts on the urban environment and citizens, whereas modern, efficient methods can reduce these impacts; one such method is the use of geographical information systems (GIS). In this study, the Analytical Hierarchy Process (AHP) method was used to locate the optimal cultural sites; in AHP, three principles are used: decomposition, comparative analysis, and combining preferences. The objectives of this research include providing optimal settings for Shiraz residents to spend their time and perform cultural activities, and proposing the construction of cultural sites in different areas of the city. The results show the correct positioning of cultural sites based on the social needs of citizens; thus, considering population parameters and access radii, the GIS and AHP model for locating cultural centers can meet the social needs of citizens.
Keywords: Analytical Hierarchy Process (AHP), geographical information systems (GIS), cultural site, locating, Shiraz.
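A minimal sketch of the AHP steps named above: priority weights from the principal eigenvector of a pairwise-comparison matrix, plus a consistency check; the example criteria and judgements are illustrative, not the paper's.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix (principal
    eigenvector), plus the consistency ratio CR = CI / RI."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.45)  # Saaty's random indices (subset)
    return w, ci / ri

# Illustrative criteria for locating a cultural centre: population density,
# access radius, land availability (example judgements only).
criteria = [[1,     3,   5],
            [1 / 3, 1,   2],
            [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights(criteria)
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```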