Search results for: "Power system voltage control using lp and artificial neural network"
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14055

525 Energy-Aware Scheduling in Real-Time Systems: An Analysis of Fair Share Scheduling and Priority-Driven Preemptive Scheduling

Authors: Su Xiaohan, Jin Chicheng, Liu Yijing, Burra Venkata Durga Kumar

Abstract:

Energy-aware scheduling in real-time systems aims to minimize energy consumption, but resource reservation and timing constraints remain open challenges. This study analyzes two scheduling algorithms, Fair-Share Scheduling (FFS) and Priority-Driven Preemptive Scheduling (PDPS), with respect to these issues and to energy-aware scheduling in real-time systems. The analysis of both algorithms shows that FFS ensures fair allocation of resources but degrades under an imbalanced system load, while PDPS prioritizes tasks based on criticality and meets timing constraints through preemption, but relies heavily on task prioritization and may not be energy efficient. Improvements to both algorithms with energy-aware features are therefore proposed. Future work should focus on developing hybrid scheduling techniques that minimize energy consumption through intelligent task prioritization, resource allocation, and adherence to timing constraints.
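
As a concrete illustration of the priority-driven preemptive idea discussed above, the following minimal Python sketch runs a hypothetical task set on a single core and tallies an assumed fixed energy cost per busy tick; it is not the algorithm analyzed in the paper.

    import heapq

    # Minimal sketch: priority-driven preemptive scheduling on one core with an
    # assumed constant energy cost per busy tick (a real model would vary with frequency).
    TASKS = [(0, 2, 4, "logging"), (1, 0, 3, "sensor"), (2, 1, 2, "control")]  # (release, priority, ticks, name)
    ENERGY_PER_BUSY_TICK = 1.0

    def simulate(tasks, horizon=12):
        ready, schedule, energy = [], [], 0.0
        pending = sorted(tasks)                      # ordered by release time
        for t in range(horizon):
            while pending and pending[0][0] <= t:    # release newly arrived tasks
                rel, prio, ticks, name = pending.pop(0)
                heapq.heappush(ready, (prio, rel, ticks, name))
            if ready:
                prio, rel, ticks, name = heapq.heappop(ready)   # most critical ready task runs, preempting others
                schedule.append((t, name))
                energy += ENERGY_PER_BUSY_TICK
                if ticks > 1:
                    heapq.heappush(ready, (prio, rel, ticks - 1, name))
            else:
                schedule.append((t, "idle"))         # idle ticks cost no energy in this sketch
        return schedule, energy

    timeline, energy_used = simulate(TASKS)
    print(timeline)
    print("energy used:", energy_used)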

Keywords: Energy-aware scheduling, fair-share scheduling, priority-driven preemptive scheduling, real-time systems, optimization, resource reservation, timing constraints.

524 Numerical Simulation of Three-Dimensional Cavitating Turbulent Flow in Francis Turbines with ANSYS

Authors: Raza Abdulla Saeed

Abstract:

In this study, the three-dimensional cavitating turbulent flow in a complete Francis turbine is simulated using a mixture model for cavity/liquid two-phase flows. Numerical analysis is carried out using ANSYS CFX release 12, and the standard k-ε turbulence model is adopted. The computational fluid domain consists of the spiral casing, stay vanes, guide vanes, runner and draft tube, and is discretized with a three-dimensional unstructured tetrahedral mesh. The finite volume method (FVM) is used to solve the governing equations of the mixture model. Results of cavitation on the runner's blades under three different boundary conditions are presented and discussed. The numerical results show that the method successfully simulates the cavitating two-phase turbulent flow through a Francis turbine, and that cavitation is clearly predicted in the form of water vapor formation inside the turbine. Comparing the numerical predictions with a real runner shows that the region of higher vapor volume fraction obtained by simulation is consistent with the region of cavitation damage on the runner.

Keywords: Computational Fluid Dynamics, Hydraulic Francis Turbine, Numerical Simulation, Two-Phase Mixture Cavitation Model.

523 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet

Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia

Abstract:

Industry data centers often need to sync data changes reliably and instantly from a large number of heterogeneous autonomous relational databases accessed via the not-so-reliable Internet, for which a practical generic sync middleware of low maintenance and operation cost is much wanted. To meet this demand, this paper presents a generic sync middleware system (GSMS), developed, applied and optimized since 2006, built on the following principles and advantages: it is SyncML-compliant and transparent to the application-layer logic without referring to implementation details of the synced databases, it does not rely on the operating systems of the host computers, and its construction is lightweight and hence of low cost. Against these hard commitments, the paper stresses a significant optimization breakthrough: the GSMS sync delay is well below a fraction of a millisecond per synced record. A series of ultimate tests of GSMS sync performance was conducted as a persuasive example, in which the source relational database underwent a broad range of write loads (from one thousand to one million intensive writes within a few minutes). All these tests showed that the performance of GSMS remains competent and smooth even under ultimate write loads.

Keywords: Heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization.

522 Nutritional Value Determination of Different Varieties of Oats and Barley Using Near-Infrared Spectroscopy Method for the Horses Nutrition

Authors: V. Viliene, V. Sasyte, A. Raceviciute-Stupeliene, R. Gruzauskas

Abstract:

In horse nutrition, the most suitable cereals for ration composition are oats and barley. Oats have high nutritive value because they provide more protein, fiber, iron and zinc than other whole grains, have good palatability, and stimulate metabolic changes in the body. Barley is similar to oats as a feed except for some characteristics that affect how it is used: it is lower in fiber than oats and is classified as a "heavy" feed. The value of oat and barley grain depends primarily on its composition. Near-infrared spectroscopy (NIRS) has long been considered and used as a significant method in component and quality analysis and as an emerging technology for authenticity applications in cereal quality control. This paper presents the chemical and amino acid composition of different varieties of barley and oats, as well as the digestible energy of the different cereals for horses. Ten spring varieties of barley (n = 5) and oats (n = 5), grown in one location in Lithuania, were assayed for their chemical composition (dry matter, crude protein, crude fat, crude ash, crude fiber, starch), amino acid content, digestible amino acids and amino acid digestibility. The digestible energy of the grains for horses was also calculated. The reflectance spectra of the oat and barley samples were measured by NIRS using Foss-Tecator DS2500 equipment. The chemical components fat, crude protein, starch and fiber differed statistically (P<0.05) between the oat and barley varieties. Among the oat varieties, the highest total amino acid content was determined in variety Flamingsprofi (4.56 g/kg) and the lowest in variety Circle (3.57 g/kg); among the barley varieties, the highest and lowest were found in Publican (3.50 g/kg) and Sebastian (3.11 g/kg), respectively. The digestible amino acid content of the oat varieties varied from 3.11 g/kg to 4.07 g/kg, and that of the barley varieties from 2.59 g/kg to 2.94 g/kg. The average amino acid digestibility of oats varied from 74.4% (Lys) to 95.6% (Phe), and of barley from 75.8% (Thr) to 89.6% (Phe). The average digestible energy of the analyzed oat and barley varieties was 13.74 MJ/kg DM and 14.85 MJ/kg DM, respectively. The results showed that the oat varieties are preferable to barley for horse nutrition with respect to crude fat, crude fiber, ash and certain amino acid contents, whereas the analyzed barley varieties had higher crude protein, digestible Lys and digestible energy; barley can therefore be recommended for feed formulations for horses in combination with oats, taking into account the chemical composition of the cereal varieties used.
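
For readers unfamiliar with how NIRS reflectance spectra are turned into composition estimates, the sketch below shows a generic partial least squares (PLS) calibration workflow in Python; the spectra and reference values are synthetic stand-ins, and the workflow is not the authors' Foss calibration procedure.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Generic NIRS calibration sketch: X holds reflectance spectra (samples x wavelengths),
    # y holds a laboratory reference value such as crude protein; both are synthetic here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))
    y = 2.0 * X[:, 50] + X[:, 120] + rng.normal(scale=0.1, size=40)

    pls = PLSRegression(n_components=5)            # the number of latent variables is a tuning choice
    y_cv = cross_val_predict(pls, X, y, cv=5)      # cross-validated predictions
    rmsecv = float(np.sqrt(np.mean((y - y_cv.ravel()) ** 2)))
    print("RMSECV:", round(rmsecv, 3))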

Keywords: Barley, digestive energy, horses, nutritional value, oats.

521 Two Scenarios for Ultra-Light Overhead Conveyor System in Logistics Applications

Authors: Batin Latif Aylak, Bernd Noche

Abstract:

Overhead conveyor systems are in use in many installations around the world, covering the widest possible range of applications. They are particularly preferred in the automotive industry, but also at post offices. Overhead conveyor systems must always be integrated with a logistical process by finding the best way to achieve a cheaper material flow and to guarantee precise and fast workflows. With their help, transport can take place without wasting ground space, without excessive company capacity, lost or damaged products, erroneous deliveries or endless travel, and without wasting time. Ultra-light overhead conveyor systems are rope-based conveying systems with individually driven vehicles. The vehicles can move automatically on the rope, which is realized by means of energy and signals, and crossings are realized by switches. Ultra-light overhead conveyor systems provide an optimal material flow, which generates profit and saves time. This article introduces two new ultra-light overhead conveyor designs for logistics and explains their components. Based on the description of the components, scenarios are created from their technical characteristics and visualized with the help of CAD software. Assumptions are then made for the application area, and the scenarios are visualized accordingly. These scenarios help logistics companies achieve lower development costs as well as quicker market maturity.

Keywords: Logistics, material flow, overhead conveyor.

520 Removal of Boron from Waste Waters by Ion-Exchange in a Batch System

Authors: Pelin Demirçivi, Gülhayat Nasün-Saygılı

Abstract:

Boron minerals are very useful for various industrial activities, such as the glass and detergent industries, due to their mechanical and chemical properties. During the production of boron compounds, many of these minerals are introduced into the environment in the form of waste. Boron is also an important micronutrient for plant growth, but at high concentrations it can have toxic effects. The maximum boron level in drinking water for human health is given as 0.3 mg/L in World Health Organization (WHO) standards. The toxic effects of boron are of particular concern in dry regions; thus, in recent years, increasing attention has been paid to removing boron from waste waters. In this study, boron removal is implemented by an ion exchange process using Amberlite IRA-743 resin. Amberlite IRA-743 is a boron-specific resin belonging to the polymerizate sorbent group with an aminopolyol functional group. Batch studies were performed to investigate the effects of various experimental parameters, such as adsorbent dose, initial concentration and pH, on the removal of boron. It is found that boron removal from the liquid phase increases as the adsorbent dose increases, whereas an increase in the initial concentration decreases the removal of boron. The effective pH range for boron removal is determined to be between 8.5 and 9. Equilibrium isotherms were also analyzed with the Langmuir and Freundlich isotherm models; the Langmuir isotherm fits the data better than the Freundlich isotherm.
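
The Langmuir and Freundlich comparison mentioned above can be reproduced on any equilibrium data set with a short fitting script such as the sketch below; the (Ce, qe) points are synthetic, not the paper's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qmax, KL):        # qe = qmax*KL*Ce / (1 + KL*Ce)
        return qmax * KL * Ce / (1.0 + KL * Ce)

    def freundlich(Ce, KF, n):         # qe = KF * Ce**(1/n)
        return KF * Ce ** (1.0 / n)

    Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium boron concentration, mg/L (synthetic)
    qe = np.array([0.8, 1.4, 2.6, 3.6, 4.3, 4.8])      # boron adsorbed, mg/g (synthetic)

    popt_l, _ = curve_fit(langmuir, Ce, qe, p0=[5.0, 0.1])
    popt_f, _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])

    def r2(y, yhat):                   # coefficient of determination, used to compare the two fits
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    print("Langmuir   qmax=%.2f KL=%.3f R2=%.3f" % (*popt_l, r2(qe, langmuir(Ce, *popt_l))))
    print("Freundlich KF=%.2f   n=%.2f  R2=%.3f" % (*popt_f, r2(qe, freundlich(Ce, *popt_f))))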

Keywords: Amberlite resin, boron removal, ion exchange, isotherm models.

519 Analytical Model Based Evaluation of Human Machine Interfaces Using Cognitive Modeling

Authors: Belkacem Chikhaoui, Helene Pigot

Abstract:

Cognitive models allow predicting some aspects of the utility and usability of human machine interfaces (HMI) and simulating the interaction with these interfaces. The prediction is based on a task analysis, which investigates what a user is required to do in terms of actions and cognitive processes to achieve a task. Task analysis facilitates the understanding of the system's functionalities. Cognitive models belong to the analytical approaches, which do not involve users during the development process of the interface. This article presents a study on the evaluation of human machine interaction with a contextual assistant's interface using the ACT-R and GOMS cognitive models. The present work shows how these techniques may be applied in HMI evaluation, design and research by emphasizing, first, the task analysis and, second, the execution time of the task. In order to validate and support our results, an experimental study of user performance was conducted at the DOMUS laboratory during interaction with the contextual assistant's interface. The results of our models show that the GOMS and ACT-R models give good and excellent predictions, respectively, of user performance at the task level as well as at the object level; the simulated results are thus very close to the results obtained in the experimental study.
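
To make the GOMS-style time prediction concrete, the sketch below applies the keystroke-level variant (KLM) to a hypothetical interaction sequence; the operator times are the commonly cited published averages, and the task is not taken from the DOMUS study.

    # KLM-GOMS sketch: predicted execution time is the sum of operator times.
    # Operator values are the commonly cited averages in seconds, not measured here.
    OPERATOR_TIME = {
        "K": 0.28,   # keystroke (average typist)
        "P": 1.10,   # point at a target with the mouse
        "H": 0.40,   # home hands between keyboard and mouse
        "M": 1.35,   # mental preparation
        "B": 0.10,   # mouse button press or release
    }

    def predict_time(sequence):
        """Sum operator times for a sequence such as ['M', 'P', 'B', 'B']."""
        return sum(OPERATOR_TIME[op] for op in sequence)

    # Hypothetical task: decide, point at a menu item, click (press + release), type 3 characters.
    task = ["M", "P", "B", "B", "M", "K", "K", "K"]
    print("predicted execution time: %.2f s" % predict_time(task))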

Keywords: HMI, interface evaluation, analytical evaluation, cognitive modeling, user modeling, user performance.

518 Numerical Study of Natural Convection Effects in Latent Heat Storage using Aluminum Fins and Spiral Fillers

Authors: Lippong Tan, Yuenting Kwok, Ahbijit Date, Aliakbar Akbarzadeh

Abstract:

A numerical investigation has been carried out to understand the melting characteristics of a phase change material (PCM) in a fin-type latent heat storage unit with embedded aluminum spiral fillers. It is known that the melting performance of a PCM can be significantly improved by increasing the number of embedded metallic fins in the latent heat storage system, but only up to a certain number, beyond which the improvement in heat transfer rate is small. Hence, adding aluminum spiral fillers within the fin gaps is an option for improving heat transfer internally. This paper presents extensive computational visualizations of the PCM melting patterns of the proposed fin and spiral-filler configuration. The aim of this investigation is to understand the PCM's melting behavior by observing the movement of natural convection currents and the formation of melting fronts. Fluent 6.3 simulation software was used to produce two-dimensional visualizations of melting fraction, temperature distribution and flow field to illustrate the internal melting process. The results show that adding aluminum spiral fillers in a fin-type latent heat storage unit promotes small but more active natural convection currents and improves the melting of the PCM.

Keywords: Phase change material, thermal enhancement, aluminum spiral fillers, fins.

517 An Investigation on the Accuracy of Nonlinear Static Procedures for Seismic Evaluation of Buckling-restrained Braced Frames

Authors: An Hong Nguyen, Chatpan Chintanapakdee, Toshiro Hayashikawa

Abstract:

Presented herein is an assessment of current nonlinear static procedures (NSPs) for the seismic evaluation of buckling-restrained braced frames (BRBFs), which have become a favored lateral-force resisting system for earthquake-resistant buildings. The bias and accuracy of the modal and improved modal pushover analysis (MPA, IMPA) and mass proportional pushover (MPP) procedures are comparatively investigated when applied to BRBF buildings subjected to two sets of strong ground motions. The assessment is based on a comparison of seismic displacement demands such as target roof displacements, peak floor/roof displacements and inter-story drifts. The NSP estimates are compared to 'exact' results from nonlinear response history analysis (NLRHA). The response statistics presented show that the MPP procedure tends to significantly overestimate the seismic demands of the lower stories of the tall buildings considered in this study, while the MPA and IMPA procedures provide reasonably accurate estimates of the maximum inter-story drift over all stories of the studied BRBF systems.

Keywords: Buckling-restrained braced frames, nonlinear response history analysis, nonlinear static procedure, seismic demands.

516 Face Recognition Using Double Dimension Reduction

Authors: M. A Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient while giving better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off beyond a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the face image because of its computational speed and feature extraction capability, and a subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained and the recognition rate at minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction up to an effective and suitable face image resolution, and (2) retention of appropriate DCT coefficients to achieve the best recognition results under varying image pose, intensity and illumination level.
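
A minimal sketch of the two reduction steps described above (image decimation followed by retention of a low-to-mid-frequency DCT block) is given below; the images are random stand-ins, the retained block is square for simplicity, and the classifier is a plain nearest-neighbour matcher rather than the authors' full pipeline.

    import numpy as np
    from scipy.fftpack import dct

    def dct2(block):
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def features(img, decimation=2, keep=16):
        small = img[::decimation, ::decimation].astype(float)   # step 1: image decimation
        coeffs = dct2(small)
        return coeffs[:keep, :keep].ravel()                     # step 2: keep a low/mid-frequency DCT block

    def nearest_neighbour(query, gallery):
        """gallery: list of (label, feature_vector); returns the label of the closest match."""
        return min((np.linalg.norm(query - f), label) for label, f in gallery)[1]

    rng = np.random.default_rng(1)
    faces = {name: rng.random((112, 92)) for name in ["a", "b", "c"]}   # random stand-in images
    gallery = [(name, features(img)) for name, img in faces.items()]
    probe = faces["b"] + rng.normal(scale=0.01, size=(112, 92))         # slightly perturbed probe of "b"
    print("matched:", nearest_neighbour(features(probe), gallery))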

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

515 Motion Analysis for Duplicate Frame Removal in Wireless Capsule Endoscope Video

Authors: Min Kook Choi, Hyun Gyu Lee, Ryan You, Byeong-Seok Shin, Sang-Chul Lee

Abstract:

Wireless capsule endoscopy (WCE) has rapidly found wide application in the medical domain over the last ten years thanks to its non-invasiveness for patients and its support for thorough inspection of a patient's entire digestive system, including the small intestine. However, one of the main barriers to an efficient clinical inspection procedure is the large amount of effort required for clinicians to inspect the huge amount of data collected during an examination, i.e., over 55,000 frames per video. In this paper, we propose a method to compute meaningful motion changes in WCE by analyzing the obtained video frames based on regional optical flow estimation. The computed motion vectors are used to remove duplicate video frames caused by the nature of WCE imaging, such as repetitive forward-backward motion from peristaltic movements. The motion vectors are derived by calculating directional component vectors in four local regions. Our experiments are performed on the small intestine area, which is of main interest to clinical experts when using WCE, and the experimental results show significant frame reductions compared with a simple frame-to-frame similarity-based image reduction method.
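
The regional optical-flow idea can be sketched with OpenCV as below; the quadrant split, Farneback parameters and duplicate threshold are illustrative assumptions, not the authors' settings.

    import numpy as np
    import cv2

    def regional_motion(prev_gray, curr_gray):
        # Dense optical flow between consecutive grayscale frames (Farneback method).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = prev_gray.shape
        magnitudes = []
        for ys in (slice(0, h // 2), slice(h // 2, h)):
            for xs in (slice(0, w // 2), slice(w // 2, w)):
                region = flow[ys, xs]                          # flow vectors of one quadrant
                magnitudes.append(np.linalg.norm(region.mean(axis=(0, 1))))
        return magnitudes                                      # mean motion magnitude per quadrant

    def is_duplicate(prev_gray, curr_gray, threshold=0.5):     # threshold is an assumed value
        return all(m < threshold for m in regional_motion(prev_gray, curr_gray))

Frames flagged by such a check would simply be dropped from the sequence before clinical review.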

Keywords: Wireless capsule endoscopy, optical flow, duplicated image, duplicated frame.

514 Applying the Regression Technique for Prediction of the Acute Heart Attack

Authors: Paria Soleimani, Arezoo Neshati

Abstract:

Myocardial infarction is one of the leading causes of death in the world, and some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most of them start slowly, with mild pain or discomfort, so early detection and successful treatment of these symptoms are vital for saving patients. The importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks are therefore obvious. The main purpose of this study is to enable patients to become better informed about their condition and to encourage them to seek professional care at an earlier stage in the appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 attributes of clinical factors that can be reported by patients were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of heart attack. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, and nausea and vomiting were selected as the main features.
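
A minimal sketch of the modelling step, with synthetic stand-in data in place of the 711-patient records, showing how a C-index (ROC AUC) and accuracy of the kind reported above are typically computed:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: 711 "patients" with 28 patient-reportable attributes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(711, 28))
    risk = 1.5 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=711)
    y = (risk > 0).astype(int)                  # 1 = acute heart attack (synthetic label)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    p = model.predict_proba(X_te)[:, 1]
    print("C-index (ROC AUC):", round(roc_auc_score(y_te, p), 3))   # for a binary outcome the C-index equals the ROC AUC
    print("accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))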

Keywords: Coronary heart disease, acute heart attacks, prediction, logistic regression.

513 FEM Simulation of Triple Diffusive Magnetohydrodynamics Effect of Nanofluid Flow over a Nonlinear Stretching Sheet

Authors: Rangoli Goyal, Rama Bhargava

Abstract:

The triple diffusive boundary layer flow of a nanofluid under the action of a constant magnetic field over a non-linear stretching sheet has been investigated numerically. The model includes the effects of Brownian motion, thermophoresis and cross-diffusion, the slip mechanisms primarily responsible for the enhancement of the convective features of the nanofluid. The governing partial differential equations are transformed into a system of ordinary differential equations (using group theory transformations) and solved numerically by the variational finite element method. The effects of various controlling parameters, such as the magnetic influence number, thermophoresis parameter, Brownian motion parameter, modified Dufour parameter and Dufour solutal Lewis number, on the fluid flow as well as on the heat and mass transfer coefficients (for both solute and nanofluid) are presented graphically and discussed quantitatively. The present study has industrial applications in the aerodynamic extrusion of plastic sheets, coating and suspensions, melt spinning, hot rolling, wire drawing, glass-fibre production, and the manufacture of polymer and rubber sheets, where the quality of the desired product depends on the stretching rate as well as on external fields, including magnetic effects.

Keywords: FEM, Thermophoresis, Diffusiophoresis, Brownian motion.

512 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular online marketplaces in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers of an e-commerce site have to register their name, age, gender and other information in advance to access their registered account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been used in many cases of document classification to extract semantic relationships between documents (here representing consumers) and words (here representing products). This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor, namely shops, needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores and products is required. The purpose of our study is to combine the Doc2Vec analysis of users and shops with that of users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
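
One possible way to place users, products and stores in a single Doc2Vec feature space is sketched below; the tokenization scheme (product and shop IDs as the "words" of a user's purchase-history "document"), the hyperparameters and the purchase logs are assumptions for illustration, not the authors' setup.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Hypothetical purchase logs: each user's history is a document whose tokens
    # are product IDs and shop IDs, tagged with the user ID, so that user, product
    # and shop vectors end up in one shared embedding space.
    histories = {
        "user1": ["item_a", "shop_x", "item_b", "shop_x"],
        "user2": ["item_b", "shop_y", "item_c", "shop_y"],
        "user3": ["item_a", "shop_x", "item_c", "shop_y"],
    }
    corpus = [TaggedDocument(words=tokens, tags=[user]) for user, tokens in histories.items()]

    model = Doc2Vec(vector_size=16, min_count=1, epochs=100, seed=1)
    model.build_vocab(corpus)
    model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

    # Products and shops most similar to a user vector (cosine similarity in the shared space).
    print(model.wv.most_similar([model.dv["user1"]], topn=3))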

Keywords: Doc2Vec, marketing, online marketplace, recommendation system.

511 Optimization of Molasses Desugarization Process Using Steffen Method in Sugar Beet Factories

Authors: Simin Asadollahi, Mohammad Hossein Haddad Khodaparast

Abstract:

Molasses is one of the most important by-products of the sugar industry and contains a large amount of sucrose. The routine way to separate the sucrose from molasses is the Steffen method. Although this method is very common in sugar factories, the aim of this research is its optimization. The optimization depends on three factors: reactor alkalinity, reactor temperature and diluted molasses brix. Accordingly, three stages were carried out:

  1. Construction of a pilot plant similar to the actual Steffen system in sugar factories
  2. Experiments using the pilot plant
  3. Laboratory analysis

The experiments comprised 27 treatments in three replications. In each replication, the brix, polarization and purity of the saccharate syrup and of the hot and cold waste were measured. The results showed that diluted molasses brix, reactor alkalinity and reactor temperature had significant effects on saccharate purity and on the efficiency of molasses desugarization. The research was performed as a randomized complete design and analyzed with Duncan's multiple range test; significant differences at the α = 5% level were observed between the treatments. The results indicated that the optimal conditions for molasses desugarization by the Steffen method are: diluted molasses brix = 10, reactor alkalinity = 10 and reactor temperature = 8 °C.
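
A sketch of how such a 27-treatment, three-replication factorial experiment can be analysed in Python is given below; the purity values are synthetic, and Duncan's multiple range test itself is not available in statsmodels, so only the ANOVA step is shown.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Synthetic 3 x 3 x 3 factorial with 3 replications (27 treatments x 3 = 81 runs).
    rng = np.random.default_rng(0)
    levels = [8, 10, 12]
    rows = [(b, a, t, r) for b in levels for a in levels for t in levels for r in range(3)]
    df = pd.DataFrame(rows, columns=["brix", "alkalinity", "temp", "rep"])
    df["purity"] = (90 - 0.4 * df["brix"] + 0.3 * df["alkalinity"] - 0.2 * df["temp"]
                    + rng.normal(scale=0.5, size=len(df)))      # synthetic saccharate purity

    # Fixed-effects ANOVA for the three factors; pairwise follow-up comparisons
    # (e.g., Tukey HSD) would stand in for Duncan's multiple range test here.
    model = smf.ols("purity ~ C(brix) + C(alkalinity) + C(temp)", data=df).fit()
    print(anova_lm(model, typ=2))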

Keywords: Molasses desugarization, Saccharate purity, Steffen process.

510 Flutter Analysis of Slender Beams with Variable Cross Sections Based on Integral Equation Formulation

Authors: Z. El Felsoufi, L. Azrar

Abstract:

This paper studies a mathematical model based on integral equations for the dynamic analysis and numerical investigation of a non-uniform or multi-material composite beam. The beam is subjected to a sub-tangential follower force and rests on an elastic foundation. The boundary conditions are represented by generalized parameterized fixations using linear and rotary springs. A mathematical formulation based on Euler-Bernoulli beam theory is presented for beams with variable cross-sections. The non-uniform section introduces non-uniformity in the rigidity and inertia of the beam and, consequently, a more complicated governing equilibrium equation. Using the boundary element method and radial basis functions, the equation of motion is reduced to an algebro-differential system related to internal and boundary unknowns. Generalized formulas for the deflection, the slope, the moment and the shear force are presented. The free vibration of non-uniform loaded beams is formulated in a compact matrix form, and all needed matrices are given explicitly. The dynamic stability analysis of a slender beam is illustrated numerically based on the coalescence criterion, and a realistic case related to an industrial chimney is investigated.
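
For reference, the standard Euler-Bernoulli equation for a beam of variable cross-section, to which the paper adds the elastic-foundation and sub-tangential follower-force terms (not shown here), can be written in LaTeX as:

    \frac{\partial^{2}}{\partial x^{2}}\!\left[E I(x)\,\frac{\partial^{2} w(x,t)}{\partial x^{2}}\right]
      + \rho A(x)\,\frac{\partial^{2} w(x,t)}{\partial t^{2}} = q(x,t)

where E I(x) is the flexural rigidity, ρ A(x) the mass per unit length, w the transverse deflection and q the distributed load.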

Keywords: Chimney, BEM and integral equation formulation, non-uniform cross section, vibration and flutter.

509 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An upgraded cuckoo search algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the cuckoo search algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions using an improved lambda iteration relaxation method, a random Gaussian distribution walk to improve local search, a greedy selection to accelerate convergence to the optimized solution, and a "study nearby strategy" to improve global search performance by avoiding trapping in local optima. A crossover operation is further proposed to generate better solutions. The strategy used in the algorithm shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the superiority of the proposed algorithm over other established algorithms. The algorithm is also capable of handling larger unit systems.
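
For orientation, the sketch below implements the baseline cuckoo search with Mantegna-style Lévy flights and nest abandonment on a toy sphere function; it does not include the paper's upgrades (lambda iteration initialization, Gaussian walk, study-nearby strategy, crossover).

    import numpy as np
    from math import gamma, sin, pi

    def levy_step(dim, rng, beta=1.5):
        # Mantegna's algorithm for Levy-distributed step lengths.
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, bounds=(-5.0, 5.0), seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        nests = rng.uniform(lo, hi, (n_nests, dim))
        fitness = np.array([f(n) for n in nests])
        for _ in range(iters):
            best = nests[np.argmin(fitness)]
            for i in range(n_nests):
                # Levy-flight move around the current best nest, then greedy replacement.
                cand = np.clip(nests[i] + 0.01 * levy_step(dim, rng) * (nests[i] - best), lo, hi)
                fc = f(cand)
                if fc < fitness[i]:
                    nests[i], fitness[i] = cand, fc
            for i in np.where(rng.random(n_nests) < pa)[0]:
                # A fraction pa of the nests is abandoned and rebuilt at random positions.
                nests[i] = rng.uniform(lo, hi, dim)
                fitness[i] = f(nests[i])
        i = np.argmin(fitness)
        return nests[i], fitness[i]

    sphere = lambda x: float(np.sum(x ** 2))          # toy objective
    x_best, f_best = cuckoo_search(sphere, dim=5)
    print("best objective value:", round(f_best, 6))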

Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.

508 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations hold structured and unstructured information in different formats, sources and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations show some deficiencies, partly because there is little interest in extracting knowledge from their data sources and partly because of the absence of operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest for business intelligence, since they are the base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP and AUP, together with object-relational and spatial data models and a baseline of data modeling under UML and Big Data, in order to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization and data mining, particularly for the generation of patterns and models derived from the structured fact objects.

Keywords: Data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse.

507 Numerical Simulation for a Shallow Braced Excavation of Campus Building

Authors: Sao-Jeng Chao, Wen-Cheng Chen, Wei-Humg Lu

Abstract:

In order to avoid encountering unpredictable factors, geotechnical engineers always conduct numerical analyses for braced excavation design. Simulation work carried out in advance can predict the response of the subsequent excavation, so that the design can increase the safety margin of the construction. The parameters considered include geological conditions, soil properties, soil distributions, loading types, and the analysis and design methods. National Ilan University is located on the LanYang plain, which is mainly deposited with clayey soil and loose sand and is thus vulnerable to displacement caused by external influences. National Ilan University carried out a braced excavation project with a complete excavation monitoring program. This study takes advantage of the one-dimensional finite element program RIDO to simulate the excavation process. The results predicted by the numerical simulation are compared with the monitored results of the construction to explore the differences between them. Numerical simulation of the excavation process can be used to analyze retaining structures in order to understand the relationship between displacement and the supporting system, so that the resulting deformation and stress distribution of the braced excavation can be understood in advance. Problems can then be prevented prior to the construction process, and all the important influencing factors during design and construction can be identified.

Keywords: Excavation, numerical simulation, RIDO, retaining structure.

506 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study

Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed

Abstract:

This paper compares the substructure and direct approaches for soil-structure interaction (SSI) analysis in the time domain. In the substructure approach, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the coupled soil-structure system. To explore the potential limitations of the substructure modeling process, a two-dimensional (2D) reinforced concrete frame structure is modeled and analyzed using the direct and substructure approaches. The results show discrepancy between the simulated responses of the direct and substructure models. It is concluded that the main source of discrepancy is likely attributed to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, shall alternatively be developed. This refined impedance function is expected to improve the simulation accuracy of the substructure approach.
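
The step of approximating the impedance at the fundamental frequency of the coupled system can be illustrated with a small fixed-point sketch: a single-degree-of-freedom structure on a frequency-dependent foundation spring. All numbers and the tabulated stiffness below are invented, and rocking and damping are ignored.

    import numpy as np

    m, k_s = 2.0e5, 8.0e7                        # structure mass (kg) and stiffness (N/m), assumed
    omegas = np.linspace(1.0, 60.0, 60)          # rad/s
    K_table = 5.0e8 * (1.0 - 0.002 * omegas)     # stand-in frequency-dependent foundation stiffness

    def coupled_frequency(m, k_s, omegas, K_table, tol=1e-6):
        omega = np.sqrt(k_s / m)                 # start from the fixed-base frequency
        for _ in range(100):
            K = np.interp(omega, omegas, K_table)        # impedance evaluated at the current guess
            k_eff = 1.0 / (1.0 / k_s + 1.0 / K)          # structure and foundation springs in series
            omega_new = np.sqrt(k_eff / m)
            if abs(omega_new - omega) < tol:
                break
            omega = omega_new
        return omega

    print("coupled fundamental frequency: %.2f rad/s" % coupled_frequency(m, k_s, omegas, K_table))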

Keywords: Direct approach, impedance function, massless rigid foundation, soil-structure interaction, substructure approach.

505 Mathieu Stability of Offshore Buoyant Leg Storage and Regasification Platform

Authors: S. Chandrasekaran, P. A. Kiran

Abstract:

Increasing demand for large Floating, Storage and Regasification Units (FSRUs) in the oil and gas industries has led to the development of a novel geometric form, the Buoyant Leg Storage and Regasification Platform (BLSRP). The BLSRP consists of a circular deck supported by six buoyant legs placed symmetrically with respect to the wave direction. The circular deck is connected to the buoyant legs using hinged joints, which restrain the transfer of rotational response from the legs to the deck and vice versa. The buoyant legs are connected to the seabed using a taut-moored system with high initial pretension, enabling rigid body motion in the vertical plane. Encountered environmental loads induce dynamic variations in tether tension, which in turn affect the stability of the platform. The present study investigates the Mathieu stability of the BLSRP under postulated tether pullout cases by inducing additional tension in the tethers. The numerical studies show that a postulated tether pullout on any one buoyant leg does not result in Mathieu-type instability even under excessive tether tension, owing to the hinged joints, which dissipate the unbalanced loads to the other legs. However, under tether pullout of consecutive buoyant legs, Mathieu-type instability is observed.
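
Independently of the platform model, Mathieu-type instability of an equation of the form x'' + (delta + eps*cos t) x = 0 can be checked numerically with Floquet theory, as in the sketch below (generic parameter values, not the BLSRP tether data).

    import numpy as np
    from scipy.integrate import solve_ivp

    def is_mathieu_unstable(delta, eps, period=2 * np.pi):
        def rhs(t, y):                                    # y = [x, x']
            return [y[1], -(delta + eps * np.cos(t)) * y[0]]
        columns = []
        for y0 in ([1.0, 0.0], [0.0, 1.0]):               # two independent initial conditions
            sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-9, atol=1e-12)
            columns.append(sol.y[:, -1])
        M = np.column_stack(columns)                      # monodromy matrix over one forcing period
        return abs(np.trace(M)) > 2.0                     # |trace| > 2 implies unbounded growth

    print(is_mathieu_unstable(0.25, 0.30))   # inside the principal instability tongue: True
    print(is_mathieu_unstable(1.50, 0.10))   # away from the tongues: False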

Keywords: Offshore platforms, stability, postulated failure, dynamic tether tension.

504 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement

Authors: Rhadinia Tayag-Relanes, Felina C. Young

Abstract:

This study was conducted to analyze the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study used the Plan, Do, Check, Act (PDCA) approach and record review for gathering data for the calendar year 2019, specifically from August to October, focusing on the noodle products miki, canton, and misua. A causal-comparative research design was employed to establish cause-effect relationships among the variables, using descriptive statistics and correlation to analyze the data gathered. The findings indicate that miki, canton, and misua production have distinct cycle times and production outputs in each of their production processes, as well as varying levels of wastage. The company has not yet established a formal allowable rejection rate for wastage; instead, this paper used a 1% wastage limit. The following recommendations are made: the machines used in each process for each noodle product must be consistently maintained and monitored; all production operators should be assessed statistically based on their output and on machine performance; a root cause analysis must be conducted to identify solutions to production issues; and an improved recording system for the input and output of the production process of each noodle product should be established to eliminate poor data recording.
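
The 1% wastage limit used in the study amounts to a simple per-batch check, sketched below with invented batch figures.

    # Per-batch wastage check against the 1% limit assumed in the paper (figures invented).
    WASTAGE_LIMIT = 0.01

    batches = [
        {"product": "miki",   "input_kg": 500.0, "output_kg": 493.5},
        {"product": "canton", "input_kg": 420.0, "output_kg": 417.0},
        {"product": "misua",  "input_kg": 300.0, "output_kg": 296.0},
    ]

    for b in batches:
        wastage = (b["input_kg"] - b["output_kg"]) / b["input_kg"]
        status = "over limit" if wastage > WASTAGE_LIMIT else "within limit"
        print("%-6s wastage %.2f%% (%s)" % (b["product"], 100 * wastage, status))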

Keywords: Production, continuous improvement, process, operations, Plan, Do, Check, Act approach.

503 Analysis of One-Way and Two-Way FSI Approaches to Characterise the Flow Regime and the Mechanical Behaviour during Closing Manoeuvring Operation of a Butterfly Valve

Authors: M. Ezkurra, J. A. Esnaola, M. Martinez-Agirre, U. Etxeberria, U. Lertxundi, L. Colomo, M. Begiristain, I. Zurutuza

Abstract:

Butterfly valves are widely used industrial piping components, serving as on-off and flow-controlling devices. The main challenge in the design process of this type of valve is correct dimensioning to ensure proper mechanical performance as well as to minimise the flow losses that affect the efficiency of the system. Butterfly valves are typically dimensioned in the closed position using mechanical approaches that consider uniform hydrostatic pressure, whereas the flow losses are analysed by means of CFD simulations. The main limitation of these approaches is that they consider neither the influence of the dynamics of the manoeuvring stage nor coupled phenomena. Recent works have included the influence of the flow on the mechanical behaviour for different opening angles by means of a one-way FSI approach; however, they consider steady-state flow at the selected angles and do not capture the transient flow evolution during the manoeuvring stage. A two-way FSI modelling approach could overcome such limitations and provide more accurate results, but its use is limited by the increase in computational cost. In the present work, the applicability of the one-way and two-way FSI approaches is evaluated for the analysis of butterfly valves, showing that not considering fluid-structure coupling means failing to capture the most critical situation for the valve disc.

Keywords: Butterfly valves, fluid-structure interaction, one-way approach, two-way approach.

502 Dynamics of Protest Mobilization and Rapid Demobilization in Post-2001 Afghanistan: Facing Enlightening Movement

Authors: Ali Aqa Mohammad Jawad

Abstract:

Taking a relational approach, this paper analyzes the causal mechanisms associated with the successful mobilization and rapid demobilization of the Enlightening Movement in post-2001 Afghanistan. The movement emerged after the state-owned Da Afghan Bereshna Sherkat (DABS) decided to divert the route of the Turkmenistan-Uzbekistan-Tajikistan-Afghanistan-Pakistan (TUTAP) electricity project. The grid was initially planned to go through the Hazara-inhabited province of Bamiyan, according to Afghanistan’s Power Sector Master Plan. For the Hazara community, the reroute served as an aide-mémoire of historical subordination to other ethno-religious groups and was also perceived as deprivation from post-2001 development projects financed by international aid. This ignited the accumulated grievances, which then gave birth to the Enlightening Movement. The movement mobilized successfully; however, it demobilized after losing much of its mobilizing capability through a combination of external and internal relational factors. This successful mobilization yet rapid demobilization constitutes the puzzle of this paper. From a theoretical perspective, the paper is significant in that it establishes the applicability of contentious politics theory to protest mobilizations in Afghanistan, a context characterized by ethnic politics. Both primary and secondary data are utilized to address the puzzle. The primary sources comprise media coverage, interviews, reports, public media statements of the movement involved in contentious performances, and data from Social Networking Services (SNS); the covered period is 2001-2018. The secondary sources are published academic articles and books that give a historical account of contentious politics. For data analysis, a qualitative comparative historical method is utilized to uncover the causal mechanisms associated with the successful mobilization and rapid demobilization of the movement. In this pursuit, both mobilization and demobilization are considered larger political processes that can be decomposed into constituent mechanisms. The Enlightening Movement's framing and campaigns are first studied to uncover the associated mechanisms. Then, to avoid introducing ad hoc mechanisms, the recurrence of the mechanisms is checked against another case; mechanisms qualify as robust if they are "recurrent" in different episodes of contention. Checking the recurrence of causal mechanisms is vital, as past contentious events tend to reinforce future events. The findings of this paper suggest that the public sphere in Afghanistan is drastically different from that of the Western democracies known as the birthplace of social movements. In Western democracies, when institutional politics did not respond, movement organizers occupied the public sphere, undermining the legitimacy of the government. In Afghanistan, the public sphere is ethnicized. Given the inter- and intra-relational dynamics of ethnic groups in Afghanistan, the movement was reduced to an erosive inter- and intra-ethnic conflict, which undermined the cohesiveness of the movement and set off its demobilization process.

Keywords: Enlightening movement, contentious politics, mobilization, demobilization.

501 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to a joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
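
To illustrate the compressed sensing pipeline in general terms, the sketch below recovers a synthetic sparse signal from Bernoulli measurements with plain orthogonal matching pursuit; the sparse signal and OMP stand in for the FECG data and the proposed JBMOLS algorithm.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8                        # signal length, measurements, sparsity level

    x = np.zeros(n)                             # k-sparse signal (stand-in for a wavelet-domain FECG)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # Bernoulli measurement matrix
    y = Phi @ x                                               # compressed measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
    x_hat = omp.coef_

    print("compression ratio n/m:", round(n / m, 2))
    print("reconstruction SNR (dB):",
          round(10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2)), 1))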

Keywords: Telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal.

500 Diagnosing Dangerous Arrhythmia of Patients by Automatic Detecting of QRS Complexes in ECG

Authors: Jia-Rong Yeh, Ai-Hsien Li, Jiann-Shing Shieh, Yen-An Su, Chi-Yu Yang

Abstract:

In this paper, an automatic QRS complex detection algorithm was applied to analyze ECG recordings, and five criteria for diagnosing dangerous arrhythmias were applied in a protocol-type automatic arrhythmia diagnosis system. The detection algorithm identified the distribution of QRS complexes in the ECG recordings and related information, such as heart rate and RR interval. In this investigation, twenty sampled ECG recordings of patients with different pathological conditions were collected for off-line analysis. A combined application of four digital filters for improving the ECG signals and raising the QRS complex detection rate was proposed as pre-processing; both hardware filters and digital filters were applied to eliminate different types of noise mixed with the ECG recordings. The automatic detection algorithm was then applied to verify the distribution of QRS complexes. Finally, quantitative clinical criteria for diagnosing arrhythmia were programmed into a practical application for automatic arrhythmia diagnosis as a post-processor. The diagnoses produced by the automatic dangerous arrhythmia diagnosis system were compared with off-line diagnoses by experienced clinical physicians, and the comparison showed that the automatic diagnosis achieved a matching rate of 95% with respect to the experienced physician's diagnoses.
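
A typical automatic QRS detection chain (band-pass filtering, derivative, squaring, moving-window integration, peak picking) can be sketched as follows; the filter settings, thresholds and the impulse-train test signal are illustrative assumptions, not the paper's filters or recordings.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    FS = 360  # sampling rate in Hz (assumed)

    def detect_qrs(ecg, fs=FS):
        b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, ecg)                        # suppress baseline wander and noise
        energy = np.convolve(np.diff(filtered) ** 2,          # derivative then squaring
                             np.ones(int(0.15 * fs)) / (0.15 * fs), mode="same")
        peaks, _ = find_peaks(energy,
                              height=0.35 * energy.max(),     # crude fixed threshold
                              distance=int(0.25 * fs))        # refractory period of about 250 ms
        rr = np.diff(peaks) / fs                              # RR intervals in seconds
        heart_rate = 60.0 / rr.mean() if rr.size else float("nan")
        return peaks, rr, heart_rate

    t = np.arange(0, 10, 1 / FS)
    ecg = np.zeros_like(t)
    ecg[(np.arange(12) * FS * 0.8).astype(int)] = 1.0         # impulse train at roughly 75 bpm as a stand-in
    peaks, rr, hr = detect_qrs(ecg)
    print("beats detected:", len(peaks), "heart rate: %.1f bpm" % hr)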

Keywords: Signal processing, electrocardiography (ECG), QRS complex, arrhythmia.

499 Interoperability Maturity Models for Consideration When Using School Management Systems in South Africa: A Scoping Review

Authors: Keneilwe Maremi, Marlien Herselman, Adele Botha

Abstract:

The main purpose and focus of this paper is to determine which Interoperability Maturity Models to consider when using School Management Systems (SMS). The importance of this is to inform and help schools to know which Interoperability Maturity Model is best suited to their SMS. To address this purpose, this paper applies a scoping review to ensure that all aspects are covered. The scoping review includes papers written from 2012 to 2019, and the different types of Interoperability Maturity Models are compared in detail, including the background information, the levels of interoperability, and the areas of consideration in each Maturity Model. The literature was obtained from the IEEE Xplore and Scopus databases, and the search engines Harzings and Google Scholar were used. The topic of the paper was used as a search term for the literature, and the term 'Interoperability Maturity Models' was used as a keyword. The data were analyzed in terms of the definition of interoperability, Interoperability Maturity Models, and levels of interoperability. The results provide a table showing the focus area of concern for each Maturity Model (based on the scoping review, in which only 24 papers out of 740 publications initially identified in the field were found to be best suited for the paper). This resulted in the most discussed Interoperability Maturity Models for consideration: the Information Systems Interoperability Maturity Model (ISIMM) and the Organizational Interoperability Maturity Model for C2 (OIM).

Keywords: Interoperability, Interoperability Maturity Model, School Management System, scoping review.

498 A Novel and Green Approach to Produce Nano-Porous Materials Zeolite A and MCM-41 from Coal Fly Ash and their Applications in Environmental Protection

Authors: K. S. Hui, K. N. Hui, Seong Kon Lee

Abstract:

Zeolite A and MCM-41 have extensive applications in basic science, petrochemical science, energy conservation/storage, medicine, chemical sensing, air purification, environmentally benign composite structures and waste remediation. However, the use of zeolite A and MCM-41 in these areas, especially environmental remediation, is restricted by prohibitive production cost. Efficient recycling of, and resource recovery from, coal fly ash has been a major topic of international research interest, aimed at achieving sustainable development from the viewpoints of energy, economy, and environmental strategy. This project reports original, novel, green and fast methods to produce nano-porous zeolite A and MCM-41 materials from coal fly ash. For zeolite A, the novel production method halves the total production time while maintaining a high degree of crystallinity of zeolite A with a narrower particle size distribution. For MCM-41, the remarkably green approach, being environmentally friendly and reducing the generation of toxic waste, can produce pure and long-range-ordered MCM-41 materials from coal fly ash. This approach took 24 h at 25 °C to produce 9 g of MCM-41 material from 30 g of coal fly ash, which is the shortest time and lowest reaction temperature reported for producing pure and ordered MCM-41 materials (having the largest internal surface area) compared to the values in the literature. The performance of the produced zeolite A and MCM-41 materials in wastewater treatment and air pollution control is reported. The residual fly ash was also converted to zeolite Na-P1, which showed good performance in the removal of multiple metal ions from wastewater. In wastewater treatment, compared to commercial-grade zeolite A, the adsorbents produced from coal fly ash were effective in removing multiple heavy metal ions from water and could be an alternative material for wastewater treatment. In methane emission abatement, the zeolite A produced from coal fly ash achieved a methane removal efficiency similar to that of zeolite A prepared from pure chemicals. This report provides guidance for the production of zeolite A and MCM-41 from coal fly ash by a cost-effective approach, which opens up potential applications of these materials in the environmental industry. Finally, the environmental and economic aspects of producing zeolite A and MCM-41 from coal fly ash are discussed.

Keywords: Metal ions, waste water, methane, volatile organic compounds.

497 Scenarios for a Sustainable Energy Supply Results of a Case Study for Austria

Authors: Petra Wächter

Abstract:

A comprehensive discussion of feasible strategies for a sustainable energy supply is urgently needed to achieve a turnaround of the current energy situation. The fundamentals required for the development of a long-term energy vision are lacking to a great extent, owing to the absence of reasonable long-term scenarios that fulfill the requirements of climate protection and sustainable energy use. The contribution of this study is a search for sustainable long-run energy paths for Austria; the analysis predominantly makes use of secondary data. The measures developed to avoid CO2 emissions and other ecological risk factors vary greatly among the economic sectors, as shown by the calculation of CO2 abatement cost curves. The study demonstrates that the most effective technical measures with the lowest CO2 abatement costs yield solutions to the current energy problems. Various scenarios are presented concerning what the technological and environmental options for a sustainable energy system for Austria could look like in the long run, showing how sustainable energy can be supplied even with today's technological knowledge and the options available. The scenarios developed include an evaluation of the economic costs and ecological impacts. The results are not only applicable to Austria but also demonstrate feasible and cost-efficient ways towards a sustainable future.
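
A CO2 abatement cost curve of the kind referred to above is assembled by sorting measures by their cost per tonne and accumulating their abatement potential, as in the sketch below; the measures, costs and volumes are invented, not the study's Austrian data.

    # Marginal abatement cost curve sketch (all figures invented for illustration).
    measures = [
        # (name, abatement cost in EUR per tonne CO2, annual abatement in Mt CO2)
        ("building insulation",      -45.0, 1.2),
        ("heat pump roll-out",       -10.0, 0.8),
        ("wind power expansion",      15.0, 2.5),
        ("biomass district heating",  35.0, 1.0),
        ("industrial process change", 80.0, 0.6),
    ]

    cumulative = 0.0
    for name, cost, abatement in sorted(measures, key=lambda m: m[1]):   # cheapest first
        cumulative += abatement
        print("%-26s %7.1f EUR/t   cumulative %4.1f Mt CO2/yr" % (name, cost, cumulative))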

Keywords: Cost of CO2 Abatement, Energy Economics, Energy Efficiency, Renewable Energy Technologies, Sustainable Energy and Development.

496 A Review on Factors Influencing Implementation of Secure Software Development Practices

Authors: Sri Lakshmi Kanniah, Mohd Naz’ri Mahrin

Abstract:

More and more businesses and services depend on software to run their daily operations and business services. At the same time, cyber-attacks are becoming more covert and sophisticated, posing threats to software. Vulnerabilities exist in software due to the lack of security practices during the phases of software development. Implementation of secure software development practices can improve resistance to attacks. Many methods, models and standards for secure software development have been developed; however, despite these efforts, they still run into difficulties in their deployment, and the processes are not institutionalized. There is a set of factors that influence the successful deployment of secure software development processes. In this study, the methodology and results of a systematic literature review of factors influencing the implementation of secure software development practices are described. A total of 44 primary studies were analysed in the systematic review, and a list of twenty factors was identified. Some of the factors that affect the implementation of secure software development practices are: involvement of a security expert, integration between the security and development teams, developers' skill and expertise, development time, and communication between stakeholders. The factors were further classified into four categories: institutional context, people and action, project content, and system development process. The results show that it is important to take organizational, technical and people issues into account in order to implement secure software development initiatives.

Keywords: Secure software development, software development, software security, systematic literature review.
