Search results for: facility layout problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3850


2590 Requirements Driven Multiple View Paradigm for Developing Security Architecture

Authors: K. Chandra Sekaran

Abstract:

This paper describes a paradigmatic approach to developing the architecture of secure systems by describing the requirements from four different points of view: that of the owner, the administrator, the user, and the network. Deriving requirements and developing architecture imply the joint elicitation and description of the problem and of the structure of the solution. The viewpoints proposed in this paper are those of the parties we consider major contributors to the design, implementation, usage, and maintenance of secure systems. The dramatic growth of Internet technology and of the applications deployed on the World Wide Web has led to a situation where security has become a very important concern in the development of systems. Many security approaches are currently being used in organizations; in spite of the widespread use of many different security solutions, security remains a problem. It is argued that the approach described in this paper for the development of secure architecture is practical in all respects. The models representing these multiple points of view are termed the requirements model (views of the owner and administrator) and the operations model (views of the user and network). In this paper, this multiple view paradigm is explained by first describing the specific requirements and/or characteristics of secure systems (particularly in the domain of networks) and then the secure architecture / system development methodology.

Keywords: Multiple view paradigms, requirements model, operations model, secure system, owner, administrator, user, network.

2589 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — In the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
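
The statistical core of STRIM, as described above, is a test of whether a candidate if-then rule concentrates a decision class significantly more often than that class's prior rate would predict. The sketch below is not the authors' code; the toy decision table, the attribute names a1/a2/d, and the one-proportion z-test form are illustrative assumptions.

```python
# Minimal sketch of a rule-significance check in the spirit of STRIM (not the
# authors' implementation). Attribute names and the toy table are invented.
import numpy as np

def rule_z_score(rows, condition, decision_value):
    """z-statistic for 'if condition(row) then decision == decision_value',
    measured against the prior rate of that class; larger z => more significant."""
    decisions = np.array([r["d"] for r in rows])
    p0 = np.mean(decisions == decision_value)              # class prior over the whole table
    covered = np.array([condition(r) for r in rows])
    n = covered.sum()
    if n == 0 or p0 in (0.0, 1.0):
        return 0.0
    p_hat = np.mean(decisions[covered] == decision_value)  # class rate where the rule fires
    return (p_hat - p0) * np.sqrt(n) / np.sqrt(p0 * (1.0 - p0))

# Toy decision table: condition attributes a1, a2 and decision d.
rows = [{"a1": 1, "a2": 2, "d": 1}, {"a1": 1, "a2": 3, "d": 1},
        {"a1": 2, "a2": 2, "d": 0}, {"a1": 1, "a2": 1, "d": 1},
        {"a1": 2, "a2": 1, "d": 0}, {"a1": 1, "a2": 2, "d": 1}]
z = rule_z_score(rows, lambda r: r["a1"] == 1, decision_value=1)
print(f"z-score for 'if a1=1 then d=1': {z:.2f}")  # compare against, e.g., 1.96
```

A rule is kept only if its score exceeds the chosen significance threshold, which is exactly where the critical dataset size studied in the paper matters: with too few rows, even true rules cannot reach significance.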

Keywords: Rule induction, decision table, missing data, noise.

2588 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

Numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control system using a shock-induced self-impinging supersonic secondary double jet. Parametric analytical studies have been carried out at different secondary injection locations to identify the most unsymmetrical distribution of the main gas flow due to shock waves, which produces a desirable side force more effectively for vectoring. The results from the parametric studies of the case on hand reveal that the shock-induced self-impinging supersonic secondary double jet is more efficient at certain locations in the divergent region of a CD nozzle than a supersonic single jet with the same mass flow rate. We observed that the best axial location of the self-impinging supersonic secondary double jet nozzle with a given jet interaction angle, built into a CD nozzle having an area ratio of 1.797, is 0.991 times the primary nozzle throat diameter from the throat location. We also observed that flexible steering is possible after invoking an ON/OFF facility for the secondary nozzles to meet onboard mission requirements. Through our case studies we conclude that the supersonic self-impinging secondary double jet, at a predesigned jet interaction angle and location, can provide more flexible steering options with 8.81% higher thrust vectoring efficiency than the conventional supersonic single secondary jet, without compromising the payload capability of any supersonic aerospace vehicle.

Keywords: Fluidic thrust vectoring, rocket steering, self-impinging secondary supersonic jet, TVC in aerospace vehicles.

2587 Offset Dependent Uniform Delay Mathematical Optimization Model for Signalized Traffic Network Using Differential Evolution Algorithm

Authors: Tahseen Al-Shaikhli, Halim Ceylan, Jonathan Weaver, Osman Nuri Çelik, Onur Gungor Sahin

Abstract:

An offset-dependent uniform delay mathematical optimization problem is derived as the main objective of this study and solved using a differential evolution algorithm. The further objectives are to control the coordination problem, which mainly depends on offset selection, and to estimate the uniform delay based on the offset choice at each signalized intersection. Arrival and departure patterns are assumed to follow a periodic sinusoidal function. The cycle time is optimized at the entry links, and the optimized value is used in the non-entry links as a common cycle time. The offset optimization algorithm is used to calculate the uniform delay at each link. The results are illustrated by a case study and compared with the canonical uniform delay model derived by Webster and with the Highway Capacity Manual's model. The findings show that the derived model reduces the total uniform delay to almost half of that of conventional models, that the mathematical objective function is robust, and that the algorithm converges quickly.
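
As a rough illustration of the optimization machinery only, the sketch below runs SciPy's differential evolution over signal offsets against a simplified sinusoidal delay surrogate; the cycle time, link travel times, and the surrogate objective are invented stand-ins, not the paper's derived uniform delay model.

```python
# Illustrative only: optimize offsets with SciPy's differential evolution against a
# simplified delay surrogate. The expression in total_delay() is NOT the paper's
# derived model; it merely stands in for "total uniform delay as a function of offsets".
import numpy as np
from scipy.optimize import differential_evolution

CYCLE = 90.0                              # common cycle time in seconds (assumed)
TRAVEL = np.array([22.0, 35.0, 28.0])     # link travel times between intersections (assumed)

def total_delay(offsets):
    """Surrogate objective: penalize arrival-phase mismatch on each link."""
    delay = 0.0
    for i, t in enumerate(TRAVEL):
        # phase error between upstream green (offset i) and downstream green (offset i+1)
        phase = (offsets[i] + t - offsets[i + 1]) % CYCLE
        delay += 1.0 - np.cos(2.0 * np.pi * phase / CYCLE)   # sinusoidal delay proxy
    return delay

bounds = [(0.0, CYCLE)] * (len(TRAVEL) + 1)   # one offset variable per intersection
result = differential_evolution(total_delay, bounds, seed=1, tol=1e-8)
print("offsets (s):", np.round(result.x, 1), " surrogate delay:", round(result.fun, 4))
```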

Keywords: Area traffic control, differential evolution, offset variable, sinusoidal periodic function, traffic flow, uniform delay.

2586 Application of Systems Engineering Tools and Methods to Improve Healthcare Delivery Inside the Emergency Department of a Mid-Size Hospital

Authors: Mohamed Elshal, Hazim El-Mounayri, Omar El-Mounayri

Abstract:

The emergency department (ED) is a complex system of interacting entities: patients, human resources, software and hardware systems, interfaces, and other systems. This paper presents research on implementing a detailed Systems Engineering (SE) approach in a mid-size hospital in central Indiana. The methodology is applied by "The Initiative for Product Lifecycle Innovation (IPLI)" institution at Indiana University to study and solve the crowding problem with the aim of increasing patient throughput and enhancing the treatment experience; therefore, the nature of the crowding problem needs to be investigated together with all the other problems that lead to it. The SE methods presented are workflow analysis and systems modeling, where SE tools such as Microsoft Visio are used to construct a group of system-level diagrams that describe: the patient's workflow, documentation and communication flow, data systems, human resources workflow and requirements, the leadership involved, and the integration between the different ED systems. The ultimate goal is to manage the process through the implementation of an executable model using commercial software tools, which will identify bottlenecks, improve documentation flow, and help make the process faster.

Keywords: Systems modeling, ED operation, workflow modeling, systems analysis.

2585 Corrosion Mitigation in Gas Facilities Piping through the Use of Fusion Bond Epoxy Coated Pipes and Corrosion Resistant Alloy Girth Welds

Authors: Saad Alkhaldi, Fadi Ghammas, Tariq Alghamdi, Stefano Alexandirs

Abstract:

The operating conditions and corrosive nature of the process fluid in the Haradh and Hawiyah areas subject facility piping to undesirable corrosion phenomena. Therefore, production headers inside remote headers have been internally cladded with a high-alloy material to mitigate the corrosion damage mechanism. Corrosion mitigation in the jump-over lines, constructed between the existing flowlines and the newly constructed facilities to provide operational flexibility, is proposed. This corrosion mitigation system includes the application of fusion bond epoxy (FBE) coating on the internal surface of the pipe and the deposition of corrosion-resistant alloy (CRA) weld layers at pipe and fitting ends to protect the carbon steel material. In addition, high-alloy CRA weld material is used to deposit the girth weld between the 90-degree elbows and the mating internally coated segments. A rigorous testing and qualification protocol was established prior to actual adoption in the Haradh and Hawiyah Field Gas Compression Program, currently being executed by Saudi Aramco. The proposed mitigation system, which applies the cladding at the ends of the internally FBE coated pipes/elbows, will resolve field joint coating challenges, eliminate the use of approximately 1700 breakout flanges, and prevent potential hydrocarbon leaks.

Keywords: Corrosion, FBE coated sour service, cost savings.

2584 3D Dense Correspondence for 3D Dense Morphable Face Shape Model

Authors: Tae in Seol, Sun-Tae Chung, Seongwon Cho

Abstract:

A realistic 3D face model is desired in various applications such as face recognition, games, avatars, and animations. Construction of a 3D face model consists of 1) building a face shape model and 2) rendering the face shape model; thus, building a realistic 3D face shape model is an essential step towards a realistic 3D face model. Recently, the 3D morphable model has been successfully introduced to deal with the variety of human face shapes. The 3D dense correspondence problem must first be resolved in order to construct a realistic 3D dense morphable face shape model. Several approaches to the 3D dense correspondence problem in 3D face modeling have been proposed previously; among them, optical flow based algorithms and TPS (Thin Plate Spline) based algorithms are representative. Optical flow based algorithms require texture information of faces, which is sensitive to variation of illumination. In the TPS based algorithms proposed so far, the TPS process is performed on a 2D projection of the 3D face data in cylindrical coordinates, not directly on the 3D face data, and thus errors due to distortion of the data during the 2D TPS process may be inevitable. In this paper, we propose a new 3D dense correspondence algorithm for 3D dense morphable face shape modeling. The proposed algorithm does not need texture information and applies TPS directly to the 3D face data. Through construction procedures, it is observed that the proposed algorithm constructs a realistic 3D morphable face model reliably and quickly.
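
As a rough, generic illustration of applying TPS directly to 3D points (not the authors' algorithm), SciPy's RBFInterpolator with a thin-plate-spline kernel can fit a 3D-to-3D warp on corresponding landmarks and then deform a full point cloud; the landmark and point-cloud data below are random placeholders.

```python
# Rough illustration only: a thin-plate-spline warp applied directly to 3D points,
# mapping source-face landmarks onto reference-face landmarks and then deforming
# the whole source point cloud. Data are random stand-ins for real face scans.
import numpy as np
from scipy.interpolate import RBFInterpolator   # requires SciPy >= 1.7

rng = np.random.default_rng(0)
src_landmarks = rng.normal(size=(30, 3))                  # corresponding landmark pairs
ref_landmarks = src_landmarks + 0.05 * rng.normal(size=(30, 3))

# One TPS map R^3 -> R^3 fitted on the landmark correspondences.
tps = RBFInterpolator(src_landmarks, ref_landmarks,
                      kernel="thin_plate_spline", smoothing=0.0)

src_cloud = rng.normal(size=(5000, 3))                    # full source-face point cloud
warped_cloud = tps(src_cloud)                             # dense correspondence estimate
print(warped_cloud.shape)                                 # (5000, 3)
```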

Keywords: 3D Dense Correspondence, 3D Morphable Face Shape Model, 3D Face Modeling.

2583 Analysis of Urban Slum: Case Study of Korail Slum, Dhaka

Authors: Sanjida Ahmed Sinthia

Abstract:

Bangladesh is one of the poorest countries in the world. There are several reasons for this insufficiency, and uncontrolled population growth is one of the prime ones. Others include low economic progress, imbalanced resource management, unemployment and underemployment, urban migration, and natural catastrophes. As a result, the share of the urban poor is increasing inevitably in every urban sphere in Bangladesh, and Dhaka is the most affected city. Besides, there is a scarcity of urban land, housing, urban infrastructure, and amenities, which puts pressure on cities and mostly encroaches on open space and wetlands, causing environmental degradation. The government has little or no control over this due to poor policy and management, political pressure, and a lack of resource management. Unfortunately, over-centralization and bureaucracy create unnecessary delays and interruptions in any government initiative. There is also no coordination between the government and private sector developers to solve the problems of the urban poor. To understand the problems of this huge population, this paper analyzes one of the single largest slum areas in Dhaka, Korail Slum. The study focuses on socio-demographic analysis, morphological pattern, and the role of the different actors responsible for improvements in the area, and recommends some possible steps for determining the potential outcomes.

Keywords: Demographic analysis, environmental degradation, physical condition, government policy, housing and land management policy.

2582 Multivariable Control of Smart Timoshenko Beam Structures Using POF Technique

Authors: T.C. Manjunath, B. Bandyopadhyay

Abstract:

Active Vibration Control (AVC) is an important problem in structures. One way to tackle it is to make the structure smart, adaptive, and self-controlling. The objective of active vibration control is to reduce the vibration of a system by automatic modification of the system's structural response. This paper presents the modeling and design of a Periodic Output Feedback (POF) control technique for the active vibration control of a flexible Timoshenko cantilever beam in a multivariable case with 2 inputs and 2 outputs, retaining the first 2 dominant vibratory modes and using the smart structure concept. The entire structure is modeled in state space form using piezoelectric theory, Timoshenko beam theory, the Finite Element Method (FEM), and state space techniques. Simulations are performed in MATLAB. The effect of placing the sensor/actuator at 2 finite element locations along the length of the beam is observed. The open loop responses, closed loop responses, and tip displacements with and without the controller are obtained, and the performance of the smart system is evaluated for active vibration control.

Keywords: Smart structure, Timoshenko theory, Euler-Bernoulli theory, Periodic output feedback control, Finite Element Method, State space model, Vibration control, Multivariable system, Linear Matrix Inequality

2581 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

A method for the speciation, preconcentration, and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on complexation of Fe(II) with 1,10-phenanthroline (OP) as the complexing reagent, immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. The adsorbents were then isolated from the complicated matrix easily with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operation procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 for Fe(II) was obtained with the modified nano magnetite. The detection limit and linear range of this method for iron were 1.0 and 9.0 - 175 ng.mL−1, respectively, and the relative standard deviation for five replicate determinations of 30.00 ng.mL−1 Fe2+ was 2.3%.

Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample.

2580 A New Fast Skin Color Detection Technique

Authors: Tarek M. Mahmoud

Abstract:

Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, and people retrieval in databases and on the Internet. The major problem of such skin color detection algorithms is that they are time consuming and hence cannot be applied in a real time system. To overcome this problem, we introduce a new fast technique for skin detection which can be applied in a real time system. In this technique, instead of testing every image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The reason for the skipping process is the high probability that neighbors of skin color pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the detection process. Since many fast detection techniques are based on image resizing, we apply our proposed pixel skipping technique together with image resizing to obtain better results. The performance evaluation of the proposed skipping and hybrid techniques in terms of measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
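
A minimal sketch of the pixel-skipping idea is given below; the YCbCr thresholds are common literature values rather than the paper's, and propagating each sampled label to its skipped neighbours is one simple way to realize the skipping step.

```python
# Sketch of pixel skipping for skin detection: classify only every skip-th pixel in
# YCbCr space, then propagate each label to its neighbours. Thresholds are typical
# literature values, not necessarily those used in the paper.
import numpy as np

def fast_skin_mask(ycbcr, skip=4, cb=(77, 127), cr=(133, 173)):
    """ycbcr: HxWx3 uint8 image already converted to YCbCr."""
    h, w, _ = ycbcr.shape
    sampled = ycbcr[::skip, ::skip]                      # test only a sparse grid of pixels
    skin_small = ((sampled[..., 1] >= cb[0]) & (sampled[..., 1] <= cb[1]) &
                  (sampled[..., 2] >= cr[0]) & (sampled[..., 2] <= cr[1]))
    # propagate each sampled label to its skip x skip neighbourhood
    mask = np.repeat(np.repeat(skin_small, skip, axis=0), skip, axis=1)
    return mask[:h, :w]

image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in image
print("skin pixel ratio:", fast_skin_mask(image).mean())
```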

Keywords: Adult image filtering, image resizing, skin color detection, YCbCr color space.

2579 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis

Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov

Abstract:

The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected at the sensors was carried out, taking into account signal decay and the signal delay time between source and detector. Recordings of noises produced by construction tools were used as the time dependence of the noise. Synthetic sensor data were constructed from these recordings, and a model of the propagation of acoustic waves from a point source in three-dimensional space was applied. All sensors and sources are assumed to lie in the same plane. A source localization method is tested based on the signal time delay between two adjacent detectors and plotting of the direction to the source; the noise source's position is then determined from the intersection of the two direction lines. The case of one dominant source and the case of two sources in the presence of several other sources of lower intensity are considered. The number of detectors varies from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two-second duration is considered; the source is located for successive parts of the signal with a duration above 0.04 s, and the final result is obtained by computing the average value.
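
Only the delay-estimation step is sketched below (geometry, direction plotting, and averaging over signal segments are omitted); the sampling rate and the synthetic tool-noise signal are assumptions.

```python
# Minimal sketch of estimating the time delay between two adjacent sensors from the
# peak of their cross-correlation; signals and sampling rate are synthetic.
import numpy as np

FS = 8000                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)
source = rng.normal(size=FS * 2)            # two seconds of tool noise (synthetic)

true_delay = 25                             # samples by which sensor B hears it later
sensor_a = source + 0.05 * rng.normal(size=source.size)
sensor_b = 0.8 * np.roll(source, true_delay) + 0.05 * rng.normal(size=source.size)

corr = np.correlate(sensor_b, sensor_a, mode="full")
lag = np.argmax(corr) - (sensor_a.size - 1)             # positive lag => B hears it later
print("estimated delay:", lag / FS, "s  (true:", true_delay / FS, "s)")
```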

Keywords: Acoustic model, direction of arrival, inverse source problem, sound localization, urban noises.

2578 Geometrically Non-Linear Axisymmetric Free Vibrations of Thin Isotropic Annular Plates

Authors: Boutahar Lhoucine, El Bikri Khalid, Benamar Rhali

Abstract:

The effects of large vibration amplitudes on the first axisymmetric mode shape of thin isotropic annular plates having both edges clamped are examined in this paper. The theoretical model, based on Hamilton's principle and spectral analysis using a basis of Bessel functions, is adapted here to the case of annular plates. The model effectively reduces the large amplitude free vibration problem to the solution of a set of non-linear algebraic equations.

The governing non-linear eigenvalue problem has been linearised in the neighborhood of each resonance and a new one-step iterative technique has been proposed as a simple alternative method of solution to determine the basic function contributions to the non-linear mode shape considered.

Numerical results are given for the first non-linear mode shape for a wide range of vibration amplitudes. For each value of the vibration amplitude considered, the corresponding contributions of the basic functions defining the non-linear transverse displacement function, the associated non-linear frequency, and the membrane and bending stress distributions are given. By comparison with the iterative method of solution, it was found that the present procedure is efficient for a wide range of vibration amplitudes, up to at least 1.8 times the plate thickness.

Keywords: Non-linear vibrations, Annular plates, Large vibration amplitudes.

2577 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results

Authors: C. Villegas-Quezada, J. Climent

Abstract:

Several works on facial recognition have dealt with methods which identify isolated characteristics of the face or with templates which encompass several regions of it. In this paper a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy, and several other pixel characteristics of the image, generating a set of "p" variables. The multivariate data set is approximated with polynomials of different degrees minimizing the data fitness error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), the problem of dimensionality inherent to higher degree polynomial approximations is circumvented. The GA yields the degree and the values of the set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and applying an AdaBoost classifier to compare F's polynomials to each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
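
Only the final classification stage is sketched below, and only loosely: the minimax polynomial fitting and the GA are not reproduced, and scikit-learn's AdaBoostClassifier is applied to made-up polynomial-coefficient feature vectors simply to show where a boosted classifier sits in such a pipeline.

```python
# Sketch of the classification stage only (the minimax fit and GA are not reproduced):
# an AdaBoost classifier trained on pre-computed polynomial-coefficient vectors,
# several resampled vectors per enrolled face. All data here are random placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(7)
n_faces, n_coeffs = 40, 12
X_train = rng.normal(size=(n_faces * 5, n_coeffs))     # 5 resampled coefficient vectors per face
y_train = np.repeat(np.arange(n_faces), 5)             # face identity labels

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
query = rng.normal(size=(1, n_coeffs))                 # coefficients of an unseen face image
print("predicted identity:", clf.predict(query)[0])
```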

Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.

2576 Effects of Signaling on the Performance of Directed Diffusion Routing Protocol

Authors: Apidet Booranawong

Abstract:

In the original directed diffusion routing protocol, a sink requests sensing data from a source node by flooding interest messages to the network. Then, the source finds the sink by sending exploratory data messages to all nodes that generate incoming interest messages. This protocol signaling can cause heavy traffic in the network, interference of the radio signal, collisions, high energy consumption of sensor nodes, etc. Motivated by this research problem, this paper investigates the effect of sending interest and exploratory data messages on the performance of the directed diffusion routing protocol. We demonstrate the research problem that arises from employing the directed diffusion protocol in mobile wireless environments. For this purpose, we perform a set of experiments using NS2 (network simulator 2). The radio propagation models, two-ray ground reflection with and without shadow fading, are included to investigate the effect of signaling. The simulation results show that the numbers of sent and received protocol signaling messages in the case of interest and exploratory data messages are larger than for the other protocol signals, especially with the shadowing model. Additionally, the number of exploratory data messages is the largest in one round of the protocol procedure.

Keywords: Directed diffusion, Flooding, Interest message, Exploratory data message, Radio propagation model.

2575 On a New Nonlinear Sum-difference Inequality with Application

Authors: Kelong Zheng, Shouming Zhong

Abstract:

A new nonlinear sum-difference inequality in two variables, which generalizes some existing results and can be used as a handy tool in the analysis of certain partial difference equations, is discussed. An example showing the boundedness of solutions of a difference value problem is also given.

Keywords: Sum-Difference inequality, Nonlinear, Boundedness.

2574 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model for a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, where a G-equation model and a conserved scalar model express the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we have simulated two previous types of hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, and the source was ignited at the bottom center by a spark. The other is a vented enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side; the test was performed with ignition at the center of the wall opposite the vent, using hydrogen-air mixtures with hydrogen concentrations close to 18 vol.%. The results from the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data are in good agreement with the results of the two previous explosion tests.

Keywords: Deflagration, Large Eddy Simulation, Turbulent combustion, Vented enclosure.

2573 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time

Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma

Abstract:

Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion have imposed a significant time burden around the world, and one significant solution can be the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies, and mobile data services to manage traffic flow, reduce congestion, and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The classification problem for traffic signs needs to be solved, as it is a major step towards building semi-autonomous/autonomous driving systems. This work focuses on implementing an approach to traffic sign classification by developing a Convolutional Neural Network (CNN) classifier using the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than using hand-crafted features, our model addresses the concern of an exploding number of parameters and applies data augmentation methods. Our model achieves an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
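
A minimal Keras sketch of a GTSRB-style CNN classifier is given below; the layer sizes, dropout rates, and the 32x32 input resolution are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative CNN for 43-class GTSRB-style classification (architecture and
# hyperparameters are assumptions, not the paper's exact network).
from tensorflow.keras import layers, models

def build_gtsrb_cnn(num_classes=43):
    model = models.Sequential([
        layers.Input(shape=(32, 32, 3)),            # images assumed resized to 32x32 RGB
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gtsrb_cnn()
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=30, batch_size=64)
```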

Keywords: Multiclass classification, convolutional neural network, OpenCV, data augmentation.

2572 Resource Leveling Optimization in Construction Projects of High Voltage Substations Using Nature-Inspired Intelligent Evolutionary Algorithms

Authors: Dimitrios Ntardas, Alexandros Tzanetos, Georgios Dounias

Abstract:

High Voltage Substations (HVS) are the intermediate step between the production of power and successfully transmitting it to clients, making them one of the most important checkpoints in power grids. Nowadays, as renewable resources and consequently distributed generation are growing fast, the construction of HVS is of high importance both in terms of quality and of completion time, so that new energy producers can quickly and safely integrate into power grids. The resources needed, such as machines and workers, should be carefully allocated so that the construction of an HVS is completed on time, at the lowest possible cost (e.g., without additional costs, not taken into consideration beforehand, arising from project delays), and at the highest quality. In addition, there are milestones and several checkpoints to be precisely met during construction to ensure cost and timeline control and to ensure that the percentage of governmental funding will be granted. The management of such a demanding project is an NP-hard problem that consists of prerequisite constraints and resource limits for each task of the project. In this work, a hybrid meta-heuristic method is implemented to solve this problem. Meta-heuristics have proven to be quite useful when dealing with high-dimensional constrained optimization problems, and hybridizing them boosts their performance.
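
The sketch below is a much-simplified stand-in for meta-heuristic resource leveling (not the authors' hybrid method): a simulated-annealing-style search shifts task start days within a horizon to flatten the daily crew demand. Durations, crew sizes, and the horizon are invented, and precedence constraints are omitted.

```python
# Much-simplified resource-leveling sketch: simulated-annealing-style search over
# task start days that minimizes the peak daily crew demand. Task data are invented
# and precedence constraints are omitted for brevity.
import math, random

tasks = [(4, 3), (6, 2), (3, 5), (5, 4), (2, 6), (7, 2), (4, 3)]  # (duration days, crew)
HORIZON = 20

def resource_peak(starts):
    profile = [0] * HORIZON
    for (dur, crew), s in zip(tasks, starts):
        for d in range(s, min(s + dur, HORIZON)):
            profile[d] += crew
    return max(profile)

random.seed(0)
current = [random.randrange(HORIZON - dur) for dur, _ in tasks]
current_cost = resource_peak(current)
best, best_cost = current[:], current_cost
temp = 5.0
for _ in range(5000):
    cand = current[:]
    i = random.randrange(len(tasks))
    cand[i] = random.randrange(HORIZON - tasks[i][0])      # shift one task's start day
    cost = resource_peak(cand)
    accept = cost <= current_cost or random.random() < math.exp((current_cost - cost) / temp)
    if accept:
        current, current_cost = cand, cost
        if cost < best_cost:
            best, best_cost = cand[:], cost
    temp *= 0.999                                          # cooling schedule

print("start days:", best, " peak crew size:", best_cost)
```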

Keywords: High voltage substations, nature-inspired algorithms, project management, meta-heuristics.

2571 A Two-Step, Temperature-Staged Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal remains an abundant resource. The aim of this work was to produce a high value hydrocarbon liquid product using a Direct Coal Liquefaction (DCL) process at relatively mild operating conditions. Via hydrogenation, the temperature-staged approach was investigated in a dual-reactor lab-scale pilot plant facility. The objectives included maximising thermal dissolution of the coal in the presence of tetralin as the hydrogen donor solvent in the first stage, with 2:1 and 3:1 solvent:coal ratios. Subsequently, in the second stage, hydrogen saturation and, in particular, hydrodesulphurization (HDS) performance was assessed. Two commercial hydrotreating catalysts were investigated, viz. Nickel-Molybdenum (Ni-Mo) and Cobalt-Molybdenum (Co-Mo). GC-MS results identified 77 compounds and various functional groups present in the first and second stage liquid products. In the first stage, 3:1 ratios were favoured, as were liquid product yields catalysed by magnetite. The second stage product distribution showed an increase in the BTX (benzene, toluene, xylene) quality of the liquid product and in branched chain alkanes, and a reduction in the sulphur concentration. As an HDS performer, and in selectivity to the production of long and branched chain alkanes, Ni-Mo outperformed Co-Mo; Co-Mo is selective to a higher concentration of cyclohexane. For 16 days on stream each, Ni-Mo had a higher activity than Co-Mo. The potential to cover the demand for low-sulphur crude, diesel, and solvents from the production of high value hydrocarbon liquid in this process is thus demonstrated.

Keywords: Catalyst, coal, liquefaction, temperature-staged.

2570 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. The automation of profit generation in the stock market is possible using DRL by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, to produce fully autonomous systems capable of interacting with their environment and making optimal decisions through trial and error. This work presents a DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and achieve a 2.68 Sharpe ratio on the test dataset. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of DRL in financial markets over other types of machine learning and proves its credibility and its advantages for strategic decision-making.
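
A minimal, purely illustrative environment skeleton is sketched below to show the kind of observation/reward interface a TD3 agent would interact with: observations concatenate recent prices, a toy moving-average indicator, and the current position, and rewards are changes in portfolio value net of proportional transaction costs. None of the numbers or design choices are the paper's; a DRL library's TD3 implementation would replace the random policy in the loop.

```python
# Toy single-asset trading environment (not the paper's implementation). A TD3 agent
# would call reset()/step() in the usual loop; a random policy is used as a placeholder.
import numpy as np

class TradingEnv:
    def __init__(self, prices, cost_rate=0.001, window=10):
        self.prices = np.asarray(prices, dtype=float)
        self.cost_rate, self.window = cost_rate, window

    def reset(self):
        self.t, self.position, self.cash = self.window, 0.0, 1.0
        self.value = self._value()
        return self._obs()

    def _value(self):
        return self.cash + self.position * self.prices[self.t]

    def _obs(self):
        recent = self.prices[self.t - self.window:self.t + 1]
        sma = recent.mean()                                  # toy technical indicator
        return np.concatenate([recent / recent[-1], [sma / recent[-1], self.position]])

    def step(self, action):
        target = float(np.clip(action, -1.0, 1.0))           # target position in [-1, 1]
        trade = target - self.position
        price = self.prices[self.t]
        self.cash -= trade * price + self.cost_rate * abs(trade) * price  # proportional cost
        self.position = target
        self.t += 1
        new_value = self._value()
        reward = new_value - self.value                      # change in net portfolio value
        self.value = new_value
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done

env = TradingEnv(100 + np.cumsum(np.random.default_rng(0).normal(size=500)))
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, r, done = env.step(np.random.uniform(-1, 1))        # random policy placeholder
    total += r
print("random-policy cumulative reward:", round(total, 4))
```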

Keywords: Autonomous agent, deep reinforcement learning, MDP, sentiment analysis, stock market, technical indicators, twin delayed deep deterministic policy gradient.

2569 Note on the Necessity of the Patch Test

Authors: Rado Flajs, Miran Saje

Abstract:

We present a simple nonconforming approximation of the linear two-point boundary value problem which violates the patch test requirements. Nevertheless, the solutions obtained from this type of approximation converge to the exact solution.

Keywords: Generalized patch test, Irons' patch test, nonconforming finite element, convergence.

2568 Processing the Medical Sensors Signals Using Fuzzy Inference System

Authors: S. Bouharati, I. Bouharati, C. Benzidane, F. Alleg, M. Belmahdi

Abstract:

Sensors possess several properties of physical measures. Whether they are devices that convert a sensed signal into an electrical signal, chemical sensors, or biosensors, all these sensors can be considered an interface between the physical quantity and the electrical equipment. The problem is the analysis of the multitude of saved settings as input variables, which do not all have the same level of influence on the outputs. Identifying the most sensitive parameters can guide users in gathering information in the field and in the process of model calibration and sensitivity analysis for the effect of each change made, but the mathematical models used for such processing become very complex. In this paper, a fuzzy rule-based system is proposed as a solution to this problem. The system collects the available signal information from the sensors and allows the study of the influence of the various factors that take part in the decision system. Since its inception, fuzzy set theory has been regarded as a formalism suitable for dealing with the imprecision intrinsic to many problems; at the same time, fuzzy sets allow the use of symbolic models. In this study, an example was applied for resolving a variety of physiological parameters that define the human health state, as an aid to medical diagnosis. The inputs are signals expressing cardiovascular system parameters, blood pressure, and respiratory system parameters; once the system is built, it is able to predict the state of the patient for any input values.
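
A tiny Mamdani-style sketch of such a fuzzy inference step is given below; the membership functions, the two rules, and the variable names (heart rate, systolic blood pressure, risk) are invented for illustration and are not the authors' rule base.

```python
# Tiny Mamdani-style fuzzy inference sketch: two sensor inputs are fuzzified, two
# rules fire, and the output "risk" is defuzzified by centroid. Membership functions,
# rules, and variable names are invented for illustration.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

risk_axis = np.linspace(0.0, 1.0, 101)
low_risk = tri(risk_axis, -0.5, 0.0, 0.5)
high_risk = tri(risk_axis, 0.5, 1.0, 1.5)

def infer(heart_rate, systolic_bp):
    hr_high = tri(heart_rate, 90, 130, 170)        # degree that heart rate is "high"
    bp_high = tri(systolic_bp, 130, 170, 210)      # degree that blood pressure is "high"
    fire_high = min(hr_high, bp_high)              # Rule 1: IF hr high AND bp high THEN risk high
    fire_low = 1.0 - fire_high                     # Rule 2: complement rule -> risk low
    aggregated = np.maximum(np.minimum(high_risk, fire_high),
                            np.minimum(low_risk, fire_low))
    return np.sum(aggregated * risk_axis) / np.sum(aggregated)   # centroid defuzzification

print("estimated risk:", round(float(infer(heart_rate=125, systolic_bp=160)), 3))
```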

Keywords: Sensors, sensitivity, fuzzy logic, analysis, physiological parameters, medical diagnosis.

2567 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry

Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri

Abstract:

This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the problem of input-output imbalance between consecutive stations, with the aim of minimizing machine idle time at each stage. To this end, a linear Goal Programming (GP) model is developed, in which four main categories of constraints are considered: the number of runs per machine, machine sequences, acceptable machine inventories at the end of each period, and the necessity of fulfilling customers' orders. The model is formulated based upon real data obtained from IKO TAK Company, an important supplier of wire and cable for the oil and gas and automotive industries in Iran. By solving the model in GAMS software, the optimal number of runs, the end-of-period inventories, and the minimum possible idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model, decreasing the lead time of end-product delivery to customers by 20%. Accordingly, the developed model could easily be applied in wire and cable companies for optimal production planning to reduce machine idle time across manufacturing stages.
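
A toy goal-programming sketch in the same spirit is given below, written with PuLP rather than GAMS; the stage names, run rates, run limits, and demand are invented, and the deviation variables absorb the input-output imbalance between consecutive stations that the model penalizes.

```python
# Toy goal-programming sketch (invented data, not the company's): deviation variables
# absorb the gap between the output of a stage and the intake of the next stage, and
# their sum is minimized, mirroring the idea of balancing consecutive stations.
from pulp import LpProblem, LpMinimize, LpVariable, LpStatus, lpSum, value

stages = ["drawing", "stranding", "insulating"]
rate = {"drawing": 40.0, "stranding": 25.0, "insulating": 30.0}   # tons per run (assumed)
max_runs = {"drawing": 10, "stranding": 14, "insulating": 12}
demand = 300.0                                                    # tons ordered this period

prob = LpProblem("wire_cable_goal_program", LpMinimize)
runs = {s: LpVariable(f"runs_{s}", 0, max_runs[s], cat="Integer") for s in stages}
d_plus = {s: LpVariable(f"over_{s}", lowBound=0) for s in stages}
d_minus = {s: LpVariable(f"under_{s}", lowBound=0) for s in stages}

# Goals: output of each stage should match the intake of the next stage...
for a, b in zip(stages, stages[1:]):
    prob += rate[a] * runs[a] - rate[b] * runs[b] == d_plus[a] - d_minus[a]
# ...and the final stage should meet the customer demand.
last = stages[-1]
prob += rate[last] * runs[last] + d_minus[last] - d_plus[last] == demand

prob += lpSum(d_plus[s] + d_minus[s] for s in stages)             # minimize total deviation
prob.solve()
print(LpStatus[prob.status], {s: int(value(runs[s])) for s in stages})
```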

Keywords: Serial manufacturing process, production planning, wire and cable industry, goal programming approach.

2566 The Game of Col on Complete K-ary Trees

Authors: Alessandro Cincotti, Timothee Bossart

Abstract:

Col is a classic combinatorial game played on graphs, and solving a general instance is a PSPACE-complete problem. However, winning strategies can be found for some specific graph instances. In this paper, the solution of Col on complete k-ary trees is presented.

Keywords: Combinatorial game, complete k-ary tree, map-coloring game.

2565 The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization

Authors: B. Marasović, S. Pivac, S. V. Vukasović

Abstract:

Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with modern portfolio theory, maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing portfolio must be included in any realistic analysis. In this paper, rebalancing an investment portfolio in the presence of transaction costs on the Croatian capital market is analyzed. The model applied in the paper is an extension of the standard portfolio mean-variance optimization model in which transaction costs are incurred to rebalance an investment portfolio. This model allows different costs for different securities, and different costs for buying and selling. In order to find an efficient portfolio using this model, first the solution of a quadratic programming problem of similar size to the Markowitz model, and then the solution of a linear programming problem, have to be found. Furthermore, the impact of transaction costs on the efficient frontier is investigated. Moreover, it is shown that the global minimum variance portfolio on the efficient frontier always has the same level of risk regardless of the amount of transaction costs. Although the position of the efficient frontier depends on both the amount of transaction costs and the initial portfolio, it can be concluded that the extreme right portfolio on the efficient frontier always contains only one stock, the one with the highest expected return and the highest risk.
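
The sketch below illustrates the model class only: a one-shot convex mean-variance rebalancing problem with security-specific buying and selling cost rates, written with CVXPY on random data. It is not the Croatian-market study and not the paper's exact QP-then-LP solution procedure.

```python
# Hedged sketch of mean-variance rebalancing with buy/sell transaction costs
# (random data; a single convex program rather than the paper's QP-then-LP steps).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 6
mu = rng.uniform(0.02, 0.12, n)                      # expected returns (invented)
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 0.01 * np.eye(n)               # positive-definite covariance (invented)
w0 = np.full(n, 1.0 / n)                             # current portfolio weights
c_buy = rng.uniform(0.002, 0.010, n)                 # per-security buying cost rates
c_sell = rng.uniform(0.003, 0.012, n)                # per-security selling cost rates
target_return = 0.9 * float(mu.max())                # demo target, always attainable here

w = cp.Variable(n)
buy = cp.Variable(n, nonneg=True)
sell = cp.Variable(n, nonneg=True)
costs = c_buy @ buy + c_sell @ sell
constraints = [w == w0 + buy - sell,                 # rebalancing identity
               cp.sum(w) + costs == 1,               # costs are paid out of the budget
               mu @ w >= target_return,
               w >= 0]
prob = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma)), constraints)
prob.solve()
print("risk:", round(prob.value, 5), " weights:", np.round(w.value, 3))
```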

Keywords: Croatian capital market, Fractional quadratic programming, Markowitz model, Portfolio optimization, Transaction costs.

2564 Unsteady Poiseuille Flow of an Incompressible Elastico-Viscous Fluid in a Tube of Spherical Cross Section on a Porous Boundary

Authors: Sanjay Baburao Kulkarni

Abstract:

An exact solution for the unsteady flow of an elastico-viscous fluid through a porous medium in a tube of spherical cross section under the influence of a constant pressure gradient is obtained in this paper. Initially, the flow is generated by a constant pressure gradient. After the steady state is attained, the pressure gradient is suddenly withdrawn, and the resulting fluid motion in a tube of spherical cross section, taking into account the porosity factor of the bounding surface, is investigated. The problem is solved in two stages: the first stage is a steady motion in the tube under the influence of a constant pressure gradient, and the second stage concerns the unsteady motion. The problem is solved employing the separation of variables technique. The results are expressed in terms of a non-dimensional porosity parameter (K) and an elastico-viscosity parameter (β), which depends on the non-Newtonian coefficient. The flow parameters are found to be identical with the Newtonian case as the elastico-viscosity parameter tends to zero and the porosity tends to infinity. It is seen that the elastico-viscosity parameter and the porosity parameter of the bounding surface have a significant effect on the velocity parameter.

Keywords: Elastico-viscous fluid, Porous media, Second order fluids, Spherical cross-section.

2563 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem; most of them replace the missing values with a fixed value computed from the observed values. In our work, we use a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while, when one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users need to be measured; since there are missing values in the collected data, there is a need for a distance function between incomplete user profiles. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between objects when some of them contain missing values, we integrated it within the framework of a k nearest neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
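
A rough sketch of the general idea as described above (not the authors' exact formulas) is given below: per-coordinate distances use a variance-normalized squared difference when both values are known, and fall back to an expected distance over the observed column values when one is missing; the result feeds a plain kNN vote. The toy data and the constant penalty used when both values are missing are assumptions.

```python
# Rough sketch of a missing-value-aware distance feeding a kNN vote (not the authors'
# exact formulation). NaN marks a missing value; the both-missing penalty is arbitrary.
import numpy as np

def fit_column_stats(X):
    """Per-column variance and the observed (non-NaN) values of that column."""
    return [(np.nanvar(X[:, j]) + 1e-9, X[~np.isnan(X[:, j]), j]) for j in range(X.shape[1])]

def pair_distance(a, b, stats):
    d = 0.0
    for j, (var_j, observed_j) in enumerate(stats):
        if not np.isnan(a[j]) and not np.isnan(b[j]):
            d += (a[j] - b[j]) ** 2 / var_j                  # normalized distance, both known
        else:
            known = b[j] if np.isnan(a[j]) else a[j]
            if np.isnan(known):
                d += 2.0                                     # both missing: fixed penalty (assumed)
            else:                                            # expected distance over the column
                d += np.mean((known - observed_j) ** 2) / var_j
    return d

def knn_predict(X_train, y_train, x, stats, k=3):
    dists = np.array([pair_distance(x, row, stats) for row in X_train])
    nearest = np.argsort(dists)[:k]
    return np.bincount(y_train[nearest]).argmax()

# Toy data with missing entries; the paper uses the UCI diabetes and breast-cancer sets.
X = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 6.0], [np.nan, 6.2]])
y = np.array([0, 0, 1, 1])
stats = fit_column_stats(X)
print(knn_predict(X, y, np.array([np.nan, 5.9]), stats))     # -> 1
```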

Keywords: Missing values, distance metric, Bhattacharyya distance.

2562 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks

Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz

Abstract:

Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn result in an increase in signalling overhead. To guarantee service continuity, minimize unnecessary handovers, and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision problem using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting policy, which is a combination of entropy and standard deviation weighting. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weights on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, than the existing methods.
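
A compact sketch of the hybrid-weighted TOPSIS idea is given below; the candidate-cell attribute values, the attribute list, and the value of the hybrid control parameter alpha are invented for illustration.

```python
# Compact hybrid-weighted TOPSIS sketch: weights blend entropy-based and standard-
# deviation-based weights via a control parameter alpha, then the usual TOPSIS
# closeness ranks the candidate cells. All attribute values below are invented.
import numpy as np

# rows = candidate cells, columns = attributes, e.g. [RSRP, bandwidth, load, delay]
X = np.array([[ -95.0, 20.0, 0.6, 15.0],
              [ -88.0, 10.0, 0.8, 25.0],
              [-101.0, 40.0, 0.3, 10.0]])
benefit = np.array([True, True, False, False])       # larger-is-better flag per attribute
alpha = 0.5                                          # hybrid weighting control parameter

P = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)   # scale to [0, 1] for entropy/std weights
P_norm = P / (P.sum(0) + 1e-12)
entropy = -np.sum(P_norm * np.log(P_norm + 1e-12), 0) / np.log(len(X))
w_entropy = (1 - entropy) / (1 - entropy).sum()
w_std = P.std(0) / P.std(0).sum()
w = alpha * w_entropy + (1 - alpha) * w_std          # hybrid weights

R = X / np.sqrt((X ** 2).sum(0))                     # vector-normalized decision matrix
V = R * w
ideal = np.where(benefit, V.max(0), V.min(0))
anti = np.where(benefit, V.min(0), V.max(0))
d_plus = np.sqrt(((V - ideal) ** 2).sum(1))
d_minus = np.sqrt(((V - anti) ** 2).sum(1))
closeness = d_minus / (d_plus + d_minus)
print("ranking of candidate cells (best first):", np.argsort(-closeness))
```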

Keywords: Handover, HetNets, interference, MADM, small cells, TOPSIS, weight.

2561 Safe, Effective, and Cost-Efficient Air Cleaning for Populated Rooms and Entire Buildings Based on the Disinfecting Power of Vaporized Hypochlorous Acid

Authors: D. Boecker, R. Breves, F. Herth, Z. Zhang, C. Bulitta

Abstract:

Pathogen-carrying aerosol particles are recognized as important infection carriers, as in the current corona pandemic. This infection route is often underestimated, yet it is the one that has been least systematically countered to date. Transmission indoors is of particular concern, but current indoor safety measures (e.g., distancing, masks, filters) provide only limited protection. Inhalation of hypochlorous acid (HOCl) containing aerosols may become an alternative route to attack incubating microbes in situ and so potentially lead to a reduction of symptoms in already infected individuals. We investigated a facility-wide air-disinfection concept utilizing the potential of vaporized HOCl to become a disinfecting agent for populated indoor atmospheres. Aerosolized bacterial microbes were used as surrogates for a viral contamination, particularly the enveloped coronavirus. For the room air purification tests, we aerosolized bacterial suspensions into lab chambers preloaded with vaporized HOCl solutions. The concentration of free active chlorine in the test chamber atmosphere was determined with a special gas sensor system (Draeger AG, Lübeck, Germany) controlling the amount of vaporized HOCl via an aerosolis® device (oji Europe GmbH, Nauen, Germany). We could confirm the disinfecting power of HOCl in suspensions and determined the high efficacy of vaporized HOCl in disinfecting the atmospheres of populated indoor places at safe and non-irritant levels.

Keywords: Hypochlorous acid, HOCl, indoor air cleaning, infection control, microbial air burden, protective atmosphere.
