Search results for: Document color modeling
1822 A Review on Enhanced Dynamic Clustering in WSN
Authors: M. Sangeetha, A. Sabari, K. Elakkiya
Abstract:
Recent advances in wireless networking have produced a number of dynamic routing protocols for sensor networks. These protocols are commonly compared in terms of their energy efficiency, lifetime, and mobility. However, to the best of our knowledge, no extensive survey of this particular type has been prepared, and a review of cluster-based structures for dynamic wireless networks is needed. In this paper, we examine and compare several aspects and characteristics of some extensively explored hierarchical dynamic clustering protocols in wireless sensor networks. The paper also discusses future research topics and the challenges of dynamic hierarchical clustering in wireless sensor networks.
Keywords: Dynamic cluster, hierarchical clustering, wireless sensor networks.
1821 An Efficient Clustering Technique for Copy-Paste Attack Detection
Authors: N. Chaitawittanun, M. Munlin
Abstract:
Due to the rapid advancement of powerful image processing software, digital images are easy for ordinary people to manipulate and modify. Many digital images are edited for a specific purpose, making them more difficult to distinguish from their originals. We propose a clustering method to detect copy-move image forgery in JPEG, BMP, TIFF, and PNG files. The process starts by reducing the color of the photos. Then, a clustering technique is used to group the measured data by Hausdorff distance. The results show that the proposed method is capable of inspecting image files and correctly identifying forgeries.
Keywords: Image detection, forgery image, copy-paste.
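As an illustration of the kind of block comparison described above, the following Python sketch reduces an image's color depth, splits it into fixed-size blocks, and flags block pairs whose Hausdorff distance falls below a threshold. The block size, point representation, threshold, and synthetic image are assumptions made for illustration; this is not the authors' exact procedure.

import numpy as np

def block_points(block):
    """Represent a block as a set of (row, col, intensity) points."""
    r, c = np.indices(block.shape)
    return np.column_stack([r.ravel(), c.ravel(), block.ravel()]).astype(float)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def copy_move_candidates(gray, size=8, threshold=1.0):
    # Assumed parameters: 8x8 blocks and a unit distance threshold.
    h, w = gray.shape
    pts = [(i, j, block_points(gray[i:i+size, j:j+size]))
           for i in range(0, h - size + 1, size)
           for j in range(0, w - size + 1, size)]
    pairs = []
    for a in range(len(pts)):
        for b in range(a + 1, len(pts)):
            if hausdorff(pts[a][2], pts[b][2]) <= threshold:
                pairs.append(((pts[a][0], pts[a][1]), (pts[b][0], pts[b][1])))
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 8, size=(32, 32)).astype(float)   # coarse color reduction
    img[16:24, 16:24] = img[0:8, 0:8]                        # simulated copy-move region
    print("suspicious block pairs:", copy_move_candidates(img))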
1820 Tension Stiffening Parameter in Composite Concrete Reinforced with Inoxydable Steel: Laboratory and Finite Element Analysis
Abstract:
In the present work, the behavior of inoxydable steel as a reinforcement bar in composite concrete is investigated. The bar-concrete adherence in a reinforced concrete (RC) beam is studied, with focus on the tension stiffening parameter. This study presents an approach that observes this interaction behavior in a bending test instead of the direct tension test reported in many references, so that the loading resembles the actual condition of a structural RC beam. The tension stiffening properties are then applied to a numerical finite element analysis (FEA) to verify their correlation with the laboratory results. The comparison shows good agreement between the two. The experimental setting is able to determine the tension stiffening parameters in an RC beam, and the modeling strategy adopted in ABAQUS can closely represent the actual condition. The tension stiffening model used can represent the interaction properties between inoxydable steel and concrete.
Keywords: Inoxydable steel, finite element modeling, reinforced concrete beam, tension-stiffening.
1819 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the techniques that have been used advantageously for modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. To show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the number of neuron units in the last layer allows the optimal network parameters that fit the mapping data to be found. Moreover, it decreases the training time during the computation process, which avoids the need for computers with high memory usage.
Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.
1818 Modeling and Simulation of Acoustic Link Using Mackenzie Propagation Speed Equation
Authors: Christhu Raj M. R., Rajeev Sukumaran
Abstract:
Underwater acoustic networks have attracted great attention in recent years because of their numerous applications. A high data rate can be achieved by efficiently modeling the physical layer in the network protocol stack. In the acoustic medium, the propagation speed of acoustic waves depends on parameters such as temperature, salinity, density, and depth, and it cannot be modeled adequately using standard empirical formulas such as the Urick and Thorp descriptions. In this paper, we model the acoustic channel using the Mackenzie speed equation and real-time temperature, salinity, and depth data for the Bay of Bengal (Indian coastal region) obtained from the National Institute of Oceanography and Technology. It is found that the acoustic propagation speed varies between 1503 m/s and 1544 m/s as temperature and depth vary. The simulation results show that temperature, salinity, and depth play a major role in acoustic propagation, and that the data rate increases when appropriate data sets are substituted into the simulated model.
Keywords: Underwater acoustics, Mackenzie speed equation, temperature, salinity.
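For reference, a minimal Python sketch of the nine-term Mackenzie (1981) sound speed equation mentioned above. The coefficients are as commonly quoted in the literature and should be checked against the original reference; the sample temperature, salinity, and depth values are illustrative and are not the measured data set used in the paper.

# Sketch: Mackenzie (1981) nine-term sound speed equation (coefficients as
# commonly quoted; verify against the original reference before use).
def mackenzie_speed(T, S, D):
    """Approximate speed of sound in seawater (m/s).
    T: temperature (deg C), S: salinity (ppt), D: depth (m)."""
    return (1448.96
            + 4.591 * T
            - 5.304e-2 * T**2
            + 2.374e-4 * T**3
            + 1.340 * (S - 35.0)
            + 1.630e-2 * D
            + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0)
            - 7.139e-13 * T * D**3)

if __name__ == "__main__":
    # Illustrative (hypothetical) coastal samples, not the paper's data.
    for T, S, D in [(28.0, 33.0, 10.0), (20.0, 34.5, 500.0), (8.0, 34.8, 2000.0)]:
        print(f"T={T} C, S={S} ppt, D={D} m -> c = {mackenzie_speed(T, S, D):.1f} m/s")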
1817 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it gathers more computing capability by increasing the number of processor cores without requiring the program to be modified. Meanwhile, in scientific computing and engineering design, many computation-intensive applications face increasingly large amounts of data, and data-parallel computing is an important way to further improve their performance. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its parallelism and scalability are superior to those of traditional algorithms.
Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.
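To make the decomposition idea concrete, the following Python/NumPy sketch approximates complex objects by sets of simple axis-aligned bounding boxes and evaluates all pairwise box-versus-box tests in a single vectorized (SIMD-style) pass. The box decomposition and the random box data are illustrative assumptions, not the paper's building model.

import numpy as np

def aabb_collisions(boxes_a, boxes_b):
    """boxes_*: (N, 6) arrays of [xmin, ymin, zmin, xmax, ymax, zmax].
    Returns a boolean (N_a, N_b) matrix of overlapping pairs."""
    amin, amax = boxes_a[:, None, :3], boxes_a[:, None, 3:]
    bmin, bmax = boxes_b[None, :, :3], boxes_b[None, :, 3:]
    # Two boxes overlap iff their extents overlap on every axis.
    return np.all((amin <= bmax) & (bmin <= amax), axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lo = rng.uniform(0, 10, size=(200, 3))          # assumed simple-object boxes of object A
    a = np.hstack([lo, lo + rng.uniform(0.1, 1.0, size=(200, 3))])
    lo = rng.uniform(0, 10, size=(300, 3))          # assumed simple-object boxes of object B
    b = np.hstack([lo, lo + rng.uniform(0.1, 1.0, size=(300, 3))])
    hits = aabb_collisions(a, b)                    # 200 x 300 tests in one vectorized pass
    print("colliding pairs:", int(hits.sum()))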
1816 The Application of FSI Techniques in Modeling of Realistic Pulmonary Systems
Authors: Abdurrahim Bolukbasi, Hassan Athari, Dogan Ciloglu
Abstract:
Modeling the lung respiratory system, with its complex anatomy and biophysics, presents several challenges, including tissue-driven flow patterns and wall motion. Because the lungs stretch and recoil with each breath, the pulmonary system does not have static walls and structures. The direct relationship between airflow and tissue motion in the lung structures naturally calls for an FSI simulation technique. Therefore, the development of a coupled FSI computational model is an important step toward the realistic simulation of pulmonary breathing mechanics. A simple but physiologically relevant three-dimensional deep lung geometry is designed, and a fluid-structure interaction (FSI) coupling technique is utilized to simulate the deformation of the lung parenchyma tissue that produces the airflow fields. The behavior of the respiratory tissue system as a complex phenomenon is investigated with respect to respiratory patterns, fluid dynamics, tissue viscoelasticity, and the tidal breathing period.
Keywords: Lung deformation and mechanics, tissue mechanics, viscoelasticity, fluid-structure interactions, ANSYS.
1815 Image Encryption via Mutual Singular Value Decomposition
Authors: Adil Al-Rammahi
Abstract:
Image and document encryption is needed for e-government databases. In this paper, we introduce two image matrices: one public and one secret (original). Each matrix is analyzed using the singular value decomposition, which factors it into three matrices: a row orthogonal basis, a column orthogonal basis, and a spectral diagonal basis. The product of the two row bases is calculated, and similarly the product of the two column bases. Finally, the public image, the row product, and the column product are saved as files. In the decryption stage, the original image is recovered by the mutual method from these three public files.
Keywords: Image cryptography, Singular values decomposition.
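The following NumPy sketch illustrates a mutual-SVD scheme in the spirit of the abstract. Because the abstract does not fully specify how the two key products are formed, the construction below, which folds the secret image's singular values into the row product so that exact recovery works for equal-sized square images, is an assumption made for illustration, not the author's exact method.

import numpy as np

def encrypt(secret, public):
    # Assumed key construction: fold the secret's spectrum into the row product.
    Up, sp, Vpt = np.linalg.svd(public)          # public column basis / spectrum / row basis
    Us, ss, Vst = np.linalg.svd(secret)          # secret column basis / spectrum / row basis
    col_product = Up.T @ Us                      # mixes the two column bases
    row_product = np.diag(ss) @ Vst @ Vpt.T      # mixes the two row bases (plus spectrum)
    return col_product, row_product              # stored alongside the public image

def decrypt(public, col_product, row_product):
    Up, _, Vpt = np.linalg.svd(public)           # recompute the public bases
    return Up @ col_product @ row_product @ Vpt  # recovers the secret image

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    secret = rng.random((64, 64))
    public = rng.random((64, 64))
    C, R = encrypt(secret, public)
    restored = decrypt(public, C, R)
    print("max reconstruction error:", np.abs(restored - secret).max())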
1814 Operational Risk – Scenario Analysis
Authors: Milan Rippel, Petr Teply
Abstract:
This paper focuses on operational risk measurement techniques and on economic capital estimation methods. A sample of operational losses provided by an anonymous Central European bank is analyzed using several approaches: the Loss Distribution Approach and the scenario analysis method. Custom plausible loss events defined in a particular scenario are merged with the original data sample, and their impact on the capital estimates and on the financial institution is evaluated. Two main questions are assessed: What is the most appropriate statistical method to measure and model the operational loss data distribution? And what is the impact of hypothetical plausible events on the financial institution? The g&h distribution was found to be the most suitable one for operational risk modeling. The method based on combining historical loss event modeling with scenario analysis provides reasonable capital estimates and allows the impact of extreme events on banking operations to be measured.
Keywords: Operational risk, scenario analysis, economic capital, loss distribution approach, extreme value theory, stress testing.
1813 Combining the Description Features of UMLRT and CSP+T Specifications Applied to a Complete Design of Real-Time Systems
Authors: Kawtar Benghazi Akhlaki, Manuel I. Capel-Tuñón
Abstract:
UML is a collection of notations for capturing a software system specification. These notations have a specific syntax defined by the Object Management Group (OMG), but many of their constructs have only informal semantics; they are primarily graphical, with textual annotation. The inadequacy of standard UML as a vehicle for the complete specification and implementation of real-time embedded systems has led to a variety of competing and complementary proposals. The Real-Time UML profile (UML-RT), developed and standardized by the OMG, defines a unified framework to express the time, scheduling, and performance aspects of a system. In this paper we present a framework aimed at deriving a complete specification of a real-time system. To do so, we combine two methods: a semi-formal one, UML-RT, which allows the visual modeling of a real-time system, and a formal one, CSP+T, a design language that includes the specification of real-time requirements. To show the applicability of the approach, a correct design of a real-time system with hard real-time constraints is obtained by applying a set of mapping rules.
Keywords: CSP+T, formal software specification, process algebras, real-time systems, unified modeling language.
1812 Development of Manufacturing Simulation Model for Semiconductor Fabrication
Authors: Syahril Ridzuan Ab Rahim, Ibrahim Ahmad, Mohd Azizi Chik, Ahmad Zafir Md. Rejab, and U. Hashim
Abstract:
This research presents the development of a simulation model for WIP management in semiconductor fabrication. Manufacturing simulation modeling is needed for productivity optimization analysis because of the complexity of the process flows, in which more than 35 percent of the processing steps are re-entrant, revisiting the same equipment more than 15 times. Furthermore, semiconductor fabrication has to produce a high product mix, with the total number of processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides this complexity, the high wafer cost, which can affect the company's profit margin once a due date is missed, is another motivation to explore simulation-based analysis. In this paper, the simulation model is developed using the existing commercial software platform AutoSched AP, with customized integration with the Manufacturing Execution System (MES) and the Advanced Productivity Family (APF), which serve as data sources for configuring the model parameters. Parameters such as processing step cycle times, equipment performance, handling time, and operator efficiency are collected through this customization. Once the parameters are validated, a few further customizations are made to ensure that the model executes correctly. The accuracy of the simulation model is validated against the actual daily output of all equipment; the comparison achieved 95 percent accuracy over 30 days. The model was later used to perform various what-if analyses to understand impacts on cycle time and overall output. With this simulation model, a complex manufacturing environment such as a semiconductor fabrication plant (fab) now has an alternative means of validating the impact of any new requirements.
Keywords: Advanced Productivity Family (APF), Complementary Metal Oxide Semiconductor (CMOS), Manufacturing Execution Systems (MES), Work In Progress (WIP).
1811 Scenario Analysis of Indonesia's Energy Security by using a System-Dynamics Approach
Authors: Yudha Prambudia, Masaru Nakano
Abstract:
Due to rapid economic growth, Indonesia's energy needs are increasing rapidly; its primary energy consumption doubled between 2003 and 2007. Indonesia's recent change in status from net oil exporter to net oil importer has heightened concerns over energy security, and oil imports have therefore become the center of attention in the dynamics of Indonesia's energy security. Conventional studies addressing Indonesia's energy security have focused on the energy production sector. This study explores Indonesia's energy security with the energy import sector included, by modeling and simulating Indonesia's energy-related policies using system dynamics. The simulation of Indonesia's energy security in 2020 under a business-as-usual scenario shows that, in terms of the supply-demand ratio, energy security will be very high, but it also implies high dependence on energy imports. The alternative scenario yields lower energy security in terms of the supply-demand ratio and much lower dependence on energy imports, but it also produces lower GDP growth.
Keywords: Energy security, modeling, simulation, system dynamics.
1810 Unsupervised Image Segmentation Based on Fuzzy Connectedness with Scale Space Theory
Authors: Yuanjie Zheng, Jie Yang, Yue Zhou
Abstract:
In this paper, we propose an approach to unsupervised segmentation based on fuzzy connectedness. Valid seeds are first specified by an unsupervised method based on scale space theory. A region is then extracted for each seed with a relative object extraction method of fuzzy connectedness. Afterwards, regions are merged according to the values of an introduced measure between them. Some theorems and propositions are also provided to show that the measure is a reasonable basis for merging. Experimental results of our method on a synthetic image, a color image, and a large number of MR images are reported.
Keywords: Image segmentation, unsupervised image segmentation, fuzzy connectedness, scale space.
1809 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling
Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow
Abstract:
Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across them to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The dynr package provides a suite of fast and accessible functions for estimating and visualizing the results of fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr with the MI procedures available in the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates of a user-specified dynamic systems model via MI, with convergence diagnostic checks. We used dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When the number of iterations was determined from the convergence diagnostics available in dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information when implementing MI procedures.
Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.
1808 Prediction of Dissolved Oxygen in Rivers Using a Wang-Mendel Method – Case Study of Au Sable River
Authors: Mahmoud R. Shaghaghian
Abstract:
The amount of dissolved oxygen in a river has a direct effect on aquatic macroinvertebrates and thereby an indirect influence on the regional ecosystem. In this paper, we predict dissolved oxygen in rivers by employing a simple fuzzy logic modeling technique, the Wang-Mendel method, which uses only previous records to estimate upcoming values. For this purpose, daily and hourly records from eight stations in the Au Sable watershed in Michigan, United States, are used, covering a 12-year and a 50-day period respectively. The calculations indicate that for long-term prediction it is better to increase the input intervals, but for filling in missing data it is advisable to decrease the interval. Finer partitioning of the input and output features has little influence on accuracy but makes the model very time consuming, and increasing the number of inputs acts in the same way as the partitioning. A large amount of training data does not essentially improve accuracy, so an optimum training length should be selected.
Keywords: Dissolved oxygen, Au Sable, fuzzy logic modeling, Wang Mendel.
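For readers unfamiliar with the Wang-Mendel method, the following Python sketch learns fuzzy if-then rules from a one-dimensional series and uses them for one-step-ahead prediction. The triangular partitions, the number of fuzzy regions, and the synthetic dissolved-oxygen series are illustrative assumptions; the Au Sable station records are not used here.

import numpy as np

def tri_memberships(x, centers):
    """Membership of x in triangular fuzzy sets centred at `centers` (sorted)."""
    mu = np.zeros(len(centers))
    for i, c in enumerate(centers):
        left = centers[i - 1] if i > 0 else c
        right = centers[i + 1] if i < len(centers) - 1 else c
        if left < x <= c:
            mu[i] = 1.0 if c == left else (x - left) / (c - left)
        elif c <= x < right:
            mu[i] = 1.0 if c == right else (right - x) / (right - c)
        elif x == c:
            mu[i] = 1.0
    return mu

def wang_mendel_fit(x, y, centers):
    """Build a rule table: for each input region keep the output region of the
    highest-degree data pair (conflict resolution by maximum degree)."""
    rules, degrees = {}, {}
    for xi, yi in zip(x, y):
        mx, my = tri_memberships(xi, centers), tri_memberships(yi, centers)
        i, j = int(mx.argmax()), int(my.argmax())
        d = mx[i] * my[j]
        if d > degrees.get(i, 0.0):
            rules[i], degrees[i] = j, d
    return rules

def wang_mendel_predict(x, rules, centers):
    """Centroid defuzzification over the fired rules."""
    mx = tri_memberships(x, centers)
    num = sum(mx[i] * centers[j] for i, j in rules.items())
    den = sum(mx[i] for i in rules) + 1e-12
    return num / den

if __name__ == "__main__":
    t = np.arange(400)
    do = 8.0 + 1.5 * np.sin(2 * np.pi * t / 50)      # synthetic DO series (mg/L), assumed
    centers = np.linspace(do.min(), do.max(), 7)     # 7 fuzzy regions, assumed
    rules = wang_mendel_fit(do[:-1], do[1:], centers)
    pred = wang_mendel_predict(do[-2], rules, centers)
    print(f"predicted next DO: {pred:.2f} mg/L, actual: {do[-1]:.2f} mg/L")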
1807 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments
Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing
Abstract:
Simulation modeling can be used to solve real-world problems and provides an understanding of complex systems. To develop a simplified model of a process simulation, a suitable experimental design is required that can capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) is proposed to be combined with the existing Central Composite Design (CCD) samples to improve the performance of CCD in generating a second-order model of the system, and the second-order model is then used as the objective function of the optimization problem. The results show that adding LHS samples to CCD samples helps capture the surface curvature characteristics. A suitable number of LHS sample points should be chosen in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
Keywords: Central composite design, CO2 liquefaction, Latin hypercube sampling, simulation-based optimization.
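A minimal Python sketch of the design-augmentation idea: face-centred CCD points are combined with Latin hypercube samples, and a full second-order model is fitted by least squares. The two-factor design, the sample counts, and the toy "simulator" function are assumptions standing in for the CO2 liquefaction simulation described above.

import itertools
import numpy as np

def ccd_face_centred(k):
    """Factorial corners, axial (face) points, and a centre point in [-1, 1]^k."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.vstack([v for i in range(k) for v in (np.eye(k)[i], -np.eye(k)[i])])
    return np.vstack([corners, axial, np.zeros((1, k))])

def latin_hypercube(n, k, rng):
    """n stratified samples in [-1, 1]^k."""
    u = (rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T + rng.random((n, k))) / n
    return 2.0 * u - 1.0

def quadratic_features(X):
    """Columns [1, x_i, x_i*x_j (i<j), x_i^2] of a full second-order model."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def toy_simulator(X):
    # Stand-in for the expensive process simulation (assumed response).
    x1, x2 = X[:, 0], X[:, 1]
    return 3.0 + 2.0 * x1 - x2 + 1.5 * x1 * x2 + 4.0 * x1**2 + 0.5 * x2**2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([ccd_face_centred(2), latin_hypercube(8, 2, rng)])  # CCD + 8 LHS points
    y = toy_simulator(X)
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    print("fitted second-order coefficients:", np.round(beta, 3))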
1806 Increased Capacity of Information Hiding in LSB's Method for Text and Image
Authors: H. B. Kekre, Archana Athawale, Pallavi N. Halarnkar
Abstract:
Steganography, derived from Greek, literally means "covered writing". It includes a vast array of secret communication methods that conceal the very existence of the message, including invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications. This paper proposes a new, improved version of the Least Significant Bit (LSB) method. The proposed approach is simple to implement compared to the Pixel Value Differencing (PVD) method, yet achieves high embedding capacity and imperceptibility. The proposed method can also be applied to 24-bit color images, achieving an embedding capacity much higher than PVD.
Keywords: Information hiding, LSB matching, PVD steganography.
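For background, the following Python sketch shows classic 1-bit LSB embedding and extraction on a greyscale image array. It illustrates the baseline technique that the improved LSB variant builds on, not the authors' method itself, and the random cover image is an assumption.

import numpy as np

def embed_lsb(cover, message: bytes):
    """Hide `message` in the least significant bits of `cover` (uint8 array)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # assumed cover image
    secret = b"covered writing"
    stego = embed_lsb(cover, secret)
    assert extract_lsb(stego, len(secret)) == secret
    print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))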
1805 ReSeT: Reverse Engineering System Requirements Tool
Authors: Rosziati Ibrahim, Tiu Kian Yong
Abstract:
Reverse engineering is a very important process in software engineering. It can be performed backwards from the system development life cycle (SDLC) in order to recover the source data or representations of a system through analysis of its structure, function, and operation. We use reverse engineering to introduce an automatic tool that generates system requirements from program source code. The tool accepts Cµ programming source code, scans it line by line, and passes it to a parser. The engine of the tool is then able to generate the system requirements for that specific program in order to facilitate reuse and enhancement of the program. The purpose of the tool is to help recover the system requirements of a system when the system requirements document (SRD) does not exist because the system was never documented.
Keywords: System requirements, reverse engineering, source codes.
1804 Elemental Graph Data Model: A Semantic and Topological Representation of Building Elements
Authors: Yasmeen A. S. Essawy, Khaled Nassar
Abstract:
With the rapid increase of complexity in the building industry, professionals in the A/E/C industry have been forced to adopt Building Information Modeling (BIM) in order to enhance communication between the different project stakeholders throughout the project life cycle and to create a semantic, object-oriented building model that can support geometric-topological analysis of building elements during design and construction. This paper presents a model that extracts topological relationships and geometrical properties of building elements from an existing, fully designed BIM and maps this information into a directed acyclic Elemental Graph Data Model (EGDM). The model incorporates BIM-based search algorithms for automatic deduction of geometrical data and topological relationships for each building element type. Using graph search algorithms, such as Depth First Search (DFS) and topological sorting, all possible construction sequences can be generated and compared against production and construction rules to produce an optimized construction sequence and its associated schedule. The model is implemented on a C# platform.
Keywords: Building information modeling, elemental graph data model, geometric and topological data models, and graph theory.
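As an illustration of how topological sorting yields a construction sequence from a directed acyclic element graph, here is a minimal Python sketch (the paper's implementation is in C#). The element names and "supported-by" edges are invented for illustration, not data extracted from a real BIM.

from collections import defaultdict, deque

def topological_order(edges):
    """Kahn's algorithm: edges are (prerequisite, dependent) pairs."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for pre, dep in edges:
        succ[pre].append(dep)
        indeg[dep] += 1
        nodes |= {pre, dep}
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: not a valid construction graph")
    return order

if __name__ == "__main__":
    # Hypothetical "supported-by" relationships between building elements.
    supports = [("foundation", "column_A"), ("foundation", "column_B"),
                ("column_A", "beam_1"), ("column_B", "beam_1"),
                ("beam_1", "slab_L1"), ("slab_L1", "wall_L1")]
    print(" -> ".join(topological_order(supports)))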
1803 Lack of BIM Training: Investigating Practical Solutions for the State of Kuwait
Authors: Noor M. Abdulfattah, Ahmed M. Khalafallah, Nabil A. Kartam
Abstract:
Despite the evident benefits of building information modeling (BIM) to the construction industry, it faces significant implementation challenges in the State of Kuwait. This study investigates the awareness of construction stakeholders of BIM implementation challenges and identifies solutions to overcome them. Specifically, the main objectives of this study are to: (1) characterize the barriers that deter utilization of BIM, (2) examine the awareness of engineers, architects, and construction stakeholders of these barriers, and (3) identify practical solutions to facilitate BIM utilization. A questionnaire survey was designed to collect data on the aforementioned objectives from local companies and senior BIM experts. It was found that engineers are highly aware of BIM implementation barriers and that the biggest barrier is the lack of BIM training. Based on expert feedback, the study concludes with a number of recommendations on how to overcome the barriers to BIM utilization, which should prove useful to construction industry stakeholders and can lead to significant changes in design and construction practices.
Keywords: Building information modeling, construction, challenges, information technology.
1802 Application of 1-MCP on ‘Centro’ Melon at Different Days after Harvest
Authors: L. P. L. Nguyen, G. Hitka, T. Zsom, Z. Kókai
Abstract:
This study investigates the influence of postharvest delays in 1-Methylcyclopropene (1-MCP) treatment on prolonging the storage potential of melon. Melons were treated with 625-650 ppb 1-MCP at 10 °C for 24 hours on the 1st, 3rd, and 5th day after harvest. Decreased ethylene production and retarded softening of the fruit after 7 days of storage at 10 °C plus 3 days of shelf life were obtained with the 1-MCP applications. 1-MCP strongly affected the chlorophyll fluorescence characteristics and hue angle values of the melon: after shelf life, the peel color of treated melons was slower to turn yellow than that of the control. Additionally, the firmness of melons treated on the first day after harvest was 38% higher than that of the control fruit. The results show that fruits treated on the 1st and 3rd day after harvest maintained melon quality.
Keywords: 1-MCP, delay, muskmelon, storage.
1801 Systems Engineering and Project Management Process Modeling in the Aeronautics Context: Case Study of SMEs
Authors: S. Lemoussu, J. C. Chaudemar, R. A. Vingerhoeds
Abstract:
The aeronautics sector is currently experiencing unprecedented growth, largely driven by innovative projects. In several cases, such innovative developments are carried out by small and medium-sized enterprises (SMEs). In Europe, for instance, a handful of SMEs are leading projects such as airships, large civil drones, and flying cars. These SMEs all have limited resources, must make strategic decisions and take considerable financial risks, and at the same time must respect the constraints of safety, cost, time, and performance like any commercial organization in this industry. Moreover, no international regulations yet fully cover the development and certification of this kind of project. The absence of a precise and sufficiently detailed regulatory framework requires very close contact with the regulatory authorities, but SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses additional challenges for those SMEs that have system integration responsibilities and must provide all the necessary means of compliance to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The final objective of our research is thus to provide a methodological framework supporting SMEs in their development that takes into account recent innovations and the institutional rules of the sector. We aim to contribute to this problem by developing a specific Model-Based Systems Engineering (MBSE) approach: airspace regulations, aeronautics standards, and international norms on systems engineering are taken on board and formalized in a set of models. This paper presents the ongoing research project combining systems engineering and project management process modeling, taking the metamodeling problem into account.
Keywords: Aeronautics, certification, process modeling, project management, SME, systems engineering.
1800 Methodology of Realization for Supervisor and Simulator Dedicated to a Semiconductor Research and Production Factory
Authors: Hanane Ondella, Pierre Ladet, David Ferrand, Pat Sloan
Abstract:
In the micro- and nano-technology industry, the "clean rooms" dedicated to chip manufacturing are equipped with the most sophisticated equipment and tools, which consume a large number of resources according to strict specifications in order to operate optimally. The distribution of "utilities" to production is assured by teams who use a supervision tool. Studies show the value of controlling the various parameters of production and distribution in real time through a reliable and effective supervision tool. This document covers a large part of the functions that the supervisor must provide, together with complementary diagnosis and simulation functionalities that prove very useful in our case, where the supervised installations are complex and in constant evolution.
Keywords: Control-command, evolution, non-regression, performance, real time, simulation, supervision.
1799 Accurate Time Domain Method for Simulation of Microstructured Electromagnetic and Photonic Structures
Authors: Vijay Janyani, Trevor M. Benson, Ana Vukovic
Abstract:
A time-domain numerical model within the framework of transmission line modeling (TLM) is developed to simulate electromagnetic pulse propagation inside multiple microcavities forming photonic crystal (PhC) structures. The model is quite general and is capable of simulating complex electromagnetic problems accurately. The field quantities can be mapped onto a passive electrical circuit equivalent, which ensures that TLM is provably stable and conservative at a local level. Furthermore, the circuit representation allows a high level of hybridization of TLM with other techniques and with lumped circuit models of components and devices. As an example, a photonic crystal structure formed by rods (or blocks) of high-permittivity dielectric material embedded in a low-dielectric background medium is simulated. The model gives vital spatio-temporal information about the signal and also provides spectral information over a wide frequency range in a single run. The model has wide applications in microwave communication systems, optical waveguides, and electromagnetic materials simulations.
Keywords: Computational electromagnetics, numerical simulation, transmission line modeling.
1798 Optimization of Petroleum Refinery Configuration Design with Logic Propositions
Authors: Cheng Seong Khor, Xiao Qi Yeoh
Abstract:
This work concerns the topological optimization problem of determining the optimal petroleum refinery configuration. We are interested in further investigating, and hopefully advancing, existing optimization approaches and strategies that employ logic propositions in conceptual process synthesis problems. In particular, we seek to contribute to this increasingly exciting area of chemical process modeling by addressing the following potentially important issues: (a) how the formulation of design specifications in a mixed-logical-and-integer optimization model can be employed in a synthesis problem to enrich the problem representation by incorporating past design experience, engineering knowledge, and heuristics; and (b) how structural specifications on the interconnectivity relationships by space (states) and by function (tasks) in a superstructure should be properly formulated within a mixed-integer linear programming (MILP) model. The proposed modeling technique is illustrated on a case study involving the alternative processing routes of naphtha, in which a significant improvement in solution quality is obtained.
Keywords: Mixed-integer linear programming (MILP), petroleum refinery, process synthesis, superstructure.
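To illustrate how logic propositions on unit selection translate into linear constraints over binary variables, here is a small self-contained Python sketch. The three-unit naphtha-route example, the propositions, and the profit figures are invented; a real study would pass such constraints to an MILP solver rather than enumerate them.

import itertools

UNITS = ["reformer", "isomerization", "splitter"]
PROFIT = {"reformer": 5.0, "isomerization": 3.0, "splitter": 1.0}  # assumed values

def feasible(y):
    r, i, s = y["reformer"], y["isomerization"], y["splitter"]
    return all([
        r + i >= 1,      # proposition: at least one upgrading route is selected
        i <= s,          # proposition: isomerization implies the splitter (i -> s)
        r + i <= 1 + s,  # proposition: selecting both routes requires the splitter
    ])

if __name__ == "__main__":
    best = None
    # Brute-force enumeration over binary selections, standing in for an MILP solve.
    for bits in itertools.product([0, 1], repeat=len(UNITS)):
        y = dict(zip(UNITS, bits))
        if feasible(y):
            profit = sum(PROFIT[u] * y[u] for u in UNITS)
            if best is None or profit > best[0]:
                best = (profit, y)
    print("optimal configuration:", best[1], "profit:", best[0])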
1797 Numerical Study of Airfoils Aerodynamic Performance in Heavy Rain Environment
Authors: M. Ismail, Cao Yihua, Zhao Ming, Abu Bakar
Abstract:
Heavy rainfall greatly affects the aerodynamic performance of aircraft, and many accidents have been caused by the resulting degradation of aerodynamic efficiency. In this paper we study the effects of heavy rain on the aerodynamic efficiency of the cambered NACA 64-210 and symmetric NACA 0012 airfoils. The preprocessing software Gridgen was used to create the geometry and mesh, Fluent was used as the solver, and Tecplot as the postprocessor. Discrete phase modeling (DPM) is used to model the rain particles in a two-phase flow approach, with the particles assumed to be inert. Both airfoils showed a significant decrease in lift and increase in drag in the simulated rain environment. The most significant difference between the two airfoils was that the NACA 64-210 is more sensitive than the NACA 0012 to liquid water content (LWC). We believe that these results will be useful to designers of commercial aircraft and UAVs, and helpful for training pilots to control airplanes in heavy rain.
Keywords: airfoil, discrete phase modeling, heavy rain, Reynolds
1796 Load Transfer Mechanism Based Unified Strut-and-Tie Modeling for Design of Concrete Beams
Authors: Ahmed, M., Yasser A., Mahmoud H., Ahmed, A., Abdulla M. S., Nazar, S.
Abstract:
The Strut-and-Tie Model (STM) for the design of concrete beams, with struts, ties, and nodes as its basic tools, is conceptually simple, but its realization for complex concrete structures is not straightforward and depends on the flow of internal forces in the structure. The STM technique has won wide acceptance for deep member and shear design; it is a unified approach that considers all load effects (bending, axial, shear, and torsion) simultaneously, not just shear loading. The present study portrays strut-and-tie modeling based on load transfer mechanisms as a unified method to analyze, design, and detail deep and slender concrete beams. Three shear span-to-effective depth ratios (a/d) are recommended for the modeling of STM elements, corresponding to the dominant load paths. The study also discusses research on the effective stress of concrete, tie end anchorage, and transverse reinforcement demand under different load transfer mechanisms. It is highlighted that, to make STM a versatile design tool applicable to beams of all shear spans, the effective stress of concrete, the transverse reinforcement demand, the inclined strut angle, and the anchorage requirements of tie bars need to be correlated with the load transfer mechanism, and national code provisions should be modified and updated accordingly for the generalized design of deep and slender concrete members using the load-transfer-mechanism-based STM technique. Examples available in the literature are reanalyzed with the refined STM based on load transfer mechanisms, and the results are compared. It is concluded that the proposed approach yields the true reinforcement demand, depending on the dominant force transfer action in the concrete beam.
Keywords: Deep member, Load transfer mechanism, Strut-and-Tie Model, Strut, Truss.
1795 Evolutionary Feature Selection for Text Documents using the SVM
Authors: Daniel I. Morariu, Lucian N. Vintan, Volker Tresp
Abstract:
Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive, which justifies the application of feature selection methods to reduce the dimensionality of the document representation vector. In this paper, we present three feature selection methods: Information Gain, Support Vector Machine feature selection (SVM_FS), and a Genetic Algorithm with SVM (GA_SVM). We show that the best results were obtained with the GA_SVM method for a relatively small dimension of the feature vector.
Keywords: Feature selection, learning with kernels, support vector machine, genetic algorithm, classification.
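For context, a minimal scikit-learn sketch of the filter-then-classify pipeline that such studies compare against: score the features, keep the top k, and train a linear SVM on the reduced representation. Chi-squared scoring is used here as a simple stand-in for Information Gain, and the tiny corpus is invented; the SVM_FS and GA_SVM wrappers from the paper are not reproduced.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical two-class toy corpus for illustration only.
docs = ["the market rallied on strong earnings",
        "the team won the championship game",
        "shares fell after the earnings report",
        "the striker scored in the final game"]
labels = ["finance", "sport", "finance", "sport"]

pipeline = make_pipeline(
    TfidfVectorizer(),            # documents -> large sparse vectors
    SelectKBest(chi2, k=5),       # keep only the k highest-scoring features
    LinearSVC(),                  # linear SVM on the reduced representation
)
pipeline.fit(docs, labels)
print(pipeline.predict(["earnings rose sharply", "a late goal won the game"]))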
1794 Feature Selection Methods for an Improved SVM Classifier
Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp
Abstract:
Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive, which justifies the application of feature selection methods to reduce the dimensionality of the document representation vector. In this paper, three feature selection methods are evaluated: Random Selection, Information Gain (IG), and Support Vector Machine feature selection (SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better tune the SVM kernel's parameters (polynomial or Gaussian kernel).
Keywords: Feature selection, learning with kernels, support vector machine, classification.
1793 Optimization of Protein Hydrolysate Production Process from Jatropha curcas Cake
Authors: Waraporn Apiwatanapiwat, Pilanee Vaithanomsat, Phanu Somkliang, Taweesiri Malapant
Abstract:
This is the first report on the optimization of protein hydrolysate production from J. curcas cake. Proximate analysis of the raw material showed 18.98% protein, 5.31% ash, 8.52% moisture, and 12.18% lipid. The most suitable protein hydrolysate production process began with grinding the J. curcas cake into small pieces, which were then suspended in 2.5% sodium hydroxide solution at a solution-to-cake ratio of 80:1 (v/w). The hydrolysis reaction was carried out at 50 °C in a water bath for 45 minutes. After that, the supernatant (protein hydrolysate) was separated by centrifugation at 8000 g for 30 minutes. The maximum yield of the resulting protein hydrolysate was 73.27%, with 7.34% moisture, 71.69% total protein, 7.12% lipid, and 2.49% ash. The product also dissolved well in water.
Keywords: Production, protein hydrolysate, Jatropha curcas cake, optimization.