Search results for: poisson noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1322

212 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in their characters. Recognizing these characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as this approach produced the best results. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that used only the original grayscale images. When tested on the individual languages, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy in identifying and categorizing characters from these scripts.
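
The abstract does not give implementation details; as a rough illustration of the idea of feeding a Canny edge map as an extra input channel alongside the grayscale character image, here is a minimal sketch using OpenCV. The synthetic image and the Canny thresholds are illustrative assumptions, not values from the study.

```python
# Minimal sketch (not the authors' code): stack a Canny edge map with the
# grayscale character image so a CNN/VAN can consume both as input channels.
# The random image stands in for a Nudi-generated character; thresholds are assumed.
import cv2
import numpy as np

gray = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder character image
edges = cv2.Canny(gray, 100, 200)                         # (H, W) uint8 edge map

# Two-channel input: original grayscale + Canny edges, normalized to [0, 1]
x = np.dstack([gray, edges]).astype(np.float32) / 255.0   # (H, W, 2)
print(x.shape)
```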

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 11
211 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. First, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
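
As a minimal sketch of the outlier-removal step described above (not the authors' pipeline), DBSCAN can be used to drop points it labels as noise before training the predictive models. The eps/min_samples values and the random feature matrix are illustrative.

```python
# Minimal sketch (not the authors' code) of DBSCAN-based outlier removal
# applied to the experimental dataset before model training.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 6)             # placeholder for the DRM experimental features
X_scaled = StandardScaler().fit_transform(X)

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X_scaled)
X_clean = X[labels != -1]               # DBSCAN marks outliers with label -1
print(f"kept {len(X_clean)} of {len(X)} samples")
```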

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 51
210 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, owing to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that removes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)

Procedia PDF Downloads 45
209 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage occurring in a structure, changes in its static and dynamic characteristics can be used. Non-destructive techniques are the more common, economical, and reliable way to detect global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied at some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from static data. Damage in a structure changes its stiffness, so the method determines damage from changes in the structural stiffness parameters. The changes in static response caused by structural damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (single and multiple damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; obtaining the best result indicates that the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures. The figures also show that damage detection in multiple damage scenarios gives efficient answers, and even the presence of noise in the measurements does not reduce the accuracy of the method for these structures.
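
To make the GA idea concrete, the toy sketch below (not the paper's implementation) searches for per-DOF stiffness-reduction factors that minimize the mismatch between "measured" and predicted static displacements. The single-spring-per-DOF response model, the assumed damage scenario, and all GA settings are illustrative; a real study would use the finite element truss model.

```python
# Toy sketch of GA-based static damage detection (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
n_dof, k0, forces = 8, 1.0e6, np.full(8, 1.0e4)

def static_response(d):
    # toy model: u_i = F_i / (k0 * (1 - d_i)); a real study would use an FE model
    return forces / (k0 * (1.0 - d))

d_true = np.zeros(n_dof); d_true[2], d_true[5] = 0.3, 0.5    # assumed damage scenario
u_measured = static_response(d_true)

def fitness(d):
    return np.linalg.norm(u_measured - static_response(d))

pop = rng.uniform(0.0, 0.6, size=(60, n_dof))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]                    # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(n_dof) < 0.5                        # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.02, n_dof)  # mutation
        children.append(np.clip(child, 0.0, 0.95))
    pop = np.array(children)

best = pop[np.argmin([fitness(ind) for ind in pop])]
print(np.round(best, 2))    # should approach d_true for this toy model
```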

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 190
208 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa

Authors: Davies Obaromi, Qin Yongsong, James Ndege

Abstract:

To interpolate scattered or regularly distributed data, there are imprecise or exact methods. Some of these methods can be used for interpolating data on a regular grid and others on an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize, and compare smoothing in the distribution patterns of tuberculosis (TB) in the Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map for TB prevalence patterns using 3-D curve fitting techniques, especially biharmonic splines, which can suppress noise easily by seeking a least-squares fit rather than exact interpolation. The datasets are represented generally as 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become the conventional method owing to its high precision, simplicity, and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012-2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, which is consistent with some spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the advantage that it can be assessed at arbitrarily chosen locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
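
The study uses MATLAB's biharmonic spline option; as a rough Python analogue (not the authors' code), a thin-plate-spline radial basis function with a small smoothing term fits a smooth surface to scattered XYZ triplets in a least-squares sense. The coordinates, counts, and smoothing value below are synthetic placeholders.

```python
# Rough analogue of smooth surface fitting to scattered (X, Y, Z) TB-count triplets;
# SciPy's thin-plate-spline RBF stands in for MATLAB's biharmonic ('v4') griddata.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(50, 2))                         # spatial coordinates
z = 200 + 30 * np.sin(xy[:, 0] / 15) + rng.normal(0, 5, 50)    # noisy TB counts

# smoothing > 0 gives a least-squares fit rather than exact interpolation
fitter = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)

gx, gy = np.meshgrid(np.linspace(0, 100, 60), np.linspace(0, 100, 60))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = fitter(grid).reshape(gx.shape)                       # smoothed prevalence surface
print(surface.shape)
```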

Keywords: linear, biharmonic splines, tuberculosis, South Africa

Procedia PDF Downloads 206
207 Evaluation of Mechanical Behavior of Laser Cladding in Various Tilting Pad Bearing Materials

Authors: Si-Geun Choi, Hoon-Jae Park, Jung-Woo Cho, Jin-Ho Lim, Jin-Young Park, Joo-Young Oh, Jae-Il Jeong, Seock-Sam Kim, Young Tae Cho, Chan Gyu Kim, Jong-Hyoung Kim

Abstract:

The tilting pad bearing is a kind of fluid film bearing, and it can provide high-speed and high-load performance compared to other bearings, including the rolling element bearing. Furthermore, the tilting pad bearing has many advantages such as high stability at high speed, long life, high damping, high impact resistance, and low noise. Therefore, it is mostly used in mid- to large-size turbomachines, despite its high price. Recently, as manufacturing processes employing laser techniques advance at a fast-growing rate in the mechanical industry, the dissimilar metal welding process employing laser techniques is being actively studied. Industry is also trying to apply the laser cladding method to weld the white metal and the backing metal for high durability. Preceding research has shown that the laser cladding method offers much better bond strength, toughness, abrasion resistance, and environmental friendliness than the centrifugal casting method. Therefore, the laser cladding method provides better quality, cost reduction, eco-friendliness, and permanence of technology than the centrifugal casting or gravity casting methods. In this study, we compare the mechanical properties of different bearing materials by evaluating the behavior of the laser cladding layer on various materials (i.e., SS400, SCM440, S20C) under the same parameters. Furthermore, we analyze the porosity of the various tilting pad bearing materials on which white metal was deposited. SEM and EDS analyses and hardness tests of the three materials are presented to understand their mechanical properties and tribological behavior. W/D ratio and surface roughness results for the various materials are also reported in this study.

Keywords: laser cladding, tilting pad bearing, white metal, mechanical properties

Procedia PDF Downloads 349
206 Analysis of Real Time Seismic Signal Dataset Using Machine Learning

Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.

Abstract:

Because seismic and non-seismic signals are so similar, it is difficult to detect earthquakes using conventional methods. In order to distinguish between seismic events and non-seismic events depending on their amplitude, our study processes the data that come from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, the recursive short-term average/long-term average (STA/LTA), and the Carl STA/LTA for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on significant feature extraction for machine learning-based seismic event detection. This serves as motivation for compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a presentation using a hybrid dataset (captured by different sensors) demonstrates how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). A wideband seismic signal from the BSVK and CUKG station sensors, respectively located near Basavakalyan, Karnataka, and the Central University of Karnataka, makes up the experimental dataset.
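
As a minimal sketch of the classic STA/LTA trigger ratio mentioned above (not the authors' tuned pipeline), the short-term and long-term moving averages of the signal energy are compared; window lengths, the synthetic trace, and the threshold are illustrative.

```python
# Minimal STA/LTA sketch: the ratio of short- to long-term average energy
# rises sharply when an event arrives above the background noise.
import numpy as np

def sta_lta_ratio(signal, fs, sta_sec=1.0, lta_sec=20.0):
    sta_n, lta_n = int(sta_sec * fs), int(lta_sec * fs)
    energy = signal.astype(float) ** 2
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")   # short-term average
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")   # long-term average
    return sta / np.maximum(lta, 1e-12)

fs = 100.0                                   # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)
trace = np.random.normal(0, 1, t.size)
trace[3000:3500] += 8 * np.sin(2 * np.pi * 5 * t[3000:3500])   # synthetic "event"

ratio = sta_lta_ratio(trace, fs)
print("event detected:", np.any(ratio > 4.0))   # 4.0 is an illustrative trigger threshold
```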

Keywords: Carl STA/LTA, features extraction, real time, dataset, machine learning, seismic detection

Procedia PDF Downloads 58
205 Spatial and Time Variability of Ambient Vibration H/V Frequency Peak

Authors: N. Benkaci, E. Oubaiche, J.-L. Chatelain, R. Bensalem, K. Abbes

Abstract:

The ambient vibration H/V technique is widely used nowadays in microzonation studies because of its easy field handling and its low cost compared to other geophysical methods. However, in the presence of complex geology or lateral heterogeneity, evidenced by more than one peak frequency in the H/V curve, it is difficult to interpret the results, especially when soil information is lacking. In this work, we focus on the construction site of the Baraki 40,000-seat stadium, located on the north-east side of the Mitidja basin (Algeria), to identify the seismic wave amplification zones. H/V curve analysis leads to the observation of spatial and time variability of the H/V frequency peaks. The spatial variability allows dividing the studied area into three main zones: (1) one with a predominant frequency around 1.5 Hz showing an important amplification level, (2) a second exhibiting two peaks, at 1.5 Hz and in the 4 Hz - 10 Hz range, and (3) a third zone characterized by a plateau between 2 Hz and 3 Hz. These H/V curve categories reveal a marked lateral heterogeneity dividing the stadium site roughly in the middle. Furthermore, continuous ambient vibration recording over several weeks shows that the first peak at 1.5 Hz in the second zone completely disappears between 2 am and 4 am and reaches its maximum amplitude around 12 am. Consequently, the anthropogenic noise source generating these important variations could be the Algiers Rocade Sud highway, located in the maximum amplification azimuth direction of the H/V curves. This work points out that the H/V method is an important tool for performing nano-zonation studies prior to geotechnical and geophysical investigations, and that, in some cases, the H/V technique fails to reveal the resonance frequency in the absence of a strong anthropogenic source.
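
For readers unfamiliar with the technique, a minimal sketch (not the authors' processing chain) of how an H/V spectral ratio is computed from three-component ambient noise recordings is given below; the synthetic signals, Welch parameters, and averaging choices are placeholders.

```python
# Minimal H/V sketch: quadratic-mean horizontal spectrum divided by the vertical spectrum.
import numpy as np
from scipy.signal import welch

fs = 100.0
n = int(fs * 600)                                  # 10 minutes of ambient noise (synthetic)
rng = np.random.default_rng(2)
north, east, vert = (rng.normal(0, 1, n) for _ in range(3))

f, p_n = welch(north, fs=fs, nperseg=4096)
_, p_e = welch(east, fs=fs, nperseg=4096)
_, p_v = welch(vert, fs=fs, nperseg=4096)

hv = np.sqrt((p_n + p_e) / 2.0) / np.sqrt(p_v)     # H/V spectral ratio
peak_freq = f[1:][np.argmax(hv[1:])]               # skip the DC bin
print(f"H/V peak near {peak_freq:.2f} Hz")
```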

Keywords: ambient vibrations, amplification, fundamental frequency, lateral heterogeneity, site effect

Procedia PDF Downloads 211
204 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error

Authors: Seyedamir Makinejadsanij

Abstract:

One of the most important factors in the production of quality steel is knowing the exact weight of steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead crane. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays the weight in real time. The moving object has a variable weight due to swinging, and the weighing system has an error of about ±5%. This means that when an object weighing about 80 tons is moved by a crane, the device (Disomat Tersus system) reads about 4 tons more or 4 tons less, and this is the biggest problem in calculating the real weight. The k-means algorithm is an unsupervised clustering method and was used here. The best result was obtained by considering 3 centers: compared with the normal average (one center) or with two, four, five, and six centers, the best answer is with 3 centers, which is logically due to the elimination of noise above and below the real weight. Every day, a standard weight is moved by the working cranes to test and calibrate them. The results show that the accuracy is about 40 kg per 60 tons (standard weight). As a result, with this method, the accuracy of the moving weight is calculated as 99.95%. K-means is used to calculate the exact mean of the objects. The stopping criterion of the algorithm is either 1000 iterations or no movement of points between the clusters. As a result of the implementation of this system, the crane operator does not stop while moving objects and continues his activity regardless of weight calculations. Also, production speed increased, and human error decreased.
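
A minimal sketch of one plausible reading of the approach (not the plant's code): cluster the noisy, swing-affected readings into three groups and take the middle cluster centre, since the swing pushes readings above and below the real weight. The simulated readings and the 80-ton load are illustrative.

```python
# Minimal sketch: k-means with 3 centres on swing-affected weight readings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
true_weight = 80.0                                                 # tons (illustrative)
readings = true_weight + 4.0 * np.sin(np.linspace(0, 40, 500))     # swing-induced error
readings += rng.normal(0, 0.3, readings.size)                      # sensor noise

km = KMeans(n_clusters=3, n_init=10, max_iter=1000, random_state=0)
km.fit(readings.reshape(-1, 1))

centers = np.sort(km.cluster_centers_.ravel())
estimate = centers[1]            # middle cluster: noise lies above and below the real weight
print(f"estimated weight: {estimate:.2f} t")
```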

Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem

Procedia PDF Downloads 51
203 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems

Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer

Abstract:

This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To the authors' best knowledge, such a problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm II), and the dominance concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller but rather a set of optimal cascade controllers (called the Pareto set), which represents the optimal trade-offs among the selected design criteria. The function evaluation of the Pareto set is called the Pareto front. The solution set is introduced to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for multi-objective optimal design of multi-loop control systems.
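
A minimal sketch of the dominance concept mentioned above (not the paper's NSGA-II setup): a design point is kept in the Pareto set only if no other point is at least as good in every objective and strictly better in at least one. The objective values below are synthetic stand-ins for the four design criteria.

```python
# Minimal Pareto-dominance filter over candidate controller designs (all objectives minimized).
import numpy as np

def pareto_mask(costs):
    # costs: (n_points, n_objectives)
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominates_i = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if np.any(dominates_i):
            keep[i] = False
    return keep

rng = np.random.default_rng(4)
# illustrative columns: settling time, disturbance-rejection error, noise sensitivity, energy
objs = rng.uniform(0, 1, size=(200, 4))
front = objs[pareto_mask(objs)]
print(f"{front.shape[0]} non-dominated cascade-controller designs")
```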

Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control

Procedia PDF Downloads 114
202 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning

Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho

Abstract:

Image super-resolution is a common computer vision problem with many important applications. Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR (mainly of the generators) lead to performance degradation and increased energy consumption, making it difficult to deploy on resource-constrained devices. To relieve this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN (generator) model by utilizing model compression techniques such as knowledge distillation and pruning, which work together to reduce the storage requirement of the model and also improve its performance. Our method begins with distilling the knowledge from a large pre-trained model to a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
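
The magnitude-based pruning step can be sketched as follows (not the authors' pipeline): the smallest-magnitude fraction of each weight matrix is zeroed out to sparsify the distilled generator. The 18% ratio matches the figure quoted above; the random weights stand in for real SR-GAN generator layers.

```python
# Minimal sketch of magnitude-based weight pruning on one layer's weight matrix.
import numpy as np

def prune_by_magnitude(weights, ratio=0.18):
    flat = np.abs(weights).ravel()
    threshold = np.quantile(flat, ratio)        # value below which weights are removed
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(5)
layer = rng.normal(0, 0.05, size=(256, 256))    # placeholder conv/linear weights
pruned, mask = prune_by_magnitude(layer, ratio=0.18)
print(f"sparsity: {1.0 - mask.mean():.2%}")
```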

Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning

Procedia PDF Downloads 46
201 A Compact Extended Laser Diode Cavity Centered at 780 nm for Use in High-Resolution Laser Spectroscopy

Authors: J. Alvarez, J. Pimienta, R. Sarmiento

Abstract:

Diode lasers operating in free-running mode present frequency shifts and line broadening determined by external factors such as temperature, current, or mechanical vibrations, which makes them less useful in applications such as spectroscopy, metrology, and the cooling of atoms, among others. Different configurations can reduce the spectral width of a laser; one of the most effective is to extend the optical resonator of the laser diode and use optical feedback, either with the help of a partially reflective mirror or with a diffraction grating. The latter configuration not only reduces the spectral width of the laser line but also allows coarse adjustment of its working wavelength over a wide range, typically ~10 nm, by slightly varying the angle of the diffraction grating. Two settings are commonly used for this purpose, the Littrow configuration and the Littman-Metcalf configuration. In this paper, we present the design, construction, and characterization of a compact extended laser cavity in the Littrow configuration. The designed cavity is compact, was machined on an aluminum block using computer numerical control (CNC), and has a mass of only 380 g. The design was tested on laser diodes with different wavelengths, 650 nm, 780 nm, and 795 nm, but can be equally efficient at other wavelengths. This report details the results obtained from the extended cavity working at a wavelength of 780 nm, with an output power of around 35 mW and a linewidth of less than 1 MHz. The cavity was used to observe the spectrum of the corresponding Rubidium D2 line. By modulating the current and with the help of phase detection techniques, a dispersion signal with an excellent signal-to-noise ratio was generated that allowed the stabilization of the laser to a transition of the hyperfine structure of Rubidium with a proportional-integral (PI) controller circuit made with precision operational amplifiers.
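
As a back-of-the-envelope illustration of the grating angle involved in the Littrow configuration (not a value from the paper), the first-order Littrow condition lambda = 2*d*sin(theta) can be evaluated for 780 nm; the groove density assumed below is hypothetical.

```python
# Littrow-condition check: lambda = 2 * d * sin(theta), first order.
# Only the 780 nm wavelength comes from the abstract; the grating is assumed.
import numpy as np

wavelength_nm = 780.0
grooves_per_mm = 1800.0                    # hypothetical grating
d_nm = 1e6 / grooves_per_mm                # groove spacing in nm (~555.6 nm)

theta = np.degrees(np.arcsin(wavelength_nm / (2.0 * d_nm)))
print(f"Littrow angle = {theta:.1f} degrees")   # about 44.6 degrees for these values
```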

Keywords: Littrow, Littman-Metcalf, line width, laser stabilization, hyperfine structure

Procedia PDF Downloads 178
200 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images

Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy

Abstract:

Contactless, painless, and radiation-free, thermal imaging technology is one of the preferred screening modalities for the detection of breast cancer. However, the poor signal-to-noise ratio and the inexorable need to preserve the edges defining cancerous and normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research conducted to appraise two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered preprocessed image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of a thermographic image. The pre-processed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then used for segmentation. The segmentation of the breast starts with an initial level-set function. In this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with breast size. The proposed method was carried out on images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented from the background by adjustment of the markers. From the results, it was observed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The segmented image obtained with anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
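
The anisotropic (edge-preserving) pre-processing step can be sketched with the classic Perona-Malik diffusion scheme (not the authors' implementation); the iteration count, conduction constant, and step size below are illustrative, and the random array stands in for a thermogram.

```python
# Minimal Perona-Malik anisotropic diffusion sketch: smooths flat regions while
# limiting diffusion across strong edges, unlike uniform Gaussian filtering.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        d_n = np.roll(img, -1, axis=0) - img
        d_s = np.roll(img, 1, axis=0) - img
        d_e = np.roll(img, -1, axis=1) - img
        d_w = np.roll(img, 1, axis=1) - img
        # conduction coefficients: small across strong edges, ~1 in flat regions
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += gamma * (c(d_n) * d_n + c(d_s) * d_s + c(d_e) * d_e + c(d_w) * d_w)
    return img

thermogram = np.random.rand(128, 128) * 255.0     # placeholder thermal image
smoothed = anisotropic_diffusion(thermogram)
print(smoothed.shape)
```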

Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms

Procedia PDF Downloads 346
199 Improvement of Thermal Comfort Conditions in an Urban Space "Case Study: The Square of Independence, Setif, Algeria"

Authors: Ballout Amor, Yasmina Bouchahm, Lacheheb Dhia Eddine Zakaria

Abstract:

Several studies around the world have been conducted on the urban heat island phenomenon, and the results show that one of the most important factors influencing this phenomenon is the mineralization of cities, that is, the reduction of evaporative urban surfaces by replacing vegetation and wetlands with concrete and asphalt. The use of vegetation and water can change the urban environment and improve comfort, and thus reduce the heat island. Trees act as a mask against sun, wind, and sound, and also as a source of humidity that reduces the temperature of the air and of surrounding surfaces. Water also acts as a buffer against noise; it is a source of moisture and regulates temperature, not to mention its psychological effect on humans. Our main objective in this paper is to determine the impact of vegetation, ponds, and fountains on the urban microclimate in general, and in particular on the thermal comfort of people in the Independence Square in the Algerian city of Sétif, which has a semi-arid climate. In order to reach this objective, a comparative study between different scenarios was carried out; the Envi-met program enabled us to model the urban environment of the Independence Square and to study the possibility of improving comfort conditions by adding vegetation and water ponds. The results obtained (temperature, relative humidity, wind speed, and the PMV and PPD indicators) confirmed the efficiency of the additions made to the square, which supported our assumptions regarding comfort conditions in the studied site. Finally, we develop recommendations and solutions that may contribute to greater comfort in the Independence Square.

Keywords: comfort in outer space, urban environment, scenarisation, vegetation, water ponds, public square, simulation

Procedia PDF Downloads 416
198 Building Transparent Supply Chains through Digital Tracing

Authors: Penina Orenstein

Abstract:

In today’s world, particularly with COVID-19 a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before in order to find areas for improvement and greater efficiency, reduce the chances of disruption, and stay competitive. The concept of supply chain mapping is one where every process and route between each vendor and supplier is mapped in detail. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also does not allow for much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question of whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers superimposed on a geographical map. The driving force behind this idea is the ability to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through to the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.

Keywords: data mining, supply chain, empirical research, data mapping

Procedia PDF Downloads 140
197 Introduction of Mass Rapid Transit System and Its Impact on Para-Transit

Authors: Khalil Ahmad Kakar

Abstract:

In developing countries, the increase in automobiles and low-capacity public transport (para-transit), which creates congestion, pollution, noise, and traffic accidents, is the most critical quandary. These issues are being analyzed by assessors in order to break down the puzzle and propose sustainable urban public transport systems. Kabul City is one of those urban areas whose inhabitants suffer from the lack of a tolerable and friendly public transport system. The city is highly populous and overcrowded, with around 4.5 million inhabitants. Para-transit is the only dominant public transit system, with a very poor level of service and low-capacity vehicles (6-20 passengers). Therefore, after detailed investigations, this study suggests a bus rapid transit (BRT) system for Kabul City, aimed at mitigating the role of informal transport and decreasing congestion. The research covers three parts. In the first part, aggregate travel demand modelling (four-step) is applied to determine the number of para-transit users and to assess the BRT network based on the higher passenger demand for the public transport mode. In the second part, a stated preference (SP) survey and a binary logit model are used to determine the utility of the existing para-transit mode and the planned BRT system. Finally, the impact of the predicted BRT system on para-transit is evaluated. The outcome, based on the high travel demand, suggests a 10 km network for the proposed BRT system, which originates from District 10 and ends at Kabul International Airport. Moreover, the results of the disaggregate travel mode-choice model, based on the SP survey and the logit model, indicate that the predicted mass rapid transit system has higher utility, with a significant impact on the reduction of para-transit.
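
For readers unfamiliar with the binary logit model used in the second part, a minimal sketch (not the study's estimated model) of the mode-choice probability between BRT and para-transit is given below; the utility specification, coefficients, and attribute values are purely illustrative.

```python
# Binary logit sketch: P(BRT) = 1 / (1 + exp(-(V_brt - V_para))).
import numpy as np

# assumed utility: V = asc - 0.08*travel_time(min) - 0.5*fare(relative units)
def utility(asc, time_min, fare):
    return asc - 0.08 * time_min - 0.5 * fare

v_brt = utility(asc=0.6, time_min=35.0, fare=1.2)    # illustrative BRT attributes
v_para = utility(asc=0.0, time_min=50.0, fare=1.0)   # illustrative para-transit attributes

p_brt = 1.0 / (1.0 + np.exp(-(v_brt - v_para)))      # binary logit choice probability
print(f"P(choose BRT) = {p_brt:.2f}")
```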

Keywords: BRT, para-transit, travel demand modelling, Kabul City, logit model

Procedia PDF Downloads 147
196 The Carbon Footprint Model as a Plea for Cities towards Energy Transition: The Case of Algiers Algeria

Authors: Hachaichi Mohamed Nour El-Islem, Baouni Tahar

Abstract:

Environmental sustainability, more than being a trans-disciplinary and scientific issue, is the main problem that characterizes all modern cities nowadays. In developing countries, this concern is expressed in a plethora of critical urban ills: traffic congestion, air pollution, noise, urban decay, and increases in energy consumption and CO₂ emissions, which blemish the cities' landscape and might threaten citizens' health and welfare. As in other developing-world cities, the rapid growth of Algiers' population and increasing city-scale phenomena eventually lead to increases in daily trips, energy consumption, and CO₂ emissions. In addition, the lack of proper and sustainable planning of the city's infrastructure is one of the most relevant issues from which Algiers suffers. The aim of this contribution is to estimate the carbon deficit of the city of Algiers, Algeria, using the Ecological Footprint Model (carbon footprint). In order to achieve this goal, the amount of CO₂ from fuel combustion was calculated and aggregated into five sectors (agriculture, industry, residential, tertiary, and transportation); likewise, Algiers' biocapacity (CO₂ uptake land) was calculated to determine the ecological overshoot. This study shows that Algiers' transport system is not sustainable and generates more than 50% of Algiers' total carbon footprint, which cannot be sequestered by the local forest land. The aim of this research is to show that the Carbon Footprint Assessment might be a relevant indicator for designing sustainable strategies/policies striving to reduce CO₂ by addressing energy consumption in the transportation sector and reducing the use of fossil fuels as the main energy input.

Keywords: biocapacity, carbon footprint, ecological footprint assessment, energy consumption

Procedia PDF Downloads 111
195 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models

Authors: Ainouna Bouziane

Abstract:

The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in the last decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and its avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing-Total Variation Minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not been properly addressed yet, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level, and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.

Keywords: electron tomography, supported catalysts, nanometrology, error assessment

Procedia PDF Downloads 45
194 Comparing Occupants’ Satisfaction in LEED Certified Office Buildings and Non-LEED Certified Office Buildings: A Case Study of Office Buildings in Egypt and Turkey

Authors: Amgad A. Farghal, Dina I. El Desouki

Abstract:

Energy consumption and users' satisfaction were compared in three LEED-certified office buildings in Turkey and an office building in Egypt. The field studies were conducted in summer 2012. The measured environmental parameters in the four buildings were indoor air temperature, relative humidity, CO2 percentage, and light intensity. The traditional building is located in Smart Village in Abu Rawash, Cairo, Egypt; it was studied for 7 days, resulting in 84 responses. The three rated buildings are in Istanbul, Turkey. A Platinum LEED-certified office building owned by BASF gained a platinum certificate for new construction and major renovation; it was studied for 3 days, resulting in 13 responses. A Gold LEED-certified office building, also owned by BASF, gained a gold certificate for new construction and major renovation; it was studied for 2 days, resulting in 10 responses. A Silver LEED-certified office building owned by Unilever gained a silver certificate for commercial interiors; it was studied for 7 days, resulting in 84 responses. The results showed no significant difference among the buildings regarding occupants' satisfaction with the amount of lighting, noise level, odor, and access to the outdoor view. There was a significant difference between occupants' satisfaction in the LEED-certified buildings and the traditional building regarding the thermal environment and the perception of the general environment (colors, carpet, and decoration). The findings suggest that careful design could lead to a certified building that enhances the thermal environment and the perception of the indoor environment, lowering energy consumption without sacrificing occupants' satisfaction.

Keywords: energy consumption, occupants’ satisfaction, rating systems, office buildings

Procedia PDF Downloads 382
193 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung, in the form of a tumor, can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; early diagnosis, detection, and prediction reduce the need for risky treatment options such as invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for the early detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement, which gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are computed for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used to determine the patient's condition as normal or abnormal, while an Artificial Neural Network (ANN) is used to identify the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
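
The two feature steps named above can be sketched as follows (not the authors' pipeline): a single-level 2-D discrete wavelet transform of the segmented ROI plus GLCM texture measures. The random 8-bit array stands in for a real CT-derived ROI, and the chosen wavelet, distances, and angles are illustrative (the GLCM functions assume a recent scikit-image).

```python
# Minimal DWT + GLCM feature-extraction sketch for a segmented lung ROI.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(128, 128) * 255).astype(np.uint8)   # placeholder segmented ROI

# single-level 2-D DWT: approximation + horizontal/vertical/diagonal details
cA, (cH, cV, cD) = pywt.dwt2(roi, "haar")

glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(cA.shape, features)
```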

Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI

Procedia PDF Downloads 114
192 Impact of Ventilation Systems on Indoor Air Quality in Swedish Primary School Classrooms

Authors: Sarka Langer, Despoina Teli, Blanka Cabovska, Jan-Olof Dalenbäck, Lars Ekberg, Gabriel Bekö, Pawel Wargocki, Natalia Giraldo Vasquez

Abstract:

The aim of the study was to investigate the impact of various ventilation systems on indoor climate, air pollution, chemistry, and perception. Measurements of the thermal environment and indoor air quality were performed in 45 primary school classrooms in Gothenburg, Sweden. The classrooms were grouped into three categories according to their ventilation system: category A) natural or exhaust ventilation or automated window opening; category B) balanced mechanical ventilation systems with constant air volume (CAV); and category C) balanced mechanical ventilation systems with variable air volume (VAV). A questionnaire survey about indoor air quality; perception of temperature, odour, noise, and light; and sensations of well-being, alertness, focus, etc., was distributed among the 10-12-year-old children attending the classrooms. The results (medians) showed statistically significant differences between ventilation category A and categories B and C, but not between categories B and C, in air change rates, median concentrations of carbon dioxide, the individual volatile organic compounds formaldehyde and isoprene, indoor-to-outdoor ozone ratios, and products of the ozonolysis of squalene (a constituent of human skin oils), 6-methyl-5-hepten-2-one and decanal. Median ozone concentration and ozone loss (the difference between outdoor and indoor ozone concentrations) differed only between categories A and C. The median concentration of total VOCs and a perception index based on survey responses on perceptions and sensations indoors were not significantly different. In conclusion, the ventilation systems have an impact on air change rates, indoor air quality, and chemistry, but the perception of the Swedish primary school children did not differ with the ventilation system of the classrooms.

Keywords: indoor air pollutants, indoor climate, indoor chemistry, air change rate, perception

Procedia PDF Downloads 21
191 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze

Abstract:

The antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel-slip-based intelligent controller with variable zero lag compensation for ABS. The controller is required to achieve very fast, accurate wheel slip tracking during hard braking and to eliminate chattering, with improved transient and steady-state performance, while shortening the stopping distance using an effective braking torque less than the maximum allowable torque to bring a braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 ms⁻¹ on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC control variable by eliminating steady-state error and providing improved bandwidth to eliminate the effect of high-frequency noise such as chattering during braking. The simulation results showed that the FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m), and 50% (69.13 m) on dry, wet, cobblestone, and snow road surface conditions, respectively. Generally, the proposed system used an effective braking torque that is less than the maximum allowable braking torque to achieve efficient wheel slip tracking and overall robust control performance on different road surfaces.

Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking

Procedia PDF Downloads 102
190 Development and Validation of Work Movement Task Analysis: Part 1

Authors: Mohd Zubairy Bin Shamsudin

Abstract:

Work-related Musculoskeletal Disorders (WMSDs) are one of the occupational health problems encountered by workers all over the world. In Malaysia, there has been an increasing trend over the years, particularly in the manufacturing sector. Current methods for observing workplace WMSDs are self-report questionnaires, observation, and direct measurement. Observational methods are most frequently used by researchers and practitioners because they are simple, quick, and versatile when applied at the worksite. However, some limitations have been identified, e.g., some approaches do not cover a wide spectrum of biomechanical activity and are not sufficiently sensitive to assess the actual risks. This paper elucidates the development of the Work Movement Task Analysis (WMTA), an observational tool for industrial practitioners, especially untrained personnel, to assess WMSD risk factors and provide a basis for suitable intervention. The first stage of the development protocol involved literature reviews, a practitioner survey, tool validation, and reliability testing. A total of six themes/comments were received in the face validity stage. The new revision of the WMTA consists of four postural sections (neck, back, shoulder & arms, and legs) and the associated risk factors: movement, load, coupling, and basic environmental factors (lighting, noise, odor, heat, and slippery floors). The inter-rater reliability study shows substantial agreement among raters, with K = 0.70. Meanwhile, the WMTA validation shows a significant association between WMTA scores and self-reported pain or discomfort for the back, shoulder & arms, and knee & legs, with p < 0.05. This tool is expected to provide a new workplace ergonomic observational tool to assess WMSDs for the next stage of the case study.
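
The inter-rater agreement statistic reported above (Cohen's kappa) can be computed as in the minimal sketch below; the two raters' WMTA postural scores are invented for illustration.

```python
# Cohen's kappa sketch for inter-rater reliability (illustrative ratings, not study data).
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 3, 1, 4, 2, 2, 3, 1, 4, 3, 2, 1]
rater_b = [2, 3, 1, 3, 2, 2, 3, 1, 4, 3, 2, 2]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")   # ~0.61-0.80 is usually read as substantial agreement
```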

Keywords: assessment, biomechanics, musculoskeletal disorders, observational tools

Procedia PDF Downloads 435
189 Sizing and Thermal Analysis of Mechanically Pumped Fluid Loop Thermal Control Technique for Small Satellite Scientific Applications

Authors: Shanmugasundaram Selvadurai, Amal Chandran

Abstract:

Small satellites have become an alternative low-cost solution for accomplishing specific missions such as Earth imaging, technology demonstration, education, and other commercial purposes. Small satellite missions focusing on infrared imaging applications require low temperatures for the scientific instruments, and such low temperatures can be achieved only using external cryocoolers, whose disadvantage is that they generate a large amount of waste heat. Existing passive thermal control techniques are not capable of handling such large thermal loads, and hence one of the traditional active Thermal Control Systems (TCS) is studied for a small satellite configuration. This work aims to downscale the existing Mechanically Pumped Fluid Loop (MPFL) TCS to a 27U CubeSat platform for a notional scientific instrument. The temperature-sensitive detector in the instrument is considered to be maintained between 130 K and 150 K to reduce dark current noise and increase the data quality. A single-phase fluid-based MPFL is chosen for this system-level study; this TCS consists of a microfluid pump, a micro-cryocooler, a fluid accumulator, external heaters, flow regulators, and sensors. This work also explains the thermal control system architecture with a conceptual design, the arrangement of all the components, and thermal analysis for different low-orbit conditions. Sizing and extensive trade studies for the components were conducted, and the results show that the single-phase MPFL system is able to handle the given thermal loads and maintain the satellite's interface temperature within the desired limit.

Keywords: active thermal control system, satellite thermal, mechanically pumped fluid loop system, cryogenics, cryocooler

Procedia PDF Downloads 210
188 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employs a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and Long-Term Evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 5
187 A Pilot Study of Influences of Scan Speed on Image Quality for Digital Tomosynthesis

Authors: Li-Ting Huang, Yu-Hsiang Shen, Cing-Ciao Ke, Sheng-Pin Tseng, Fan-Pin Tseng, Yu-Ching Ni, Chia-Yu Lin

Abstract:

Chest radiography is the most common technique for the diagnosis and follow-up of pulmonary diseases. However, lesions superimposed with normal structures are difficult to detect in chest radiography. Chest tomosynthesis is a relatively new technique for obtaining 3D section images from a set of low-dose projections acquired over a limited angular range. However, there are some limitations to chest tomosynthesis: patients undergoing tomosynthesis have to be able to hold their breath firmly for 10 seconds. A digital tomosynthesis system with an advanced reconstruction algorithm and a high-stability motion mechanism was developed by our research group, and the system is expected to perform a bidirectional chest scan within 10 seconds. The purpose of this study is to understand the influence of scan speed on image quality for our digital tomosynthesis system. The major factors that lead to image blurring are the motion of the X-ray source and the motion of the patient. For the former, an experiment imaging a chest phantom at three different scan speeds (6 cm/s, 8 cm/s, and 15 cm/s) was conducted to understand the influence of scan speed on image quality. For the latter, a normal Sprague-Dawley (SD) rat was imaged both alive and sacrificed to assess the impact of breathing motion on image quality. In both experiments, the profiles of the regions of interest (ROIs) and the contrast-to-noise ratios (CNRs) of the ROIs relative to normal tissue in the reconstructed images were examined to quantify the degradation of image quality. The preliminary results show no obvious degradation of image quality with increasing scan speed, possibly due to the advanced design of the system's hardware and software. This implies that a speed (15 cm/s) higher than that of the commercialized tomosynthesis system (12 cm/s) is achieved by the proposed system, and therefore a complete chest scan within 10 seconds is expected.
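
The contrast-to-noise ratio used to compare the reconstructions can be computed as in the minimal sketch below (not the authors' analysis code); the ROI and background arrays are placeholders for pixel values extracted from the reconstructed slices.

```python
# CNR sketch: |mean(ROI) - mean(background)| / std(background).
import numpy as np

def cnr(roi, background):
    return abs(roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(6)
lesion_roi = rng.normal(120.0, 10.0, size=(20, 20))       # feature ROI pixel values
normal_tissue = rng.normal(100.0, 10.0, size=(40, 40))    # reference tissue ROI

print(f"CNR = {cnr(lesion_roi, normal_tissue):.2f}")
```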

Keywords: chest radiography, digital tomosynthesis, image quality, scan speed

Procedia PDF Downloads 296
186 Analysis of a IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and greatly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model's ability to generalize. The performance of the model for the detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 133
185 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools that were initially developed on usual Euclidean spaces and have proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks came from combinatorics and graph theory. In recent years, there has been increasing interest in the investigation of one of the major mathematical tools for signal and image analysis: Partial Differential Equation (PDE) and variational methods on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operator, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations for both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
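
As a heavily simplified illustration (not the paper's operator) of what a p-harmonious-style scheme on a graph looks like, each free vertex can be updated by a convex combination of the min/max and the mean of its neighbours' values, echoing the tug-of-war-with-noise interpretation. The toy graph, boundary values, and weight alpha below are illustrative assumptions.

```python
# Toy p-harmonious-style relaxation on a grid graph (illustrative only).
import numpy as np
import networkx as nx

G = nx.grid_2d_graph(20, 20)                           # toy image-like graph
u = {v: 0.0 for v in G}
boundary = {**{(0, j): 0.0 for j in range(20)},
            **{(19, j): 1.0 for j in range(20)}}       # fixed boundary values
u.update(boundary)

alpha = 0.4          # plays the role of a (p-2)/p-type weight; value is illustrative
for _ in range(500):
    for v in G:
        if v in boundary:
            continue
        vals = [u[w] for w in G.neighbors(v)]
        # tug-of-war part: 0.5*(max+min); noise part: mean of neighbours
        u[v] = alpha * 0.5 * (max(vals) + min(vals)) + (1 - alpha) * np.mean(vals)

print(round(u[(10, 10)], 3))    # interpolates between the two boundary values
```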

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 474
184 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

Aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the apparent water depth estimate is affected by noise, according to our numerical experiment. Overall, the accuracy of the refraction correction method depends on various factors such as the location, the image acquisition, and the GPS measurement conditions. The most effective method can be selected by statistical selection (e.g., leave-one-out cross-validation).
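
The empirical correction idea can be sketched as a simple linear fit between apparent and true depths at calibration points (not the authors' workflow); the synthetic depths below assume the apparent depth is roughly the true depth divided by the refractive index of water (~1.34).

```python
# Empirical refraction-correction sketch: fit true depth vs. SfM-MVS apparent depth,
# then apply the fitted relation to correct the whole bathymetric map.
import numpy as np

rng = np.random.default_rng(7)
true_depth = rng.uniform(0.2, 1.5, 40)                         # measured calibration depths (m)
apparent_depth = true_depth / 1.34 + rng.normal(0, 0.02, 40)   # refraction-shallowed estimates

slope, offset = np.polyfit(apparent_depth, true_depth, deg=1)  # empirical correction factor
corrected = slope * apparent_depth + offset

rmse = np.sqrt(np.mean((corrected - true_depth) ** 2))
print(f"correction factor = {slope:.2f}, RMSE = {rmse:.3f} m")
```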

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 275
183 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

Parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in the manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns. In the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy. Even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the effectiveness of the proposed approach. Finally, a discussion of using traditional experimental design versus the proposed approach for process optimization is provided.

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 112