Search results for: robust optimisation model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17453


17183 The Planner's Pentangle: A Proposal for a 21st-Century Model of Planning for Sustainable Development

Authors: Sonia Hirt

Abstract:

The Planner's Triangle, an oft-cited model that visually defined planning as the search for sustainability balancing the three basic priorities of equity, economy, and environment, has influenced planning theory and practice for a quarter of a century. In this essay, we argue that the triangle requires updating and expansion. Even if planners keep sustainability as their core aspiration at the center of their imaginary geometry, the triangle's vertices have to be rethought. Planners should move on to a 21st-century concept. We propose a Planner's Pentangle with five basic priorities as vertices of a new conceptual polygon. These five priorities are Wellbeing, Equity, Economy, Environment, and Esthetics (WE⁴). The WE⁴ concept more accurately and fully represents planning's history. This is especially true in the United States, where public art and public health played pivotal roles in the establishment of the profession in the late 19th and early 20th centuries. It also more accurately represents planning's future, as both health/wellness and aesthetic concerns are becoming increasingly important in the 21st century. The pentangle can become an effective tool for understanding and visualizing planning's history and present. Planning has a long history of representing urban presents and futures as conceptual models in visual form. Such models can play an important role in understanding and shaping practice. For over two decades, one such model, the Planner's Triangle, stood apart as the expression of planning's pursuit of sustainability. But if the model is outdated and insufficiently robust, it can diminish our understanding of planning practice, as well as the appreciation of the profession among non-planners. Thus, we argue for a new conceptual model of what planners do.

Keywords: sustainable development, planning for sustainable development, planner's triangle, planner's pentangle, planning and health, planning and art, planning history

Procedia PDF Downloads 121
17182 A Multi-Objective Methodology for Selecting Lean Initiatives in Modular Construction Companies

Authors: Saba Shams Bidhendi, Steven Goh, Andrew Wandel

Abstract:

The implementation of lean manufacturing initiatives has produced significant impacts in improving operational performance and reducing manufacturing wastes in the production process. However, selecting an appropriate set of lean strategies is critical to avoid misapplication of lean manufacturing techniques and a consequential increase in non-value-adding activities. To the authors' best knowledge, there is currently no methodology for selecting lean strategies that considers their impacts on manufacturing wastes and performance metrics simultaneously. In this research, a multi-objective methodology is proposed that suggests an appropriate set of lean initiatives based on their impacts on performance metrics and manufacturing wastes, within manufacturers' resource limitations. The proposed methodology suggests for implementation the set of lean initiatives that has the highest impact on the identified critical performance metrics and manufacturing wastes. Therefore, manufacturers can be assured that implementing the suggested lean tools improves their production performance and reduces manufacturing wastes at the same time. A case study was conducted to demonstrate the effectiveness of and validate the proposed model and methodology.
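As a toy illustration of the selection step (not the authors' method), the sketch below scores every feasible subset of candidate lean initiatives by a weighted sum of their impacts on performance metrics and manufacturing wastes, and keeps the best subset that fits a resource budget. All initiative names, impact scores, weights, and the budget are hypothetical placeholders.

```python
from itertools import combinations

# Hypothetical lean initiatives with illustrative scores: impact on key
# performance metrics, impact on manufacturing wastes, and implementation cost.
initiatives = {
    "5S":           {"metric_impact": 0.6, "waste_impact": 0.5, "cost": 20},
    "Kanban":       {"metric_impact": 0.7, "waste_impact": 0.6, "cost": 35},
    "SMED":         {"metric_impact": 0.5, "waste_impact": 0.7, "cost": 40},
    "TPM":          {"metric_impact": 0.8, "waste_impact": 0.4, "cost": 60},
    "Value stream": {"metric_impact": 0.4, "waste_impact": 0.8, "cost": 25},
}
budget = 100                    # assumed resource limitation
w_metric, w_waste = 0.5, 0.5    # assumed weights on the two objectives

def score(subset):
    # Weighted multi-objective score of a candidate set of initiatives.
    return sum(w_metric * initiatives[i]["metric_impact"] +
               w_waste * initiatives[i]["waste_impact"] for i in subset)

best = max(
    (subset for r in range(1, len(initiatives) + 1)
     for subset in combinations(initiatives, r)
     if sum(initiatives[i]["cost"] for i in subset) <= budget),
    key=score,
)
print("Selected initiatives:", best, "score:", round(score(best), 2))
```

In practice the exhaustive search above would be replaced by a proper multi-objective solver when the number of candidate initiatives grows.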

Keywords: lean manufacturing, lean strategies, manufacturing wastes, manufacturing performance, optimisation, decision making

Procedia PDF Downloads 175
17181 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in their characters. Exhaustive datasets are required to recognize the characters, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it on the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and different lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention Network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that used only the original grayscale images. When tested on each language separately, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
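A minimal sketch of the edge-augmented input described above: the grayscale character image is stacked with its Canny edge map and fed to a plain CNN classifier (the paper's VAN architecture is not reproduced here). The image size, Canny thresholds, and class count are illustrative assumptions.

```python
import cv2
import numpy as np
import tensorflow as tf

def to_two_channel(gray_img):
    """Stack a grayscale character image with its Canny edge map as a second channel."""
    edges = cv2.Canny(gray_img, 100, 200)           # threshold values are illustrative
    stacked = np.stack([gray_img, edges], axis=-1)  # shape (H, W, 2)
    return stacked.astype("float32") / 255.0

# Minimal CNN over the two-channel input (hypothetical 64x64 inputs and 50 classes).
n_classes = 50
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 2)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```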

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 36
17180 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency sub-band of the DWT and thereby leave valuable information in the other three sub-bands, the proposed algorithm simultaneously extracts features from all four sub-bands. The extracted features are not only a more accurate representation of the image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms. The experimental results show high detection rates.
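The sketch below illustrates only the feature-extraction stage under stated assumptions: a single-level Haar DWT, fixed 8×8 overlapping blocks, PCA to 16 components, and NumPy's lexicographic sort standing in for the radix sort. It is not the authors' implementation, and the SATS verification step is omitted.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def block_features(img, block=8):
    """Extract per-block features from all four DWT sub-bands of a grayscale image."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    subbands = np.stack([cA, cH, cV, cD], axis=-1)      # use all four sub-bands
    h, w, _ = subbands.shape
    feats, positions = [], []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            feats.append(subbands[i:i + block, j:j + block, :].ravel())
            positions.append((i, j))
    feats = np.asarray(feats)
    feats = PCA(n_components=16).fit_transform(feats)    # dimension reduction (PCA step)
    order = np.lexsort(feats.T[::-1])                    # stand-in for the radix sort
    return feats[order], np.asarray(positions)[order]
```

Matching rows of the sorted feature matrix would then be passed to the duplication-verification stage.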

Keywords: affine transformation, discrete wavelet transform, radix sort, SATS

Procedia PDF Downloads 212
17179 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and to false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detecting the R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel signal quality index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals using separate Kalman filters based on the individual SQIs. Data fusion of the HR estimates is then performed by weighting each estimate by the Kalman filter's SQI-modified innovation. The fused HR estimate is more accurate and robust than any of the individual estimates. The method was evaluated on the MIMIC II database from PhysioNet, using bedside monitor recordings of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.
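A simplified sketch of the fusion idea: one scalar Kalman filter per signal and quality-proportional weighting of the resulting estimates. The noise parameters, measurements, and SQI values are invented for illustration, and the paper's SQI-modified-innovation weighting is reduced here to a direct SQI weighting.

```python
import numpy as np

def kalman_hr(measurements, q=0.1, r=4.0, hr0=75.0):
    """Scalar Kalman filter tracking heart rate from one signal's beat detections."""
    hr, p, out = hr0, 1.0, []
    for z in measurements:
        p += q                      # predict: inflate uncertainty
        k = p / (p + r)             # Kalman gain
        hr += k * (z - hr)          # update with the new HR measurement
        p *= (1 - k)
        out.append(hr)
    return np.asarray(out)

def fuse(estimates, sqis):
    """Fuse per-signal HR estimates, weighting each by its signal quality index."""
    estimates, sqis = np.asarray(estimates), np.asarray(sqis)
    w = sqis / sqis.sum(axis=0)
    return (w * estimates).sum(axis=0)

# Hypothetical HR measurements (bpm) derived from ECG, ABP and EEG-artifact R-peaks.
ecg, abp, eeg = [78, 80, 150, 81], [77, 79, 80, 80], [76, 90, 79, 82]
est = [kalman_hr(s) for s in (ecg, abp, eeg)]
sqi = [[0.9, 0.9, 0.2, 0.9], [0.8, 0.8, 0.9, 0.9], [0.5, 0.3, 0.8, 0.6]]
print(fuse(est, sqi))   # the corrupted ECG sample (150 bpm) is down-weighted
```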

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 677
17178 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB arises when the bacteria become resistant to the drugs used to treat the disease. Current strategies to identify drug-resistant TB are laboratory-based, and it takes a long time to identify the resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to this problem. In this study, we propose an ML-based model to predict the antibiotic resistance phenotypes of TB isolates within minutes so that the right treatment can be given to the patient immediately. The study uses whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository, as training data. Samples from different countries were included in order to cover a large and diverse group of TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Three pieces of information were extracted from the WGS data to train the model: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class label for both. Five machine learning algorithms were considered to train the models: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, the latter being the merged F1 and F2 dataset. Additionally, an ensemble approach was used: the F1 and F2 datasets were run through the Gradient Boosting algorithm, the outputs were combined into a single dataset called the F1F2 ensemble dataset, and models were trained on this dataset with the five algorithms. As the experiments show, the ensemble-approach model trained with Gradient Boosting outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the other models.
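A hedged sketch of the two-stage ensemble idea described above, using synthetic binary variant matrices in place of the WGS-derived F1/F2 datasets and scikit-learn's gradient boosting and random forest implementations; the real pipeline's feature extraction and cross-validation are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrices: X_f1 (candidate-gene variants) and X_f2
# (predetermined resistance-associated variants); y is the resistance phenotype.
rng = np.random.default_rng(0)
X_f1, X_f2 = rng.integers(0, 2, (500, 40)), rng.integers(0, 2, (500, 15))
y = rng.integers(0, 2, 500)

X1_tr, X1_te, X2_tr, X2_te, y_tr, y_te = train_test_split(X_f1, X_f2, y, random_state=0)

# Stage 1: one gradient boosting model per dataset (F1 and F2).
gb1 = GradientBoostingClassifier().fit(X1_tr, y_tr)
gb2 = GradientBoostingClassifier().fit(X2_tr, y_tr)

# Stage 2: stack their predicted probabilities into an "F1F2 ensemble" dataset
# and train a final classifier on it (random forest shown here).
Z_tr = np.column_stack([gb1.predict_proba(X1_tr)[:, 1], gb2.predict_proba(X2_tr)[:, 1]])
Z_te = np.column_stack([gb1.predict_proba(X1_te)[:, 1], gb2.predict_proba(X2_te)[:, 1]])
final = RandomForestClassifier(random_state=0).fit(Z_tr, y_tr)
print("held-out accuracy:", final.score(Z_te, y_te))
```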

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 32
17177 Further Analysis of Global Robust Stability of Neural Networks with Multiple Time Delays

Authors: Sabri Arik

Abstract:

In this paper, we study the global asymptotic robust stability of delayed neural networks with norm-bounded uncertainties. By employing Lyapunov stability theory and the homeomorphic mapping theorem, we derive some new types of sufficient conditions ensuring the existence, uniqueness, and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. An important aspect of our results is their low computational complexity, as the reported results can be verified by checking some properties of symmetric matrices associated with the uncertainty sets of the network parameters. The obtained results are shown to be generalizations of some previously published corresponding results. Some comparative numerical examples are also constructed to compare our results with closely related results from the existing literature.

Keywords: neural networks, delayed systems, Lyapunov functionals, stability analysis

Procedia PDF Downloads 508
17176 Model Averaging for Poisson Regression

Authors: Zhou Jianhong

Abstract:

Model averaging is a desirable approach for dealing with model uncertainty, which, however, has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for Poisson regression. A simulation study shows that the proposed model average estimator outperforms some other commonly used model selection and model average estimators in some situations. The proposed methods are further applied to a real data example, where the advantage of the method is demonstrated again.

Keywords: model averaging, Poisson regression, Kullback-Leibler distance, statistics

Procedia PDF Downloads 502
17175 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming

Authors: Rui Li, Min Wen, Kim Bang Salling

Abstract:

For modern railways, maintenance is critical for ensuring safety, train punctuality, and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function minimizes the actual tamping machine costs. The research uses a simple dynamic model of the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account in scheduling tamping: (1) track degradation, expressed as the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiation of open track from station sections. A Danish railway track of 42.6 km between Odense and Fredericia is used as a case study for time periods of three and four years in the proposed maintenance model. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared with a previous model based on optimizing the number of tampings. Different maintenance strategies are discussed in the paper. The analysis of the results also shows that a longer predictive tamping planning period yields more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
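A minimal sketch of a condition-based tamping MIP in PuLP, assuming linear degradation of the longitudinal-level standard deviation and a fixed quality recovery per tamping. All numbers (sections, degradation rates, threshold, costs) are illustrative, and the formulation is far simpler than the paper's seven-aspect model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

sections = range(4)
years = range(1, 4)                   # 3-year planning horizon
sigma0 = [1.0, 1.3, 0.9, 1.5]         # initial std. dev. per section (mm), assumed
rate = [0.25, 0.30, 0.20, 0.35]       # degradation per year (mm/year), assumed
recovery = 0.6                        # quality gain per tamping (mm), assumed
threshold = 1.9                       # maintenance limit (mm), assumed
cost = 1.0                            # cost per tamping action

x = {(s, t): LpVariable(f"tamp_{s}_{t}", cat=LpBinary) for s in sections for t in years}

prob = LpProblem("tamping_schedule", LpMinimize)
prob += lpSum(cost * x[s, t] for s in sections for t in years)   # total tamping cost

for s in sections:
    for t in years:
        # quality after degradation up to year t, minus recoveries already applied,
        # must stay under the speed-dependent threshold
        prob += sigma0[s] + rate[s] * t - recovery * lpSum(
            x[s, k] for k in years if k <= t) <= threshold

prob.solve(PULP_CBC_CMD(msg=False))
plan = sorted((t, s) for (s, t) in x if x[s, t].value() > 0.5)
print("tamping plan (year, section):", plan)
```

The real model would additionally encode alignment, budget, and open-track/station constraints, and a recovery that depends on the track quality at the time of tamping.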

Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance

Procedia PDF Downloads 422
17174 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures

Authors: Milad Abbasi

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Because composite mechanical properties are difficult to predict directly using theoretical methods, a variety of studies have been conducted in the literature toward accurate simulation of the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing costly experimental effort. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability, and showed high accuracy and precision in predicting the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Notably, the proposed method considerably reduced the numerical and experimental effort required in the simulation and calibration of CFRP composite tubes subjected to compressive loading.

Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network

Procedia PDF Downloads 126
17173 A Machine Learning-Based Approach to Capture Extreme Rainfall Events

Authors: Willy Mbenza, Sho Kenjiro

Abstract:

Increasing efforts are directed towards a better understanding and foreknowledge of the likelihood of extreme precipitation, given the adverse effects associated with its occurrence. This knowledge plays a crucial role in long-term planning and in the formulation of effective emergency responses. However, predicting extreme events reliably presents a challenge to conventional empirical and statistical methods due to the involvement of numerous variables spanning different time and space scales. In recent times, machine learning (ML) has emerged as a promising tool for predicting the dynamics of extreme precipitation. ML techniques enable the consideration of both local and regional physical variables that have a strong influence on the likelihood of extreme precipitation. These variables encompass factors such as air temperature, soil moisture, specific humidity, and aerosol concentration, among others. In this study, we develop an ML model that incorporates both local and regional variables while establishing a robust relationship between physical variables and precipitation during the downscaling process. Furthermore, the model provides valuable information on the frequency and duration of a given intensity of precipitation.

Keywords: machine learning (ML), predictions, rainfall events, regional variables

Procedia PDF Downloads 70
17172 Modern Well Logs Technology to Improve Geological Model for Libyan Deep Sand Stone Reservoir

Authors: Tarek S. Duzan, Fisal Ben Ammer, Mohamed Sula

Abstract:

In some places within the Sirt Basin, Libya, it has been noticed that seismic data below the pre-Upper Cretaceous unconformity (PUK) cannot resolve the large-scale structural features and is unable to fully determine reservoir delineation. Seismic artifacts (multiples) are observed in the reservoir zone (Nubian Formation) below the PUK, which complicates seismic interpretation. The nature of the unconformity and the structures below it are still ambiguous and not fully understood, which creates a significant gap in characterizing the geometry of the reservoir; this uncertainty, combined with the lack of reliable seismic data, makes it difficult to build a robust geological model. High-resolution dipmeter data are highly useful in steeply dipping zones. This paper uses FMI and OBMI borehole images (dipmeter) to analyze the structures below the PUK unconformity from two wells drilled recently in the North Gialo field (a mature reservoir). In addition, the borehole images introduce new evidence that the PUK unconformity is angular and that the bedding planes within the Nubian Formation (below the PUK) are significantly tilted. Structural dips extracted from the high-resolution borehole images are used to construct a new geological model with the latest software technology. It is therefore important to use advanced well log technology such as FMI-HD for any future drilling and to update the existing model in order to minimize the structural uncertainty.

Keywords: FMI (formation micro imager), OBMI (oil base mud imager), UBI (ultrasonic borehole imager), Nubian sandstone reservoir in North Gialo

Procedia PDF Downloads 303
17171 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as well as in other parts of the world, is one of the most stressed communities due to factors such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of the crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The work leads to a smart assistant, built on analytics and big data, which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers, and dropout (to avoid overfitting). Models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights learnt on the ImageNet dataset to crop diseases, which reduces the number of epochs needed for training. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift, and blurring are used to improve accuracy on images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated on the tomato crop; in India, tomato is affected by 10 different diseases, and our model achieves an accuracy of more than 95% in correctly classifying them. The main contribution of our research is a personal assistant for farmers for managing plant disease; although the model was validated on tomato, it can easily be extended to other crops. Advances in computing and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and high smartphone penetration, implementation is highly feasible, resulting in timely advice to farmers, increased farmer income, and reduced input costs.
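A sketch of the transfer-learning and augmentation recipe, assuming a MobileNetV2 base with frozen ImageNet weights (the paper does not name its base network) and a ten-class disease head; the rotation, zoom, shift, and flip augmentations above are mirrored by Keras preprocessing layers, while the blurring augmentation is omitted for brevity.

```python
import tensorflow as tf

# Base network with ImageNet weights, frozen; a new head for n_classes crop diseases.
n_classes = 10                             # e.g., tomato disease classes (assumed)
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                          include_top=False, weights="imagenet")
base.trainable = False

augment = tf.keras.Sequential([            # on-the-fly augmentation of farm photos
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomTranslation(0.1, 0.1),
])

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)        # dropout to limit overfitting
outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```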

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 104
17170 The Role of Institutional Quality and Institutional Quality Distance on Trade: The Case of Agricultural Trade within the Southern African Development Community Region

Authors: Kgolagano Mpejane

Abstract:

The study applies a New Institutional Economics (NIE) analytical framework to trade in developing economies by assessing the impacts of institutional quality and institutional quality distance on agricultural trade, using panel data for 15 Southern African Development Community (SADC) countries over the years 1991-2010. The role of institutions in agricultural trade has not been accorded the necessary attention in the literature, particularly for developing economies. Therefore, the paper empirically tests the gravity model of international trade by measuring the impact of political, economic, and legal institutions on intra-SADC agricultural trade. The gravity model is noted for its explanatory power and strong theoretical foundation. However, the model has statistical shortcomings in dealing with zero trade values and heteroscedastic residuals, which lead to biased results. Therefore, this study employs a two-stage Heckman selection model with a probit equation to estimate the influence of institutions on agricultural trade. The selection stage includes the inverse Mills ratio to account for the selection bias of the gravity model. The Heckman model accounts for zero trade values and is robust in the presence of heteroscedasticity. The empirical results of the study support the NIE premise that institutions matter in trade. The results demonstrate that institutions determine bilateral agricultural trade at different margins, with political institutions having a positive and significant influence on bilateral agricultural trade flows within the SADC region. Legal and economic institutions have significant and negative effects on SADC trade. Furthermore, the results confirm that institutional quality distance influences agricultural trade: legal and political institutional distance have a positive and significant influence on bilateral agricultural trade, while the influence of economic institutional quality distance is negative and insignificant. The results imply that non-trade barriers, in the form of institutional quality and institutional quality distance, are significant factors limiting intra-SADC agricultural trade. Therefore, gains from intra-SADC agricultural trade can be attained through the improvement of institutions within the region.

Keywords: agricultural trade, institutions, gravity model, SADC

Procedia PDF Downloads 136
17169 Implementation and Validation of a Damage-Friction Constitutive Model for Concrete

Authors: L. Madouni, M. Ould Ouali, N. E. Hannachi

Abstract:

Two constitutive models for concrete are available in ABAQUS/Explicit, the Brittle Cracking Model and the Concrete Damaged Plasticity Model, and their suitability and limitations are well known. The aim of the present paper is to implement a damage-friction concrete constitutive model and to evaluate the performance of this model by comparing the predicted response with experimental data. The constitutive formulation of this material model is reviewed. In order to have consistent results, the parameter identification and calibration for the model have been performed. Several numerical simulations are presented in this paper, whose results allow for validating the capability of the proposed model for reproducing the typical nonlinear performances of concrete structures under different monotonic and cyclic load conditions. The results of the evaluation will be used for recommendations concerning the application and further improvements of the investigated model.

Keywords: Abaqus, concrete, constitutive model, numerical simulation

Procedia PDF Downloads 348
17168 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization

Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang

Abstract:

The product quality of multi-point dieless forming (MDF) is known to depend on the process parameters. Moreover, variation in friction and material properties may have a substantially adverse influence on the final product quality. This study proposes a way to compensate for MDF product defects by minimizing the sensitivity to variations in the noise parameters. This is attained with a reliability-based robust optimization (RRO) technique that obtains the optimal process settings of the controllable parameters. Initially, two MDF finite element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses were consequently performed in the commercial software ABAQUS to obtain the correlation between the control process settings, the noise variation, and the product defects. The best prediction models were chosen from a family of metamodels to replace the computationally expensive FE simulations. A genetic algorithm (GA) was applied to determine the optimal settings of the control parameters, and Monte Carlo analysis (MCA) was executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental results show that adjusting the control parameters in the final forming process leads to a considerably better-quality product.

Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling

Procedia PDF Downloads 234
17167 Extended Kalman Filter Based Direct Torque Control of Permanent Magnet Synchronous Motor

Authors: Liang Qin, Hanan M. D. Habbi

Abstract:

A robust sensorless speed estimation scheme for a permanent magnet synchronous motor (PMSM) is presented, in which the stator flux components and rotor speed are estimated with the Extended Kalman Filter (EKF). The PMSM and its EKF models are built in the Matlab/Simulink environment. The proposed EKF speed estimation method is also shown to be insensitive to PMSM parameter variations. Simulation results demonstrate good performance and robustness.
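For orientation, the sketch below is a generic discrete-time EKF skeleton; the PMSM-specific state vector (stator flux components, rotor speed and angle), transition function, and Jacobians are not reproduced here and would be supplied as f, F, h, and H.

```python
import numpy as np

class EKF:
    """Generic discrete-time extended Kalman filter (predict/update skeleton)."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        # f: nonlinear state transition, F: its Jacobian at the current state
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h, H):
        # h: measurement function, H: its Jacobian at the predicted state
        y = z - h(self.x)                        # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

In a sensorless PMSM drive, predict() would propagate the motor model at each sampling instant and update() would correct the state with the measured stator currents, from which the rotor speed estimate is read off.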

Keywords: DTC, Extended Kalman Filter (EKF), PMSM, sensorless control, anti-windup PI

Procedia PDF Downloads 644
17166 Mathematical Modelling of a Low Tip Speed Ratio Wind Turbine for System Design Evaluation

Authors: Amir Jalalian-Khakshour, T. N. Croft

Abstract:

Vertical axis wind turbine (VAWT) systems are becoming increasingly popular as they have a number of advantages over traditional wind turbines, including reliability and ease of transportation and manufacturing. These attributes could make the technology useful in developing economies. The performance characteristics of a VAWT differ from those of a horizontal axis wind turbine, which can be attributed to its low tip speed ratio operation. To unlock the potential of these VAWT systems, their operational behaviour across a number of system topologies and environmental conditions needs to be understood. In this study, a non-linear dynamic simulation method was developed in Matlab and validated against in-field data from a large-scale prototype with an 8-meter rotor diameter. This simulation method was used to determine the performance characteristics of a number of control methods and system topologies. The motivation for this research was to develop a simulation method that accurately captures the operating behaviour while remaining computationally inexpensive. The model was used to evaluate performance through parametric studies and optimisation techniques, and the study gave useful insights into the applications and energy generation potential of this technology.

Keywords: power generation, renewable energy, rotordynamics, wind energy

Procedia PDF Downloads 287
17165 Clustering-Based Threshold Model for Condition Rating of Concrete Bridge Decks

Authors: M. Alsharqawi, T. Zayed, S. Abu Dabous

Abstract:

To ensure the safety and serviceability of bridge infrastructure, accurate condition assessment and rating methods are needed to provide a basis for bridge Maintenance, Repair, and Replacement (MRR) decisions. In North America, the common practice for assessing the condition of bridges is visual inspection. This practice is limited to detecting surface defects and external flaws, and the thresholds that define the severity of bridge deterioration are selected arbitrarily. The current research discusses the main deteriorations and defects identified during visual inspection and Non-Destructive Evaluation (NDE). NDE techniques are becoming popular for augmenting visual examination during inspection in order to detect subsurface defects. Quality inspection data and accurate condition assessment and rating form the basis for determining appropriate MRR decisions. Thus, in this paper, a novel method for bridge condition assessment based on Quality Function Deployment (QFD) theory is utilized. The QFD model is designed to provide an integrated condition by evaluating both the surface and subsurface defects of concrete bridges. Moreover, an integrated condition rating index with four thresholds is developed based on the QFD condition assessment model, using the K-means clustering technique. Twenty case studies are analyzed by applying the QFD model and implementing the developed rating index. The results from the analyzed case studies show that the proposed threshold model produces robust MRR recommendations consistent with the decisions and recommendations made by bridge managers on these projects. The proposed method is expected to advance the state of the art of bridge condition assessment and rating.
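A small sketch of the threshold-derivation step: K-means with four clusters over hypothetical integrated QFD condition scores, with thresholds placed midway between adjacent cluster centres. The scores and the midpoint placement rule are assumptions for illustration, not the paper's data or exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical integrated QFD condition scores (0-100) for a set of bridge decks.
scores = np.array([92, 88, 75, 71, 66, 58, 55, 49, 41, 37, 30, 22]).reshape(-1, 1)

# Four clusters -> four condition-rating categories; thresholds between centres.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
centres = np.sort(km.cluster_centers_.ravel())[::-1]
thresholds = [(centres[i] + centres[i + 1]) / 2 for i in range(3)]
print("cluster centres:", centres.round(1))
print("rating thresholds:", np.round(thresholds, 1))
```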

Keywords: concrete bridge decks, condition assessment and rating, quality function deployment, k-means clustering technique

Procedia PDF Downloads 205
17164 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming

Authors: David Muyise

Abstract:

Mixed Integer Programming (MIP) is an approach that involves the optimization of a range of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste-plastic-to-fuel-oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing waste plastic into a cleaner fuel that can be used to power diesel/paraffin engines, so as to (1) reduce the negative environmental impacts associated with plastic pollution and (2) narrow the energy gap by utilizing the fuel oil. A programming model was established and tested in two case-study applications: small-scale deployment in rural towns and large-scale deployment across the country's major cities. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed, the size, location, and number of plants, and the downstream fuel applications were made concurrently, based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model combines qualitative data gathered from waste plastic pickers at landfills and from potential investors with quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a decentralized system is only suitable for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be ideal locations for the deployment of distributed processing systems, whereas Kampala, Mbarara, and Gulu were found to be the ideal locations to initially deploy the decentralized pyrolysis technology. We conclude that the model findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste-plastic-to-fuel processing in Uganda and elsewhere in developing economies.

Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing

Procedia PDF Downloads 111
17163 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide one or more bias-correction steps based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between the estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared with other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
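As a rough analogue of the bilinear estimation, the sketch below alternates non-negative least-squares updates of X and β across samples. It is a deliberate simplification of XAEM's AEM algorithm and omits quasi-mapping, the statistical read-count model, and bias terms; the matrix Y stands in for per-sample read counts over equivalence classes.

```python
import numpy as np
from scipy.optimize import nnls

def alternating_bilinear(Y, n_isoforms, n_iter=50, seed=0):
    """Alternating least-squares sketch of a bilinear model Y ~ X @ B, with both
    X (shared design) and B (per-sample expression) unknown and non-negative."""
    rng = np.random.default_rng(seed)
    n_classes, n_samples = Y.shape
    X = rng.random((n_classes, n_isoforms))
    B = np.zeros((n_isoforms, n_samples))
    for _ in range(n_iter):
        # "E-like" step: estimate expression B for each sample given the current X
        for j in range(n_samples):
            B[:, j], _ = nnls(X, Y[:, j])
        # "M-like" step: re-estimate the shared design X row by row given B
        for i in range(n_classes):
            X[i, :], _ = nnls(B.T, Y[i, :])
        X /= X.sum(axis=0, keepdims=True) + 1e-12   # keep columns on a common scale
    # final expression estimates under the last (normalized) design
    for j in range(n_samples):
        B[:, j], _ = nnls(X, Y[:, j])
    return X, B
```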

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 129
17162 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method

Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga

Abstract:

Machining, or metal cutting, is one of the most widely used production processes in industry. The quality of the process and of the resulting machined product depends on parameters such as tool geometry, material, and cutting conditions. However, the relationships between these parameters and the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamics (SPH) methodology were performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16%, when compared with the empirical values. The simulated values of flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle, and tool tip radius on the simulation.
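For reference, the Johnson-Cook model used in the comparison expresses the flow stress in its standard form (symbols follow common usage; the parameter values fitted for AISI 1045 are not reproduced here):

```latex
\sigma = \left(A + B\,\varepsilon^{\,n}\right)
         \left(1 + C \ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
         \left(1 - \left(\frac{T - T_r}{T_m - T_r}\right)^{m}\right)
```

where ε is the equivalent plastic strain, ε̇/ε̇₀ the normalized strain rate, and T_r and T_m the reference and melting temperatures. The Zerilli-Armstrong model differs mainly in coupling temperature and strain rate through thermally activated terms.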

Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses

Procedia PDF Downloads 243
17161 Microwave-Assisted Chemical Pre-Treatment of Waste Sorghum Leaves: Process Optimization and Development of an Intelligent Model for Determination of Volatile Compound Fractions

Authors: Daneal Rorke, Gueguim Kana

Abstract:

The shift towards renewable energy sources for biofuel production has received increasing attention. However, the use and pre-treatment of lignocellulosic material are hampered by the generation of fermentation inhibitors, which severely impact the feasibility of bioprocesses. This study reports the profiling of all volatile compounds generated during microwave-assisted chemical pre-treatment of sorghum leaves. Furthermore, the optimization of reducing sugar (RS) yield from microwave-assisted acid pre-treatment of sorghum leaves was assessed; the model gave a coefficient of determination (R²) of 0.76 and an optimal RS yield of 2.74 g FS/g substrate. An intelligent model developed to predict the volatile compound fractions gave R² values of up to 0.93 for 21 volatile compounds. Sensitivity analysis revealed that furfural and phenol exhibited high sensitivity to acid concentration, alkali concentration, and S:L ratio, with phenol also showing high sensitivity to microwave duration and intensity. These findings illustrate the potential of using an intelligent model to predict the profile of volatile compounds generated during pre-treatment of sorghum leaves, in order to establish a more robust and efficient pre-treatment regime for biofuel production.

Keywords: artificial neural networks, fermentation inhibitors, lignocellulosic pre-treatment, sorghum leaves

Procedia PDF Downloads 224
17160 Model Driven Architecture Methodologies: A Review

Authors: Arslan Murtaza

Abstract:

Model Driven Architecture (MDA) is a technique presented by the OMG (Object Management Group) for software development in which different models are proposed and then converted into code. The main idea is to specify the task using a Platform Independent Model (PIM), transform it into a Platform Specific Model (PSM), and then convert it into code. This review paper describes some challenges and issues faced in MDA, the types and transformations of models (e.g., CIM, PIM, and PSM), and an evaluation of MDA-based methodologies.

Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies

Procedia PDF Downloads 438
17159 The Influence of the Concentration and Temperature on the Rheological Behavior of Carbonyl-Methylcellulose

Authors: Mohamed Rabhi, Kouider Halim Benrahou

Abstract:

The rheological properties of carbonyl-methylcellulose (CMC) solutions of different concentrations (25,000, 50,000, 60,000, 80,000 and 100,000 ppm) were studied at different temperatures. We found that all CMC solutions present pseudo-plastic behavior and follow the Ostwald-de Waele model. The objective of this work is the modeling of CMC flow with the Cross model, which gives the variation of viscosity with shear rate. This model allowed us to fit the rheological characteristics of the CMC solutions more closely. A comparison between the Cross model and the Ostwald model was made: the fitting parameters of the Cross model were determined by numerical simulation to bring the experimental curve and the curves given by the two models into agreement. Our study has shown that the Cross model describes the flow of CMC well at low concentrations.
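For reference, the two constitutive equations compared above have the following standard forms, written with common symbols (η is the apparent viscosity, γ̇ the shear rate); the fitted parameter values for CMC are not reproduced here:

```latex
\text{Ostwald--de Waele:}\quad \eta(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1},
\qquad
\text{Cross:}\quad \eta(\dot{\gamma}) = \eta_{\infty} + \frac{\eta_{0}-\eta_{\infty}}{1+\left(\lambda\,\dot{\gamma}\right)^{m}}
```

where K and n are the consistency and flow behavior indices, η₀ and η∞ the zero-shear and infinite-shear viscosities, λ a characteristic time, and m the rate index; pseudo-plastic behavior corresponds to n < 1.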

Keywords: CMC, rheological modeling, Ostwald model, cross model, viscosity

Procedia PDF Downloads 373
17158 Analyzing the Effects of Real Income and Biomass Energy Consumption on Carbon Dioxide (CO2) Emissions: Empirical Evidence from the Panel of Biomass-Consuming Countries

Authors: Eyup Dogan

Abstract:

This empirical study aims to analyze the impacts of real income and biomass energy consumption on the level of emissions in the EKC model for a panel of biomass-consuming countries over the period 1980-2011. Because we detect the presence of cross-sectional dependence and heterogeneity across countries in the analyzed data, we use panel estimation methods robust to cross-sectional dependence and heterogeneity. The CADF and CIPS panel unit root tests indicate that carbon emissions, real income, and biomass energy consumption are stationary at first differences. The LM bootstrap panel cointegration test shows that the analyzed variables are cointegrated. Results from the panel group-mean DOLS and panel group-mean FMOLS estimators show that an increase in biomass energy consumption decreases CO2 emissions and that the EKC hypothesis is validated. Countries are therefore advised to boost their production and use of biomass energy to achieve lower emissions.
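The reduced-form EKC specification implied by the abstract can be written in its standard panel form (an assumed notation, not necessarily the authors' exact equation):

```latex
\ln CO_{2,it} = \alpha_i + \beta_1 \ln Y_{it} + \beta_2 \left(\ln Y_{it}\right)^2 + \beta_3 \ln BIO_{it} + \varepsilon_{it}
```

where Y is real income and BIO is biomass energy consumption; the EKC hypothesis is supported when β₁ > 0 and β₂ < 0.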

Keywords: biomass energy, CO2 emissions, EKC model, heterogeneity, cross-sectional dependence

Procedia PDF Downloads 277
17157 3D Model of Rain-Wind Induced Vibration of Inclined Cable

Authors: Viet-Hung Truong, Seung-Eock Kim

Abstract:

Rain-wind induced vibration of inclined cables is a special aerodynamic phenomenon because it is easily influenced by many factors, especially the distribution of the rivulet and the wind velocity. This paper proposes a new 3D model of an inclined cable based on a single degree-of-freedom model. The aerodynamic forces are first established and verified against existing results from a 2D model. The 3D model of the inclined cable is then developed and applied to assess the effects of the wind velocity distribution and the continuity of rivulets on the cable. Finally, an inclined cable model with small sag is investigated.

Keywords: 3D model, rain - wind induced vibration, rivulet, analytical model

Procedia PDF Downloads 472
17156 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site

Authors: Duncan Fraser

Abstract:

A 3-Dimensional (3D) conceptual site model was developed using the Leapfrog Works® platform utilising a comprehensive historical dataset for a large former Gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities. A Newer Volcanic (basaltic) outcrop covered approximately half of the site and was overlying a fractured Melbourne formation (Siltstone) bedrock outcropping over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and assess the potential causality between potential sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-story underground basements. To assess financial liabilities associated with the offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies to prevent the unnecessary treatment of material and reduce costs.

Keywords: 3D model, contaminated land, Leapfrog, remediation

Procedia PDF Downloads 116
17155 Robust and Real-Time Traffic Counting System

Authors: Hossam M. Moftah, Aboul Ella Hassanien

Abstract:

In recent years, the importance of automatic traffic control has increased due to traffic congestion, especially in big cities, where it is needed for signal control and efficient traffic management. Traffic counting, as a form of traffic control, is important for knowing the road traffic density in real time. This paper presents a fast and robust traffic counting system using different image processing techniques. The proposed system is composed of four fundamental building phases: image acquisition, pre-processing, object detection, and counting of the connected objects. The object detection phase comprises five steps: subtracting the background, converting the image to binary, closing gaps and connecting nearby blobs, smoothing the image to remove noise and very small objects, and detecting the connected objects. Experimental results show the great success of the proposed approach.
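A minimal OpenCV sketch of the pipeline described above, assuming a video file input; the background-subtractor parameters, binarization threshold, kernel size, and minimum blob area are illustrative, not the paper's settings.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")        # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
min_area = 400                               # assumed minimum vehicle blob size (px)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)                                  # subtract background
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # binarize, drop shadows
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)       # close gaps, join blobs
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # count connected components large enough to be vehicles (label 0 is background)
    vehicles = sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area)
    print("vehicles in frame:", vehicles)

cap.release()
```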

Keywords: traffic counting, traffic management, image processing, object detection, computer vision

Procedia PDF Downloads 278
17154 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
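A hedged sketch of the final regression step: PCA for dimension reduction followed by a random-forest regressor, trained on a synthetic stand-in for the model-generated ageing database (the actual pipeline also uses t-SNE and an SVM alternative, and its indicators come from the aged Newman model rather than random numbers).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the model-generated database: each row holds
# non-destructive ageing indicators; the target is remaining useful life (cycles).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 30))                 # simulated ageing indicators
rul = 1500 - 40 * X[:, 0] + 25 * X[:, 1] + rng.normal(scale=30, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, rul, random_state=1)
model = make_pipeline(PCA(n_components=10), RandomForestRegressor(random_state=1))
model.fit(X_tr, y_tr)

rel_err = np.mean(np.abs(model.predict(X_te) - y_te) / y_te)
print(f"mean relative error: {rel_err:.1%}")    # the paper reports roughly 5% error
```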

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 54