Search results for: forecasting accuracy

1720 A Fuzzy Approach to Liver Tumor Segmentation with Zernike Moments

Authors: Abder-Rahman Ali, Antoine Vacavant, Manuel Grand-Brochier, Adélaïde Albouy-Kissi, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for liver lesions in regions of interest within MRI (Magnetic Resonance Imaging). This approach, based on a two-cluster Fuzzy C-Means methodology, incorporates a variable compactness parameter to handle uncertainty. Fine boundaries are detected by a local recursive merging of ambiguous pixels using sequential forward floating selection with Zernike moments. The method has been tested on both synthetic and real images. When applied to synthetic images, the proposed approach provides good performance: the segmentations obtained are accurate, their shape is consistent with the ground truth, and the extracted information is reliable. The results obtained on MR images confirm these observations. Our approach allows, even for difficult MR images, the extraction of a segmentation with good accuracy and shape, which implies that the geometry of the tumor is preserved for further clinical activities (such as automatic extraction of pharmacokinetic properties, lesion characterization, etc.).
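
As an illustration of the two-cluster Fuzzy C-Means core of such an approach, the sketch below clusters the pixel intensities of a region of interest into lesion and background memberships. It is a minimal NumPy version of standard FCM under assumed parameters (fuzzifier m = 2, a synthetic patch), not the authors' implementation; the compactness term and the Zernike-based boundary refinement are omitted.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means on a 1-D feature vector (e.g. pixel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)    # membership-weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u_new = 1.0 / (d ** (2 / (m - 1)))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# synthetic ROI: dark background with a brighter "lesion" patch (assumption)
roi = np.full((64, 64), 40.0)
roi[20:40, 25:45] = 130.0
roi += np.random.default_rng(1).normal(0, 8, roi.shape)

centers, u = fuzzy_c_means(roi.ravel())
lesion_cluster = int(np.argmax(centers))         # brighter cluster taken as lesion
mask = (np.argmax(u, axis=1) == lesion_cluster).reshape(roi.shape)  # defuzzification
print("cluster centers:", centers, "lesion pixels:", mask.sum())
```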

Keywords: defuzzification, floating search, fuzzy clustering, Zernike moments

Procedia PDF Downloads 442
1719 Comparison of Efficient Production of Small Module Gears

Authors: Vaclav Musil, Robert Cep, Sarka Malotova, Jiri Hajnys, Frantisek Spalek

Abstract:

The new designs of satellite gears comprising a number of small gears place high requirements on the precise production of small module gears. The objective of the experimental work reported in this article was to compare the conventional rolling gear cutting technology with the modern wire electrical discharge machining (WEDM) technology for the production of a small module gear (m = 0.6 mm, thickness 2.5 mm, material 30CrMoV9). The WEDM technology consists in copying the gearing profile from the rendered trajectory, which is then transferred to the path of a wire electrode. During the experiment, we focused on comparing these production methods. The main measured parameters that significantly influence lifetime and noise were chosen. The first was the accuracy of the gearing profile with respect to the mathematical model. The second monitored parameter was the roughness and surface topology of the gear tooth side. The experiment demonstrated the high accuracy of the WEDM technology, but a lower quality of the machined surface.

Keywords: precision of gearing, small module gears, surface topology, WEDM technology

Procedia PDF Downloads 219
1718 Nonlinear Analysis of Shear Deformable Deep Beam Resting on Nonlinear Two-Parameter Random Soil

Authors: M. Seguini, D. Nedjar

Abstract:

In this paper, the nonlinear analysis of a Timoshenko beam undergoing moderately large deflections and resting on a nonlinear two-parameter random foundation is presented, taking into account the effects of shear deformation, the variation of the beam's properties and the spatial variability of the soil characteristics. The probabilistic finite element analysis has been performed using Timoshenko beam theory with the von Kármán nonlinear strain-displacement relationships, combined with Vanmarcke's theory and Monte Carlo simulations, and implemented in a Matlab program. Numerical examples of the newly developed model are conducted to confirm its efficiency and accuracy and the importance of accounting for the second foundation parameter (Winkler-Pasternak). The results obtained from the developed model are then presented and compared with those available in the literature to examine how accounting for shear deformation and the spatial variability of the soil's characteristics affects the response of the system.

Keywords: nonlinear analysis, soil-structure interaction, large deflection, Timoshenko beam, Euler-Bernoulli beam, Winkler foundation, Pasternak foundation, spatial variability

Procedia PDF Downloads 311
1717 Intrusion Detection System Using Linear Discriminant Analysis

Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou

Abstract:

Most existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and less accurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results show that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudo-inverse LDA method when large training data are available.
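
A minimal sketch of the first solution (PCA followed by LDA, then KNN classification) is given below using scikit-learn. The synthetic dataset, the reduced dimension and the neighbour count are assumptions for illustration, not the KDDcup99/NSL-KDD setup used by the authors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# synthetic stand-in for high-dimensional traffic records (not KDDcup99/NSL-KDD)
X, y = make_classification(n_samples=2000, n_features=100, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA first reduces the dimension so the LDA within-class scatter matrix is no
# longer singular (the SSS problem); KNN then classifies in the LDA subspace.
model = make_pipeline(PCA(n_components=30),
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print("PCA+LDA+KNN accuracy:", model.score(X_te, y_te))
```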

Keywords: LDA, Pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99

Procedia PDF Downloads 216
1716 Resolution and Experimental Validation of the Asymptotic Model of a Viscous Laminar Supersonic Flow around a Thin Airfoil

Authors: Eddegdag Nasser, Naamane Azzeddine, Radouani Mohammed, Ensam Meknes

Abstract:

In this study, we are interested in the asymptotic modeling of the two-dimensional stationary supersonic flow of a viscous compressible fluid around a wing airfoil. The aim of this article is to solve the partial differential equations of the flow far from the leading edge and near the wall using the triple-deck technique, which restores precision in accordance with the principle of least degeneracy. In order to validate our theoretical model, the results obtained are compared with experimental results. This comparison shows that the model results are quantitatively acceptable with respect to the experimental data. The experimental study was conducted using the AF300 supersonic wind tunnel and a reduced NACA airfoil model with two pressure taps on the extrados. In this experiment, we considered the incident upstream supersonic Mach number over a dissymmetric NACA airfoil wing. The validation and the accuracy of the results support our model.

Keywords: supersonic, viscous, triple deck technique, asymptotic methods, AF300 supersonic wind tunnel, reduced airfoil model

Procedia PDF Downloads 224
1715 DOA Estimation Using Golden Section Search

Authors: Niharika Verma, Sandeep Santosh

Abstract:

Direction of arrival (DOA) estimation is a localization technique used in the communications field. Various algorithms have been developed for direction of arrival estimation, such as MUSIC, ROOT-MUSIC, etc. These algorithms depend on various parameters, such as the number of antenna array elements, the number of snapshots, and others. Basically, the MUSIC spectrum is evaluated and the peaks obtained are taken as the angles of arrival. The angles evaluated with this process depend on the scanning interval chosen, and the accuracy of the results depends on the coarseness of that interval. In this paper, golden section search is applied to the MUSIC algorithm, and therefore more accurate results are achieved. Initially, a coarse DOA estimation is performed using the MUSIC algorithm over the range -90 to 90 degrees at an interval of 10 degrees. The peaks obtained are then refined by a fine DOA estimation using golden section search. Also, the partitioning method is applied to estimate the number of signals incident on the antenna array. The dependency of the algorithm on the number of snapshots is also explained. Hence, accurate results are determined using this algorithm.
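
The sketch below illustrates the refinement idea: a coarse scan locates a peak within a 10-degree bracket, and golden section search then narrows the bracket to a fine DOA estimate. The pseudo-spectrum here is a simple synthetic stand-in for the MUSIC spectrum, and the bracket width and tolerance are assumptions.

```python
import numpy as np

def golden_section_max(f, a, b, tol=1e-3):
    """Locate the maximum of a unimodal function f on [a, b]."""
    gr = (np.sqrt(5) - 1) / 2                  # 1/phi, about 0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):                        # maximum lies in [a, d]
            b, d = d, c
            c = b - gr * (b - a)
        else:                                  # maximum lies in [c, b]
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

# synthetic stand-in for a MUSIC pseudo-spectrum with a true DOA at 23.4 degrees
true_doa = 23.4
spectrum = lambda theta: 1.0 / (1e-3 + np.sin(np.radians(theta - true_doa)) ** 2)

# coarse scan: -90..90 degrees in 10-degree steps, keep the bracket around the best peak
grid = np.arange(-90, 91, 10)
k = int(np.argmax([spectrum(t) for t in grid]))
a, b = grid[max(k - 1, 0)], grid[min(k + 1, len(grid) - 1)]

fine_doa = golden_section_max(spectrum, a, b)
print(f"coarse peak at {grid[k]} deg, refined DOA = {fine_doa:.2f} deg")
```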

Keywords: Direction of Arrival (DOA), golden section search, MUSIC, number of snapshots

Procedia PDF Downloads 434
1714 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier

Authors: Atanu K Samanta, Asim Ali Khan

Abstract:

Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of a brain tumor through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from this segmented tumor portion using the gray-level co-occurrence matrix (GLCM), and an artificial neural network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy using this method is found to be 98%, which is significantly good.
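
A compact sketch of the GLCM-feature-plus-ANN stage is shown below, using scikit-image and scikit-learn on synthetic texture patches. The libraries, the four GLCM properties and the synthetic data are assumptions for illustration (the level-set segmentation step is not reproduced), and graycomatrix/graycoprops assume scikit-image 0.19 or later.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def glcm_features(patch):
    """Texture features from the gray-level co-occurrence matrix of one patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

# synthetic stand-in for segmented tumor patches of two classes (not real MRI)
rng = np.random.default_rng(0)
patches, labels = [], []
for label, (mean, noise) in enumerate([(80, 5), (150, 40)]):
    for _ in range(60):
        img = np.clip(rng.normal(mean, noise, (32, 32)), 0, 255).astype(np.uint8)
        patches.append(glcm_features(img))
        labels.append(label)

X, y = np.array(patches), np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("ANN classification accuracy:", ann.score(X_te, y_te))
```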

Keywords: brain tumor, computer-aided diagnostic (CAD) system, gray-level co-occurrence matrix (GLCM), tumor segmentation, level set method

Procedia PDF Downloads 495
1713 Artificial Intelligence and Governance in Relevance to Satellites in Space

Authors: Anwesha Pathak

Abstract:

With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process vast amounts of data received from satellites. AI algorithms can analyse satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices. Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.

Keywords: satellite, space debris, traffic, threats, cyber security

Procedia PDF Downloads 55
1712 The Relationship between the Environmental and Financial Performance of Australian Electricity Producers

Authors: S. Forughi, A. De Zoysa, S. Bhati

Abstract:

The present study focuses on the environmental performance of companies in the electricity-producing sector and its relationship with their financial performance. We review the major studies that examined the relationship between the environmental and financial performance of firms in various industries. While classical economic debates consider environmentally friendly activities costly and harmful to a firm's profitability, it is claimed that firms will be rewarded with higher profitability in the long run through investments in environmentally friendly activities. In this context, prior studies have examined the relationship between the environmental and financial performance of firms operating in different industry sectors. Our study will employ an environmental indicator to increase the accuracy of the results; this indicator will serve as an independent variable in our econometric model to evaluate the impact of the firms' financial performance on their environmentally friendly activities in the context of companies operating in the Australian electricity-producing sector. We expect our methodology to contribute to the literature, and the findings of the study will help us provide recommendations and policy implications to electricity producers.

Keywords: Australian electricity sector, efficiency measurement, environmental-financial performance interaction, environmental index

Procedia PDF Downloads 309
1711 Cooperative Sensing for Wireless Sensor Networks

Authors: Julien Romieux, Fabio Verdicchio

Abstract:

Wireless Sensor Networks (WSNs), which sense environmental data with battery-powered nodes, require multi-hop communication. This power-demanding task adds an extra workload that is unfairly distributed across the network. As a result, nodes run out of battery at different times: this requires an impractical individual node maintenance scheme. Therefore we investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (where all nodes deplete their batteries almost at the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling approximation accuracy. Simulations show that our algorithm increases WSN operational life and spreads communication workload evenly. Results convey a counterintuitive conclusion: distributing workload fairly amongst nodes may not decrease the network power consumption and yet extend the WSN operational life. This is achieved as our cooperative approach decreases the workload of the most burdened cluster in the network.
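
As a rough illustration of a piecewise representation with controlled approximation error, the sketch below greedily extends each segment of a sensed signal until a straight-line fit would exceed a chosen error bound. The error bound, the synthetic signal and the greedy split rule are assumptions for illustration; the cooperative, in-network part of the proposed algorithm is not shown.

```python
import numpy as np

def piecewise_linear(signal, max_err=0.5):
    """Greedy piecewise-linear approximation with a per-segment error bound."""
    segments, start = [], 0
    while start < len(signal) - 1:
        end = start + 1
        while end + 1 < len(signal):
            t = np.arange(start, end + 2)
            a, b = np.polyfit(t, signal[start:end + 2], 1)      # candidate line fit
            if np.abs(a * t + b - signal[start:end + 2]).max() > max_err:
                break                                           # error bound exceeded
            end += 1
        t = np.arange(start, end + 1)
        segments.append((start, end, np.polyfit(t, signal[start:end + 1], 1)))
        start = end                                             # segments share endpoints
    return segments

# synthetic sensed signal (e.g. slowly varying temperature samples with noise)
t = np.linspace(0, 10, 400)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

segs = piecewise_linear(signal, max_err=0.1)
print(f"{len(signal)} samples compressed to {len(segs)} line segments")
```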

Keywords: cooperative signal processing, signal representation and approximation, power management, wireless sensor networks

Procedia PDF Downloads 377
1710 Path Planning for Multiple Unmanned Aerial Vehicles Based on Adaptive Probabilistic Sampling Algorithm

Authors: Long Cheng, Tong He, Iraj Mantegh, Wen-Fang Xie

Abstract:

Path planning is essential for UAVs (Unmanned Aerial Vehicles) with autonomous navigation in unknown environments. In this paper, an adaptive probabilistic sampling algorithm is proposed for GPS-denied environments, which can be utilized in the autonomous navigation system of multiple UAVs in a dynamically changing structured environment. This method can be used for Unmanned Aircraft Systems Traffic Management (UTM) solutions and in autonomous urban aerial mobility, where a number of platforms are expected to share the airspace. A path network is initially built offline based on the available environment map, and on-board sensor systems on the flying UAVs are used for continuous situational awareness and to inform the changes in the path network. Simulation results based on MATLAB and Gazebo in different scenarios, together with algorithm performance measurements, show the high efficiency and accuracy of the proposed technique in unknown environments.

Keywords: path planning, adaptive probabilistic sampling, obstacle avoidance, multiple unmanned aerial vehicles, unknown environments

Procedia PDF Downloads 136
1709 Simulation of Wind Generator with Fixed Wind Turbine under Matlab-Simulink

Authors: Mahdi Motahari, Mojtaba Farzaneh, Armin Parsian Nejad

Abstract:

The rapidly growing wind industry is strongly expressing the need for education and training worldwide, particularly at the system level. Modelling and simulating a wind generator system using Matlab-Simulink provides expert help in understanding wind systems engineering and system design. Working under Matlab-Simulink, we present the integration of the developed WECS model with the public electrical grid. A test of the calculated power and Cp against the equivalent experimental data, using statistical analysis, is performed. The statistical indicators of accuracy show good results for the presented method, with RMSE of 21% and 22%, MBE of 0.77% and 0.12%, and MAE of 3% and 4%. We also study its behavior when integrated into the whole power system. Three levels of wind speed have been chosen: low with a mean value of 5 m/s, medium with a mean value of 8 m/s, and high with a mean value of 12 m/s. These allow predicting and supervising the active power produced by the system, characterized respectively by mean powers of -150 kW, -250 kW and -480 kW, which is injected directly into the public electrical grid, and the reactive power, characterized respectively by mean powers of 60 kW, 180 kW and 320 kW, which is consumed by the wind generator.
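
For reference, the sketch below shows one way to compute the accuracy indicators quoted above (RMSE, MBE, MAE, expressed as percentages of the mean measured value); the sample arrays are placeholders, not the WECS data from this study, and the percentage normalisation is an assumption.

```python
import numpy as np

def accuracy_indicators(measured, simulated):
    """RMSE, MBE and MAE as percentages of the mean measured value (assumed normalisation)."""
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    err = simulated - measured
    scale = 100.0 / measured.mean()
    rmse = np.sqrt(np.mean(err ** 2)) * scale
    mbe = err.mean() * scale
    mae = np.abs(err).mean() * scale
    return rmse, mbe, mae

# placeholder power samples (kW), not the measurements from the paper
measured = [120, 150, 180, 210, 240]
simulated = [115, 160, 175, 220, 235]
rmse, mbe, mae = accuracy_indicators(measured, simulated)
print(f"RMSE: {rmse:.1f}%  MBE: {mbe:.2f}%  MAE: {mae:.1f}%")
```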

Keywords: modelling, simulation, wind generator, fixed speed wind turbine, Matlab-Simulink

Procedia PDF Downloads 611
1708 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry

Authors: Deepika Christopher, Garima Anand

Abstract:

To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs the best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; the top three best-performing models in this article are Logistic Regression, Deep Neural Network, and AdaBoost. The K-means algorithm is applied to establish and analyze four different customer clusters. This study has effectively identified customers that are at risk of churn, and its findings may be utilized to develop and execute strategies that lower customer attrition.
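
As a sketch of the winning baseline, the snippet below trains a logistic regression churn classifier and reports accuracy, precision, recall and F1, the same metrics quoted above. The synthetic, imbalanced dataset stands in for the telecom data, so the numbers will not match the paper's results.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# synthetic stand-in for the telecom churn dataset (binary: churn / no churn)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.73, 0.27],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=42)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(f"accuracy : {accuracy_score(y_te, pred):.4f}")
print(f"precision: {precision_score(y_te, pred):.4f}")
print(f"recall   : {recall_score(y_te, pred):.4f}")
print(f"F1 score : {f1_score(y_te, pred):.4f}")
```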

Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications

Procedia PDF Downloads 41
1707 A Deep Learning-Based Pedestrian Trajectory Prediction Algorithm

Authors: Haozhe Xiang

Abstract:

With the rise of the Internet of Things era, intelligent products are gradually integrating into people's lives. Pedestrian trajectory prediction has become a key issue, which is crucial for the motion path planning of intelligent agents such as autonomous vehicles, robots, and drones. In the current technological context, deep learning technology is becoming increasingly sophisticated and is gradually replacing traditional models. Pedestrian trajectory prediction algorithms combining neural networks and attention mechanisms have significantly improved prediction accuracy. Based on in-depth research on deep learning and pedestrian trajectory prediction algorithms, this article focuses on modeling the physical environment and learning the time dependence of historical trajectories. At the same time, social interaction between pedestrians and scene interaction between pedestrians and the environment are handled. An improved pedestrian trajectory prediction algorithm is proposed by analyzing existing model architectures. With the help of these improvements, acceptable predicted trajectories are successfully obtained, and experiments on public datasets demonstrate the algorithm's effectiveness.
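
A minimal PyTorch sketch of the sequence-modelling backbone such methods rely on is given below: an LSTM encodes the observed (x, y) trajectory and a linear head predicts the next positions. The layer sizes, horizons and random tensors are assumptions; the social-interaction, scene-interaction, attention and graph components of the proposed model are not reproduced here.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Encode an observed (x, y) trajectory and predict the next pred_len positions."""
    def __init__(self, hidden=64, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, pred_len * 2)

    def forward(self, obs):                 # obs: (batch, obs_len, 2)
        _, (h, _) = self.encoder(obs)       # h: (1, batch, hidden)
        out = self.head(h[-1])              # (batch, pred_len * 2)
        return out.view(-1, self.pred_len, 2)

# random stand-in for observed trajectories: 8 observed steps, 12 predicted steps
obs = torch.randn(16, 8, 2)
model = TrajectoryLSTM()
pred = model(obs)
print(pred.shape)                           # torch.Size([16, 12, 2])
```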

Keywords: deep learning, graph convolutional network, attention mechanism, LSTM

Procedia PDF Downloads 49
1706 A Greedy Alignment Algorithm Supporting Medication Reconciliation

Authors: David Tresner-Kirsch

Abstract:

Reconciling patient medication lists from multiple sources is a critical task supporting the safe delivery of patient care. Manual reconciliation is a time-consuming and error-prone process, and recently attempts have been made to develop efficiency- and safety-oriented automated support for professionals performing the task. An important capability of any such support system is automated alignment – finding which medications from a list correspond to which medications from a different source, regardless of misspellings, naming differences (e.g. brand name vs. generic), or changes in treatment (e.g. switching a patient from one antidepressant class to another). This work describes a new algorithmic solution to this alignment task, using a greedy matching approach based on string similarity, edit distances, concept extraction and normalization, and synonym search derived from the RxNorm nomenclature. The accuracy of this algorithm was evaluated against a gold-standard corpus of 681 medication records; this evaluation found that the algorithm predicted alignments with 99% precision and 91% recall. This performance is sufficient to support decision support applications for medication reconciliation.
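
A stripped-down sketch of the greedy alignment idea follows: score every pair of medication strings with a similarity measure, then repeatedly accept the best remaining pair above a threshold. Here difflib's SequenceMatcher stands in for the combination of edit distances, concept normalization and RxNorm synonym search used in the actual system; the threshold and example lists are assumptions.

```python
from difflib import SequenceMatcher

def greedy_align(list_a, list_b, threshold=0.6):
    """Greedily align two medication lists by descending string similarity."""
    scored = [(SequenceMatcher(None, a.lower(), b.lower()).ratio(), i, j)
              for i, a in enumerate(list_a) for j, b in enumerate(list_b)]
    scored.sort(reverse=True)
    used_a, used_b, pairs = set(), set(), []
    for score, i, j in scored:
        if score < threshold:
            break
        if i not in used_a and j not in used_b:
            pairs.append((list_a[i], list_b[j], round(score, 2)))
            used_a.add(i)
            used_b.add(j)
    return pairs

# hypothetical example lists with a misspelling and a brand/generic difference
hospital = ["Lipitor 20 mg", "metformn 500 mg", "lisinopril 10 mg"]
pharmacy = ["atorvastatin 20 mg", "metformin 500 mg", "lisinopril 10mg"]
for match in greedy_align(hospital, pharmacy):
    print(match)
```

Note that without the RxNorm synonym lookup, the brand/generic pair (Lipitor vs. atorvastatin) falls below the threshold; closing exactly that gap is what the concept extraction and synonym search steps are for.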

Keywords: clinical decision support, medication reconciliation, natural language processing, RxNorm

Procedia PDF Downloads 270
1705 Time-Frequency Feature Extraction Method Based on Micro-Doppler Signature of Ground Moving Targets

Authors: Ke Ren, Huiruo Shi, Linsen Li, Baoshuai Wang, Yu Zhou

Abstract:

Since discriminative features are required for ground moving target classification, we propose a new feature extraction method based on the micro-Doppler signature. Firstly, the time-frequency analysis of measured data indicates that the time-frequency spectrograms of the three kinds of ground moving targets, i.e., a single walking person, two people walking and a moving wheeled vehicle, are discriminative. Then, a three-dimensional time-frequency feature vector is extracted from the time-frequency spectrograms to capture these differences. Finally, a Support Vector Machine (SVM) classifier is trained with the proposed three-dimensional feature vector. The classification accuracy in categorizing the measured data into the three kinds of ground moving targets is found to be over 96%, which demonstrates the good discriminative ability of the proposed micro-Doppler feature.
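
The sketch below mirrors this processing chain at a toy scale: compute a time-frequency spectrogram of each return signal, reduce it to a small feature vector, and train an SVM. The simulated micro-Doppler signals, the three summary features and the SVM settings are assumptions, not the measured radar data or the exact feature vector of the paper.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

fs = 1000  # sampling rate (Hz), assumed

def simulate_return(mod_freq, mod_depth, rng, duration=1.0):
    """Toy micro-Doppler return: carrier with sinusoidal Doppler modulation plus noise."""
    t = np.arange(0, duration, 1 / fs)
    phase = 2 * np.pi * (100 * t + mod_depth * np.sin(2 * np.pi * mod_freq * t))
    return np.cos(phase) + 0.1 * rng.normal(size=t.size)

def tf_features(sig):
    """Three summary features of the spectrogram (peak spread, energy spread, peak drift)."""
    f, t, S = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)
    peak = f[np.argmax(S, axis=0)]                  # dominant frequency per time slice
    return np.array([peak.std(), S.std() / S.mean(), np.abs(np.diff(peak)).mean()])

rng = np.random.default_rng(0)
classes = {0: (2.0, 10.0), 1: (4.0, 25.0), 2: (0.5, 3.0)}   # toy target types
X, y = [], []
for label, (mf, md) in classes.items():
    for _ in range(40):
        X.append(tf_features(simulate_return(mf * rng.uniform(0.8, 1.2), md, rng)))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), test_size=0.3,
                                          random_state=0)
svm = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
print("SVM classification accuracy:", svm.score(X_te, y_te))
```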

Keywords: micro-doppler, time-frequency analysis, feature extraction, radar target classification

Procedia PDF Downloads 395
1704 Application of Two Stages Adaptive Neuro-Fuzzy Inference System to Improve Dissolved Gas Analysis Interpretation Techniques

Authors: Kharisma Utomo Mulyodinoto, Suwarno, A. Abu-Siada

Abstract:

Dissolved gas analysis is an impressive technique to detect and predict internal faults of transformers by using the gases generated in transformer oil samples. A number of methods are used to interpret the dissolved gases from transformer oil samples: the Doernenberg Ratio Method, the IEC (International Electrotechnical Commission) Ratio Method, and the Duval Triangle Method. While the assessment of dissolved gases within transformer oil samples has been standardized over the past two decades, analysis of the results is not always straightforward, as it depends on personnel expertise more than on mathematical formulas. To overcome this limitation, this paper is aimed at improving the interpretation of the Doernenberg Ratio Method, the IEC Ratio Method, and the Duval Triangle Method using a Two-Stage Adaptive Neuro-Fuzzy Inference System (ANFIS). Dissolved gas analysis data from 520 faulty transformers were analyzed to establish the proposed ANFIS model. Results show that the developed ANFIS model is accurate and can standardize the dissolved gas interpretation process with an accuracy higher than 90%.

Keywords: ANFIS, dissolved gas analysis, Doernenberg ratio method, Duval triangular method, IEC ratio method, transformer

Procedia PDF Downloads 135
1703 Rising of Single and Double Bubbles during Boiling and Effect of Electric Field in This Process

Authors: Masoud Gholam Ale Mohammad, Mojtaba Hafezi Birgani

Abstract:

An experimental study of saturated pool boiling on a single artificial nucleation site, without and with the application of an electric field on the boiling surface, has been conducted. N-pentane boiling on a copper surface is recorded with a high-speed camera providing high-quality pictures and movies. The accuracy of the visualization allowed establishing an experimental bubble growth law from a large number of experiments. This law shows that the evaporation rate decreases during the bubble growth and underlines the importance of the liquid motion induced by the preceding bubble. Bubble rise is therefore studied: once detached, bubbles accelerate vertically until reaching a maximum velocity in good agreement with a correlation from the literature. The bubbles then turn to another direction. The effect of applying an electric field on the boiling surface is finally studied. In addition to changes in the bubble shape, changes are also shown in the liquid plume and the convective structures above the surface. Lower maximum rising velocities were measured in the presence of electric fields, especially with a negative polarity.

Keywords: single and double bubbles, electric field, boiling, rising

Procedia PDF Downloads 216
1702 Comparison of Different Intraocular Lens Power Calculation Formulas in People With Very High Myopia

Authors: Xia Chen, Yulan Wang

Abstract:

Purpose: To compare the accuracy of the Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, Emmetropia Verifying Optical (EVO) and Kane formulas for intraocular lens power calculation in patients with axial length (AL) ≥ 28 mm. Methods: In this retrospective single-center study, 50 eyes of 41 patients with AL ≥ 28 mm that underwent uneventful cataract surgery were enrolled. The actual postoperative refractive results were compared to the predicted refraction calculated with the different formulas (Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, EVO and Kane). The mean absolute prediction errors (MAE) 1 month postoperatively were compared. Results: The MAE of the different formulas were as follows: Haigis (0.509), SRK/T (0.705), T2 (0.999), Holladay 1 (0.714), Hoffer Q (0.583), Barrett Universal II (0.552), EVO (0.463) and Kane (0.441). No significant difference was found among the different formulas (P = .122). The Kane and EVO formulas achieved the lowest mean prediction error (PE) and median absolute error (MedAE) (p < 0.05). Conclusion: The Kane and EVO formulas had a better success rate than the others in predicting IOL power in highly myopic eyes with AL longer than 28 mm in this study.

Keywords: cataract, power calculation formulas, intraocular lens, long axial length

Procedia PDF Downloads 54
1701 Statistical Analysis of Surface Roughness and Tool Life Using (RSM) in Face Milling

Authors: Mohieddine Benghersallah, Lakhdar Boulanouar, Salim Belhadi

Abstract:

Currently, a higher production rate with the required quality and low cost is the basic principle in the competitive manufacturing industry. This is mainly achieved by using high cutting speeds and feed rates. Elevated temperatures in the cutting zone under these conditions shorten tool life and adversely affect the dimensional accuracy and surface integrity of the component. Thus it is necessary to find optimum cutting conditions (cutting speed, feed rate, machining environment, tool material and geometry) that can produce components in accordance with the project requirements while keeping a relatively high production rate. Response surface methodology is a collection of mathematical and statistical techniques that are useful for modelling and analysis of problems in which a response of interest is influenced by several variables and the objective is to optimize this response. The work presented in this paper examines the effects of the cutting parameters (cutting speed, feed rate and depth of cut) on the surface roughness through a mathematical model developed using data gathered from a series of milling experiments.
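
To make the RSM modelling step concrete, the sketch below fits a second-order response surface for roughness as a function of cutting speed, feed and depth of cut using scikit-learn. The synthetic data, the parameter ranges and the quadratic form of the surface are assumptions for illustration, not the experimental design of the paper.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# synthetic milling runs: [cutting speed (m/min), feed (mm/tooth), depth of cut (mm)]
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(150, 350, 30),
                     rng.uniform(0.05, 0.25, 30),
                     rng.uniform(0.2, 1.0, 30)])
# assumed underlying relation plus noise, standing in for measured roughness Ra (um)
Ra = (0.8 - 0.002 * X[:, 0] + 6.0 * X[:, 1] + 0.4 * X[:, 2]
      + 10.0 * X[:, 1] ** 2 + rng.normal(0, 0.05, 30))

# second-order response surface: linear, interaction and squared terms
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, Ra)
print("R^2 of the fitted response surface:", round(rsm.score(X, Ra), 3))
print("predicted Ra at v=250, f=0.1, ap=0.5:", rsm.predict([[250, 0.1, 0.5]])[0])
```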

Keywords: statistical analysis (RSM), bearing steel, coating inserts, tool life, surface roughness, end milling

Procedia PDF Downloads 415
1700 South Atlantic Architects Validation of the Construction Decision Making Inventory

Authors: Tulio Sulbaran, Sandeep Langar

Abstract:

Architects are an integral part of the construction industry and continuously make decisions that influence projects during their life cycle. These decisions aim at selecting the best alternative from those available. Unfortunately, this decision-making process is largely unexplored in the construction industry. No instrument to measure construction decisions, based on the knowledge base of decision-makers, has existed. Additionally, limited literature is available on the topic. Recently, an instrument to gain an understanding of the construction decision-making process was developed by Dr. Tulio Sulbaran from the University of Texas, San Antonio. The instrument's name is 'Construction Decision Making Inventory (CDMI)'. The CDMI is an innovative idea to measure the 'What? When? How? and Who?' of the construction decision-making process. As an innovative idea, its statistical validity (accuracy of the assessment) is yet to be assessed. Thus, the purpose of this paper is to describe the results of a case study with architects in the south-east of the United States aimed at determining the CDMI's validity. The results of the case study are important because they assess the validity of the tool. Furthermore, as the architects evaluated each question within the measurements, this study is also guiding the enhancement of the CDMI.

Keywords: decision, support, inventory, architect

Procedia PDF Downloads 311
1699 A Study on Manufacturing of Head-Part of Pipes Using a Rotating Manufacturing Process

Authors: J. H. Park, S. K. Lee, Y. W. Kim, D. C. Ko

Abstract:

A large variety of pipe flanges is required in the marine and construction industries. Pipe flanges are usually welded or screwed to the pipe end and are connected with bolts. This approach is very simple and has been widely used for a long time; however, it results in high development cost and low productivity, and the products made by this approach usually have safety problems at the welding area. In this research, a new approach of forming the pipe flange based on cold forging and the floating die concept is presented. This innovative approach increases the effectiveness of material usage and saves time and cost compared with the conventional welding method. To ensure the dimensional accuracy of the final product, finite element analysis (FEA) was carried out to simulate the process of cold forging, and orthogonal experiment methods were used to investigate the influence of four manufacturing factors (pin die angle, pipe flange angle, rpm, pin die distance from the clamp jig) and to predict the best combination of them. The manufacturing factors were obtained by numerical and experimental studies, and the results show that the approach is very useful and effective for the forming of pipe flanges and can be widely used in the future.

Keywords: cold forging, FEA (finite element analysis), forge-3D, rotating forming, tubes

Procedia PDF Downloads 365
1698 Shotcrete Performance Optimisation and Audit Using 3D Laser Scanning

Authors: Carlos Gonzalez, Neil Slatcher, Marcus Properzi, Kan Seah

Abstract:

In many underground mining operations, shotcrete is used for permanent rock support. Shotcrete thickness is a critical measure of the success of this process. 3D Laser Mapping, in conjunction with Jetcrete, has developed a 3D laser scanning system specifically for measuring the thickness of shotcrete. The system is mounted on the shotcrete spraying machine and measures the rock faces before and after spraying. The calculated difference between the two 3D surface models is measured as the thickness of the sprayed concrete. Typical work patterns for the shotcrete process required a rapid and automatic system. The scanning takes place immediately before and after the application of the shotcrete so no convergence takes place in the interval between scans. Automatic alignment of scans without targets was implemented which allows for the possibility of movement of the spraying machine between scans. Case studies are presented where accuracy tests are undertaken and automatic audit reports are calculated. The use of 3D imaging data for the calculation of shotcrete thickness is an important tool for geotechnical engineers and contract managers, and this could become the new state-of-the-art methodology for the mining industry.
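
At its core, the thickness calculation is a difference between the surface scanned before spraying and the surface scanned after spraying, once both scans sit in the same coordinate frame. The sketch below performs this difference on a regular grid with synthetic surfaces; the grid, the nominal design thickness and the noise level are assumptions, and the target-less registration of the two point clouds mentioned above is not shown.

```python
import numpy as np

# synthetic rock face before and after spraying, sampled on a common grid (metres)
x = np.linspace(0, 4, 200)
y = np.linspace(0, 3, 150)
X, Y = np.meshgrid(x, y)
before = 0.1 * np.sin(2 * X) + 0.05 * np.cos(3 * Y)        # rough rock surface
target = 0.075                                              # assumed design thickness (m)
after = before + target + 0.02 * np.random.default_rng(0).normal(size=before.shape)

# shotcrete thickness = registered post-spray surface minus pre-spray surface
thickness = after - before
print(f"mean thickness: {thickness.mean()*1000:.1f} mm, "
      f"min: {thickness.min()*1000:.1f} mm, "
      f"area below design thickness: {(thickness < target).mean()*100:.1f}%")
```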

Keywords: 3D imaging, shotcrete, surface model, tunnel stability

Procedia PDF Downloads 279
1697 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia

Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza

Abstract:

In this work, the applicability of statistical methods for the estimation of missing precipitation data was compared and evaluated in the basin of the Lenguazaque river, located in the departments of Cundinamarca and Boyacá, Colombia. The methods used were simple linear regression, the distance ratio method, local averages, mean ratios, correlation with nearby stations, and the multiple regression method. The analysis used to determine the effectiveness of the methods was performed using three statistical tools: the correlation coefficient (r2), the standard error of estimation, and the Bland and Altman test of agreement. The analysis was performed using real rainfall values removed randomly in each of the seasons and then estimated using the methodologies mentioned in order to complete the missing data values. It was thus determined that, under the conditions considered, the methods with the highest performance and accuracy in the estimation of the data are multiple regression with three nearby stations and a random application scheme supported by the precipitation behavior of related data sets.
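
A compact sketch of the best-performing approach reported above follows: fit a multiple linear regression of the target station on three nearby stations using the days when all records exist, then use it to fill the gaps. The synthetic rainfall series, the gap fraction and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days = 365
base = rng.gamma(shape=1.2, scale=4.0, size=n_days)             # regional rainfall signal
nearby = np.column_stack([base * w + rng.gamma(1.0, 1.0, n_days)
                          for w in (0.9, 1.1, 0.8)])            # three nearby stations
target = 0.95 * base + rng.gamma(1.0, 1.0, n_days)              # station with gaps

# randomly remove 15% of the target record to create "missing" days
missing = rng.random(n_days) < 0.15
observed = ~missing

reg = LinearRegression().fit(nearby[observed], target[observed])
filled = target.copy()
filled[missing] = np.clip(reg.predict(nearby[missing]), 0, None)  # rainfall cannot be negative

true_vals = target[missing]
print(f"r^2 on the held-out days: {np.corrcoef(true_vals, filled[missing])[0, 1]**2:.3f}")
```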

Keywords: statistical comparison, precipitation data, river subbasin, Bland and Altman

Procedia PDF Downloads 457
1696 CFD Simulation and Experimental Validation of the Bubble-Induced Flow during Electrochemical Water Splitting

Authors: Gabriel Wosiak, Jeyse da Silva, Sthefany S. Sena, Renato N. de Andrade, Ernesto Pereira

Abstract:

The formation of bubbles during hydrogen production by electrolysis and several other electrochemical processes is an inherent phenomenon and can impact the energy consumption of the processes. In this work, both experimental and computational results are reported that describe the effect of bubble displacement, which, in the cases investigated, leads to the formation of a convective flow in the solution. The process is self-sustained, and a solution vortex is formed, which modifies the bubble growth and coverage at the electrode surface. Using the experimental data, we have built a model to simulate it, which describes the phenomena with high accuracy. We then simulated many different experimental conditions and evaluated the effects of the boundary conditions on the bubble coverage of the surface. We have observed a position-dependent bubble coverage of the surface, which has an effect on the water-splitting efficiency. It was shown that the bubble coverage is not uniform over the electrode surface and, using statistical analysis, it was possible to evaluate the influence of the gas type (H2 and O2), the current density, and the bubble size (and cross-effects) on the coverage fraction and the asymmetric behavior over the electrode surface.

Keywords: water splitting, bubble, electrolysis, hydrogen production

Procedia PDF Downloads 84
1695 Wind Wave Modeling Using MIKE 21 SW Spectral Model

Authors: Pouya Molana, Zeinab Alimohammadi

Abstract:

Determining wind wave characteristics is essential for implementing projects related to coastal and marine engineering, such as designing coastal and marine structures and estimating sediment transport and coastal erosion rates. In order to predict the significant wave height (H_s), this study applies the third-generation spectral wave model MIKE 21 SW along with the CEM method. For SW model calibration and verification, two data sets of meteorology and wave spectroscopy are used. The model was forced with time-varying wind, and the results showed that the difference ratio mean, the standard deviation of the difference ratio and the correlation coefficient of the SW model for the H_s parameter are 1.102, 0.279 and 0.983, respectively, whereas the difference ratio mean, standard deviation and correlation coefficient of the CEM method for the same parameter are 0.869, 1.317 and 0.8359, respectively. Comparing these results reveals that the CEM method has larger errors than the MIKE 21 SW third-generation spectral wave model, and that a higher correlation coefficient does not necessarily mean higher accuracy.

Keywords: MIKE 21 SW, CEM method, significant wave height, difference ratio

Procedia PDF Downloads 387
1694 Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network

Authors: Li Qingjian, Li Ke, He Chun, Huang Yong

Abstract:

In this paper, a method combining the Pohl Seidman deep belief network with a self-organizing neural network is proposed to classify the target. This method is mainly aimed at the high nonlinearity of hyperspectral images, the high sample dimensionality, and the difficulty of designing the classifier. The main features of the original data are extracted by the deep belief network. In the feature extraction process, known labeled samples are added to fine-tune the network, enriching the main characteristics. Then, the extracted feature vectors are classified by the self-organizing neural network. This method can effectively reduce the dimensionality of the data in the spectral dimension while preserving a large amount of the raw data information, address the problems of traditional clustering and of the long training time of deep learning algorithms when few labeled samples are available, and improve the classification accuracy and robustness. Simulation results show that the proposed network structure can reach a higher classification precision in the case of a small number of known labeled samples.

Keywords: DBN, SOM, pattern classification, hyperspectral, data compression

Procedia PDF Downloads 328
1693 Recognition of Gene Names from Gene Pathway Figures Using Siamese Network

Authors: Muhammad Azam, Micheal Olaolu Arowolo, Fei He, Mihail Popescu, Dong Xu

Abstract:

The number of biological papers is growing quickly, which means that the number of biological pathway figures in those papers is also increasing quickly. Each pathway figure shows extensive biological information, like the names of genes and how the genes are related. However, manually annotating pathway figures takes a lot of time and work. Even though using advanced image understanding models could speed up the process of curation, these models still need to be made more accurate. To improve gene name recognition from pathway figures, we applied a Siamese network to map image segments to a library of pictures containing known genes, in a similar way to person recognition from photos in many photo applications. We used a triplet loss function and a triplet spatial pyramid pooling network (TSPP-Net), combining a triplet convolutional neural network with spatial pyramid pooling. We compared VGG19 and VGG16 as the Siamese network model. VGG16 achieved better performance, with an accuracy of 93%, which is much higher than OCR results.
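
The snippet below illustrates the training signal at the heart of this kind of approach: a shared embedding network maps anchor, positive and negative image crops to vectors, and a triplet margin loss pulls same-gene pairs together and pushes different-gene pairs apart. The tiny convolutional embedder, the margin and the random tensors are assumptions for illustration; the actual model uses a VGG16 backbone with spatial pyramid pooling (TSPP-Net), which is not reproduced here.

```python
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Tiny stand-in for the VGG16/TSPP-Net backbone: image crop -> embedding vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)     # unit-length embeddings

embed = Embedder()
loss_fn = nn.TripletMarginLoss(margin=0.5)

# random stand-ins for crops: anchor/positive show the same gene, negative a different one
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()                                      # gradients for one training step
print("triplet loss:", float(loss))
```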

Keywords: biological pathway, image understanding, gene name recognition, object detection, Siamese network, VGG

Procedia PDF Downloads 268
1692 Prediction of Fluid Properties of Iranian Oil Field Using Radial Based Neural Network

Authors: Abdolreza Memari

Abstract:

In this article, a numerical method has been used in order to estimate the viscosity of crude oil. We use this method to measure the crude oil's viscosity for three states: the saturated oil viscosity, the viscosity above the bubble point, and the viscosity under the saturation pressure. Then the crude oil's viscosity is estimated by using the KHAN model and the roller ball method. After that, using these data, which include the effective conditions in measuring viscosity and the viscosity estimated by the presented method, a radial basis neural network is trained. This network is a kind of two-layered artificial neural network whose hidden-layer activation function is a Gaussian function, and training algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence model are compared. By training this network, we are able to estimate the crude oil's viscosity without using the KHAN model and the experimental conditions, and under any other condition, with acceptable accuracy. Results show that the radial basis neural network has a high capability of estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
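
A minimal sketch of a radial basis network with a Gaussian hidden layer is shown below, fitted here to a viscosity-like curve: hidden centres are spread over the input range, each sample is mapped to Gaussian activations, and the output weights come from a linear least-squares fit. The synthetic pressure-viscosity data, the number of centres and the kernel width are assumptions, not the Iranian field data or the KHAN-model targets.

```python
import numpy as np

class RBFNetwork:
    """Two-layer RBF network: Gaussian hidden units plus a linear output layer."""
    def __init__(self, n_centers=12, width=None):
        self.n_centers, self.width = n_centers, width

    def _phi(self, x):
        d = x[:, None] - self.centers[None, :]
        return np.exp(-(d ** 2) / (2 * self.width ** 2))

    def fit(self, x, y):
        self.centers = np.linspace(x.min(), x.max(), self.n_centers)
        if self.width is None:
            self.width = (x.max() - x.min()) / self.n_centers
        phi = np.column_stack([self._phi(x), np.ones_like(x)])   # bias column
        self.w, *_ = np.linalg.lstsq(phi, y, rcond=None)          # linear output weights
        return self

    def predict(self, x):
        return np.column_stack([self._phi(x), np.ones_like(x)]) @ self.w

# synthetic pressure-viscosity samples (placeholder for the measured/KHAN-model data)
rng = np.random.default_rng(0)
pressure = np.linspace(500, 5000, 80)                     # psi
viscosity = 2.0 + 1.5 * np.exp(-(pressure - 2200) ** 2 / 8e5) + rng.normal(0, 0.03, 80)

model = RBFNetwork().fit(pressure, viscosity)
print("max fit error (cp):", round(float(np.abs(model.predict(pressure) - viscosity).max()), 3))
```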

Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model

Procedia PDF Downloads 486
1691 Assessment of a Rapid Detection Sensor of Faecal Pollution in Freshwater

Authors: Ciprian Briciu-Burghina, Brendan Heery, Dermot Brabazon, Fiona Regan

Abstract:

Good quality bathing water is a highly desirable natural resource which can provide major economic, social, and environmental benefits. Both in Ireland and in Europe, such water bodies are managed under the European Directive for the management of bathing water quality (BWD). The BWD aims mainly: (i) to improve health protection for bathers by introducing stricter standards for faecal pollution assessment (E. coli, enterococci), (ii) to establish a more pro-active approach to the assessment of possible pollution risks and the management of bathing waters, and (iii) to increase public involvement and dissemination of information to the general public. Standard methods for E. coli and enterococci quantification rely on cultivation of the target organism, which requires long incubation periods (from 18 h to a few days). This is not ideal when immediate action is required for risk mitigation. Municipalities that oversee the bathing water quality and deploy appropriate signage have to wait for laboratory results. During this time, bathers can be exposed to pollution events and health risks. Although forecasting tools exist, they are site-specific and, as a consequence, extensive historical data are required for them to be effective. Another approach for early detection of faecal pollution is the use of marker enzymes. β-glucuronidase (GUS) is a widely accepted biomarker for E. coli detection in microbiological water quality control. GUS assays are particularly attractive as they are rapid (less than 4 h), easy to perform, and do not require specialised training. A method for on-site detection of GUS from environmental samples in less than 75 min was previously demonstrated. In this study, the capability of ColiSense as an early warning system for faecal pollution in freshwater is assessed. The system successfully detected GUS activity in all of the 45 freshwater samples tested. GUS activity was found to correlate linearly with E. coli (r2=0.53, N=45, p < 0.001) and enterococci (r2=0.66, N=45, p < 0.001). Although GUS is a marker for E. coli, a better correlation was obtained for enterococci. For this study, water samples were collected from 5 rivers in the Dublin area over 1 month, which suggests that a high diversity of pollution sources (agricultural, industrial, etc.), as well as point and diffuse pollution sources, was captured in the sample set. Such variety in the sources of E. coli can account for different GUS activities per culturable cell and different ratios of viable-but-not-culturable to viable culturable bacteria. A previously developed protocol for the recovery and detection of E. coli was coupled with a miniaturised fluorometer (ColiSense), and the system was assessed for the rapid detection of FIB in freshwater samples. Further work will be carried out to evaluate the system's performance on seawater samples.

Keywords: faecal pollution, β-glucuronidase (GUS), bathing water, E. coli

Procedia PDF Downloads 269