Search results for: quantification accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4198

2818 Using Probe Person Data for Travel Mode Detection

Authors: Muhammad Awais Shafique, Eiji Hato, Hideki Yaginuma

Abstract:

Recently, GPS data have been used in many studies to automatically reconstruct travel patterns for trip surveys. The aim is to minimize the use of questionnaire surveys and travel diaries so as to reduce their negative effects. In this paper, data acquired from the GPS and accelerometer embedded in smartphones are utilized to predict the mode of transportation used by the phone carrier. For prediction, Support Vector Machine (SVM) and Adaptive Boosting (AdaBoost) are employed. Moreover, a unique method to improve the prediction results from these algorithms is also proposed. Results suggest that the prediction accuracy of AdaBoost after improvement is better than that of the other approaches.
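
As an illustration, a minimal sketch of the two classifiers on hypothetical GPS/accelerometer features using scikit-learn; the feature names, thresholds, and synthetic data are assumptions, not the authors' dataset:

```python
# Hypothetical sketch: SVM vs. AdaBoost for travel-mode prediction.
# Features and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Columns: mean speed (m/s), speed std, mean |acceleration| (m/s^2)
X = rng.random((500, 3)) * [30.0, 5.0, 3.0]
# Assumed labels by speed: 0=walk, 1=bike, 2=bus, 3=car
y = np.digitize(X[:, 0], [1.5, 5.0, 12.0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
ada = AdaBoostClassifier(n_estimators=200, random_state=0)

for name, clf in [("SVM", svm), ("AdaBoost", ada)]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```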

Keywords: accelerometer, AdaBoost, GPS, mode prediction, support vector machine

Procedia PDF Downloads 359
2817 Numerical Simulation for Self-Loosening Phenomenon Analysis of Bolt Joint under Vibration

Authors: Long Kim Vu, Ban Dang Nguyen

Abstract:

In this paper, the finite element method (FEM) is utilized to simulate the complete process of tightening, releasing, and self-loosening of a bolt joint under transverse vibration. Following the exact geometry of the helical threads, a fully hexahedral mesh is implemented. The simulation is verified and validated by comparison with experimental results on the clamping force-vibration relationship, which shows sufficient correlation. Further analysis with different amplitudes and frequencies of transverse vibration is performed to determine the dominant factor inducing failure.

Keywords: bolt self-loosening, contact state, finite element method, FEM, helical thread modeling

Procedia PDF Downloads 202
2816 On a Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases, which may help to improve the predictive accuracy of breast cancer progression, using an original model referred to as CoM-IV and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between the primary tumor (PT) and metastases (MTS); 3) analyzing the scope of application of CoM-IV; 4) implementing the model as a software tool. CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, while the others rely on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of appearance of primary metastases; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. CoM-IV calculates the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (days) for each of those periods. It enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes the growth of the primary tumor and primary distant metastases at stage IV (T1-4N0-3M1), with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates understanding of the period of appearance and manifestation of primary metastases.
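
Since CoM-IV builds on exponential growth, the key arithmetic is the relation between tumor volume, number of doublings, and doubling time. A minimal sketch follows; the cell volume, detection threshold, and doubling time are illustrative assumptions, not the model's calibrated values:

```python
# Exponential tumor growth arithmetic: V(t) = V0 * 2**(t / DT).
# All numeric values below are illustrative assumptions.
import math

V0 = 1e-9      # initial volume, cm^3 (~ one cell, assumed)
V_vis = 1.0    # volume detectable by imaging, cm^3 (assumed threshold)
DT = 100.0     # tumor volume doubling time, days (assumed)

# Number of doublings in the 'non-visible' period:
n_doublings = math.log2(V_vis / V0)
# Duration of the 'non-visible' period in days:
t_nonvisible = n_doublings * DT
print(f"{n_doublings:.1f} doublings ~ {t_nonvisible / 365:.1f} years")
```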

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 335
2815 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data are critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights above ground: 40 m, 80 m, and 120 m. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition by a UAS at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels, and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding surveyed coordinates. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro. With calibration for the same UAS, accuracy improved significantly, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK at a 40 m height. The accuracy of elevation determination decreased as flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data, given sufficient surveyed ground control panels, for the 40 m dataset. The Phantom 4 RTK provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation in precision agriculture applications.
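
For reference, the two evaluation metrics can be computed as follows; the elevation values in this sketch are made up, not the study's data:

```python
# RMSE and R^2 between surveyed and UAS-derived elevations.
# The arrays below are made-up illustrations, not the study's data.
import numpy as np

surveyed = np.array([310.12, 310.45, 309.98, 311.02, 310.77])  # m
derived = np.array([310.15, 310.41, 310.05, 311.00, 310.70])   # m

rmse = np.sqrt(np.mean((derived - surveyed) ** 2))
ss_res = np.sum((surveyed - derived) ** 2)
ss_tot = np.sum((surveyed - surveyed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"RMSE = {rmse:.3f} m, R^2 = {r2:.3f}")
```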

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 164
2814 Performance Evaluation of Arrival Time Prediction Models

Authors: Bin Li, Mei Liu

Abstract:

Arrival time information is a crucial component of advanced public transport systems (APTS). Advertising arrival times at stops can help reduce the waiting time and anxiety of passengers and improve the quality of service. In this research, an experiment was conducted to compare the prediction accuracy and precision of a link-based and a path-based historical travel time model, using automatic vehicle location (AVL) data collected from an actual bus route. The results show that the path-based model is superior to the link-based model and achieves the greatest improvement during peak hours.
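
The distinction between the two model families can be illustrated as follows; the data structures and historical travel times are invented for this sketch, not taken from the paper:

```python
# Link-based vs. path-based historical arrival time prediction.
# Historical travel times (seconds) are invented for illustration.
import statistics

# Link-based: sum the historical mean travel time of each link.
link_history = {
    ("A", "B"): [120, 130, 125],
    ("B", "C"): [200, 240, 220],
    ("C", "D"): [90, 95, 100],
}
path = [("A", "B"), ("B", "C"), ("C", "D")]
link_based = sum(statistics.mean(link_history[l]) for l in path)

# Path-based: use historical end-to-end times for the whole path,
# which preserves correlation between consecutive links.
path_history = {("A", "D"): [400, 480, 450]}
path_based = statistics.mean(path_history[("A", "D")])

print(f"link-based: {link_based:.0f} s, path-based: {path_based:.0f} s")
```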

Keywords: bus transit, arrival time prediction, link-based, path-based

Procedia PDF Downloads 359
2813 Cost Overrun Causes in Public Construction Projects in Saudi Arabia

Authors: Ibrahim Mahamid, A. Al-Ghonamy, M. Aichouni

Abstract:

This study was conducted to identify the causes of cost deviations in public construction projects in Saudi Arabia from the contractors' perspective. Forty-one factors that might affect cost estimating accuracy were identified through a literature review and discussions with construction experts. The factors were tabulated in a questionnaire, and a field survey covering 51 contractors from the Northern Province of Saudi Arabia was performed. The results show that the top five causes are: wrong estimation method, long period between design and implementation, cost of labor, cost of machinery, and absence of construction-cost data.

Keywords: cost deviation, public construction, cost estimating, Saudi Arabia, contractors

Procedia PDF Downloads 475
2812 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that only a legitimate individual can be identified. However, the detection accuracy of current ECG-based methods is insufficient due to the high variability of an individual's heartbeats at different instances of time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with normalized cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of QRS-complex detection. Supervised recognition methods are then designed, using minimum-distance-to-mean and Bayesian classifiers, to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the PTB database of the National Metrology Institute of Germany showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
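
A minimal sketch of the 1-D local-binary-pattern extraction described above; the window radius and the synthetic signal are assumptions, not the authors' settings:

```python
# 1-D local binary patterns along an ECG trace, then a histogram.
# Window radius and the synthetic signal are illustrative assumptions.
import numpy as np

def local_roughness_histogram(signal, radius=4):
    """LBP-style codes: compare 2*radius neighbors to the center sample."""
    n_bits = 2 * radius
    weights = 2 ** np.arange(n_bits)
    codes = []
    for t in range(radius, len(signal) - radius):
        neighbors = np.concatenate(
            [signal[t - radius:t], signal[t + 1:t + radius + 1]]
        )
        bits = (neighbors >= signal[t]).astype(int)
        codes.append(int(np.dot(bits, weights)))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()  # normalized histogram = subject descriptor

ecg = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * np.random.randn(2000)
print(local_roughness_histogram(ecg)[:8])
```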

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 404
2811 A Trapezoidal-Like Integrator for the Numerical Solution of One-Dimensional Time Dependent Schrödinger Equation

Authors: Johnson Oladele Fatokun, I. P. Akpan

Abstract:

In this paper, the one-dimensional time-dependent Schrödinger equation is discretized by the method of lines, using a second-order finite difference approximation to replace the second-order spatial derivative. The resulting stiff system of ordinary differential equations (ODEs) in time is solved numerically by an L-stable trapezoidal-like integrator. Results show an accuracy of relative maximum error of order 10⁻⁴ in the interval of consideration. The performance of the method compares favorably with an existing scheme.
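
A minimal sketch of the semi-discretization and a plain trapezoidal (Crank-Nicolson-type) step is shown below; the paper's L-stable trapezoidal-like integrator differs in detail, and the grid, potential, and units here are assumptions:

```python
# Method of lines for i*psi_t = -psi_xx + V(x)*psi (hbar = 2m = 1
# assumed), advanced with a plain trapezoidal step as a stand-in for
# the paper's L-stable integrator. Grid and potential are assumptions.
import numpy as np
from scipy.linalg import solve

N, L_dom, dt = 200, 20.0, 0.001
x = np.linspace(-L_dom / 2, L_dom / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2  # harmonic potential (assumed)

# Second-order central difference for psi_xx -> tridiagonal matrix D2.
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dx**2
H = -D2 + np.diag(V)                 # semi-discrete Hamiltonian
A = 1j * np.eye(N) - 0.5 * dt * H    # trapezoidal rule for psi' = -iH psi
B = 1j * np.eye(N) + 0.5 * dt * H

psi = np.exp(-x**2).astype(complex)  # Gaussian initial state
psi /= np.linalg.norm(psi) * np.sqrt(dx)
for _ in range(100):
    psi = solve(A, B @ psi)          # one trapezoidal time step
print("norm after 100 steps:", np.linalg.norm(psi) * np.sqrt(dx))
```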

Keywords: Schrödinger's equation, partial differential equations, method of lines (MOL), stiff ODE, trapezoidal-like integrator

Procedia PDF Downloads 418
2810 Unsupervised Reciter Recognition Using Gaussian Mixture Models

Authors: Ahmad Alwosheel, Ahmed Alqaraawi

Abstract:

This work proposes an unsupervised, text-independent probabilistic approach to recognizing the voices of Quran reciters. It is an accurate approach that works in real-time applications and does not require prior information about reciter models. It has two phases: in the training phase, the reciters' acoustic features are modeled using Gaussian Mixture Models (GMM), while in the testing phase, an unlabeled reciter's acoustic features are scored against the GMM models. Using this approach, highly accurate results are achieved with efficient computation time.
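
A minimal sketch of GMM-based recognition on MFCC-like feature vectors; the synthetic features and scikit-learn's GaussianMixture are stand-ins for the paper's actual front-end and implementation:

```python
# Train one GMM per reciter on acoustic feature vectors, then assign a
# test utterance to the model with the highest average log-likelihood.
# Synthetic features stand in for real MFCCs here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {  # reciter -> (n_frames, n_features) feature matrix
    "reciter_A": rng.normal(0.0, 1.0, (400, 13)),
    "reciter_B": rng.normal(0.5, 1.2, (400, 13)),
}
models = {
    name: GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(feats)
    for name, feats in train.items()
}

test_utterance = rng.normal(0.5, 1.2, (150, 13))  # unlabeled frames
scores = {name: m.score(test_utterance) for name, m in models.items()}
print("best match:", max(scores, key=scores.get))
```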

Keywords: Quran, speaker recognition, reciter recognition, Gaussian Mixture Model

Procedia PDF Downloads 380
2809 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many parts of the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially after the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have an explicit picture of the flood extent and of the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited existing data and high cost of hydrodynamic models, it is not always feasible to rely on these sources to generate quality flood maps during or after a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin relative to the elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell on a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process that does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features might disturb the generated model, and consequently the model might not predict the flow simulation accurately. We propose to include a previously existing stream layer generated by the Province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
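
A simplified sketch of the HAND computation follows; it assigns each cell the elevation of its nearest stream cell by breadth-first search on a toy grid, whereas real HAND implementations trace hydrological flow directions, so this is an approximation under stated assumptions:

```python
# Simplified HAND: for each DTM cell, subtract the elevation of the
# nearest stream cell (nearest in grid steps; real HAND follows flow
# directions). The toy 4x4 DTM and stream mask are assumptions.
from collections import deque
import numpy as np

dtm = np.array([[5.0, 4.0, 3.0, 4.0],
                [4.0, 3.0, 2.0, 3.0],
                [3.0, 2.0, 1.0, 2.0],
                [4.0, 3.0, 2.0, 3.0]])
stream = np.zeros_like(dtm, dtype=bool)
stream[2, 2] = True  # one stream cell (assumed)

drain_elev = np.full(dtm.shape, np.nan)
queue = deque((r, c) for r, c in zip(*np.where(stream)))
for r, c in queue:
    drain_elev[r, c] = dtm[r, c]
while queue:  # breadth-first propagation of nearest stream elevation
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < dtm.shape[0] and 0 <= nc < dtm.shape[1] \
                and np.isnan(drain_elev[nr, nc]):
            drain_elev[nr, nc] = drain_elev[r, c]
            queue.append((nr, nc))

hand = dtm - drain_elev  # cells with low HAND flood first
print(hand)
```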

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 151
2808 Enhanced Face Recognition with Daisy Descriptors Using 1BT Based Registration

Authors: Sevil Igit, Merve Meric, Sarp Erturk

Abstract:

In this paper, it is proposed to improve DAISY descriptor-based face recognition using a novel one-bit transform (1BT) based pre-registration approach. The 1BT-based pre-registration procedure is fast and has low computational complexity. It is shown that face recognition accuracy is improved with the proposed approach, which facilitates highly accurate face recognition using the DAISY descriptor with simple matching, thereby enabling a low-complexity solution.
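
A minimal sketch of the one-bit transform is given below. A uniform averaging filter is used as the comparison image, whereas the 1BT literature typically uses a multi-band-pass kernel, so this is an approximation, and the registration search itself is only indicated:

```python
# One-bit transform (1BT): binarize an image by comparing each pixel
# with a filtered version of the image. A uniform 17x17 averaging
# filter approximates the multi-band-pass kernel of the 1BT literature.
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(image, size=17):
    filtered = uniform_filter(image.astype(float), size=size)
    return (image >= filtered).astype(np.uint8)

# Registration cost between two 1BT images: number of differing bits
# (evaluated over candidate shifts in a real pre-registration search).
img_a = np.random.rand(64, 64)
img_b = np.roll(img_a, 2, axis=1)  # shifted copy as a stand-in face
cost = np.count_nonzero(one_bit_transform(img_a) ^ one_bit_transform(img_b))
print("1BT mismatch count:", cost)
```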

Keywords: face recognition, Daisy descriptor, One-Bit Transform, image registration

Procedia PDF Downloads 367
2807 Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border

Authors: Fengqing Li, Petra Schneider

Abstract:

The failure of tailings management facilities (TMF) holding radioactive residues is an enormous challenge worldwide and can result in major catastrophes. Particularly in transboundary regions, such failure is likely to lead to international conflict. This risk occurs in Kyrgyzstan and Uzbekistan, where the current major challenge is the quantification of impacts due to pollution from uranium legacy sites, and especially the impact on river basins after natural hazards (i.e., landslides). By means of GoldSim, a probabilistic simulation model, the amount of tailings material that flows into the river network of Mailuu Suu in Kyrgyzstan after a pond failure was simulated for three scenarios, namely 10%, 20%, and 30% material input. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was simulated. Among the 23 TMF, 19 ponds are close to the river network. The spatiotemporal distributions of uranium along the river network were then simulated for all 19 ponds under the three scenarios. Taking TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, as one example: the uranium concentration decreased continuously along the longitudinal gradient of the river network; uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% material input, respectively. In comparison to the guideline value for uranium in drinking water (30 µg/L) provided by the World Health Organization, the observed concentrations at the border were 550-1583 times higher. In order to mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies is proposed: the short-term strategy can be used in case of an emergency event, the medium-term strategy allows both countries to handle the TMF efficiently based on the benefit-sharing concept, and the long-term strategy intends to rehabilitate the site through the relocation of all TMF.
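
A minimal sketch of the Muskingum routing recursion underlying the Muskingum-Cunge procedure; the inflow hydrograph and the parameters K and X below are invented for illustration (in Muskingum-Cunge they are derived from channel hydraulics):

```python
# Muskingum routing: O[t+1] = C1*I[t+1] + C2*I[t] + C3*O[t], where K is
# the reach travel time and X the weighting factor. Values are assumed.
K, X, dt = 2.0, 0.2, 1.0  # hours (illustrative values)

denom = 2 * K * (1 - X) + dt
C1 = (dt - 2 * K * X) / denom
C2 = (dt + 2 * K * X) / denom
C3 = (2 * K * (1 - X) - dt) / denom
assert abs(C1 + C2 + C3 - 1.0) < 1e-12  # coefficients must sum to 1

inflow = [10, 40, 90, 60, 30, 15, 10, 10]  # invented hydrograph
outflow = [inflow[0]]                      # initial steady state
for t in range(len(inflow) - 1):
    outflow.append(C1 * inflow[t + 1] + C2 * inflow[t] + C3 * outflow[t])
print([round(q, 1) for q in outflow])
```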

Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict

Procedia PDF Downloads 118
2806 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance or altered behavior of the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp (fibre waviness), and thickness, as well as to analyze the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and compression simulated in WiseTex with varying architectures of binder style, pick density, thickness, and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from dry fabric compression and composite samples were then compared with the WiseTex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability. Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in fibre crimp of the binder. In general, simulations compared reasonably with experimental results; however, deviation is evident due to assumptions present within the modelled results.

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 135
2805 A Fast, Reliable Technique for Face Recognition Based on Hidden Markov Model

Authors: Sameh Abaza, Mohamed Ibrahim, Tarek Mahmoud

Abstract:

Due to developments in digital image processing and its wide use in many applications, such as medical and security, more accurate techniques that are reliable, fast, and robust are urgently needed. In the field of security in particular, speed is of the essence. In this paper, a pattern recognition technique based on the use of a Hidden Markov Model (HMM), K-means clustering, and the Sobel operator is developed. The proposed technique is shown to be fast with respect to some other techniques investigated for comparison. Moreover, it demonstrates its capability of recognizing the normal face (center part) as well as the face boundary.
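
A minimal sketch of such a pipeline is given below; Sobel row-block features feed one HMM per subject, the K-means quantization step is omitted for brevity, and hmmlearn's GaussianHMM stands in for the paper's HMM, so all of this is an assumed implementation, not the authors' exact method:

```python
# One HMM per subject: each face image becomes a top-to-bottom sequence
# of row-block Sobel-energy features; recognition picks the HMM with
# the highest likelihood. Features and data here are assumptions.
import numpy as np
from hmmlearn import hmm
from scipy.ndimage import sobel

def face_to_sequence(img, block=8):
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    rows = grad.reshape(-1, block, img.shape[1])  # stack of row blocks
    return rows.mean(axis=(1, 2)).reshape(-1, 1)  # 1 feature per block

rng = np.random.default_rng(0)
subjects = {s: [(s + 1) * rng.random((64, 64)) for _ in range(5)]
            for s in (0, 1)}  # synthetic "faces" with distinct texture

models = {}
for s, faces in subjects.items():
    seqs = [face_to_sequence(f) for f in faces]
    X, lengths = np.vstack(seqs), [len(q) for q in seqs]
    m = hmm.GaussianHMM(n_components=3, n_iter=50, random_state=0)
    m.fit(X, lengths)
    models[s] = m

probe = face_to_sequence(2 * rng.random((64, 64)))  # like subject 1
print("identified as:", max(models, key=lambda s: models[s].score(probe)))
```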

Keywords: HMM, K-Means, Sobel, accuracy, face recognition

Procedia PDF Downloads 332
2804 Engineering of Reagentless Fluorescence Biosensors Based on Single-Chain Antibody Fragments

Authors: Christian Fercher, Jiaul Islam, Simon R. Corrie

Abstract:

Fluorescence-based immunodiagnostics are an emerging field in biosensor development and exhibit several advantages over traditional detection methods. While various affinity biosensors have been developed to generate a fluorescence signal upon sensing varying concentrations of analytes, reagentless, reversible, and continuous monitoring of complex biological samples remains challenging. Here, we aimed to genetically engineer biosensors based on single-chain antibody fragments (scFv) that are site-specifically labeled with environmentally sensitive fluorescent unnatural amino acids (UAA). A rational design approach resulted in quantifiable analyte-dependent changes in peak fluorescence emission wavelength and enabled antigen detection in vitro. Incorporation of a polarity indicator within the topological neighborhood of the antigen-binding interface generated a titratable wavelength blueshift with nanomolar detection limits. In order to ensure continuous analyte monitoring, scFv candidates with fast binding and dissociation kinetics were selected from a genetic library employing a high-throughput phage display and affinity screening approach. Initial rankings were further refined towards rapid dissociation kinetics using bio-layer interferometry (BLI) and surface plasmon resonance (SPR). The most promising candidates were expressed, purified to homogeneity, and tested for their potential to detect biomarkers in a continuous microfluidic-based assay. Variations of dissociation kinetics within an order of magnitude were achieved without compromising the specificity of the antibody fragments. This approach is generally applicable to numerous antibody/antigen combinations and currently awaits integration in a wide range of assay platforms for one-step protein quantification.

Keywords: antibody engineering, biosensor, phage display, unnatural amino acids

Procedia PDF Downloads 146
2803 Urban Flood Risk Mapping: A Review

Authors: Sherly M. A., Subhankar Karmakar, Terence Chan, Christian Rau

Abstract:

Floods are one of the most frequent natural disasters, causing widespread devastation, economic damage, and threats to human lives. The hydrologic impacts of climate change and the intensification of urbanization are two root causes of increased flood occurrences, and recent research trends are oriented towards understanding these aspects. Due to rapid urbanization, the populations of cities across the world have increased exponentially, leading to improperly planned development. Climate change due to natural and anthropogenic activities has resulted in spatiotemporal changes in rainfall patterns. The combined effect of both aggravates the vulnerability of urban populations to floods. In this context, efficient and effective flood risk management, with flood risk mapping as its core component, is essential for the prevention and mitigation of flood disasters. Urban flood risk mapping involves zoning an urban region based on its flood risk, depicting the spatiotemporal pattern of frequency and severity of hazards, exposure to hazards, and the degree of vulnerability of the population in terms of socio-economic, environmental, and infrastructural aspects. Although vulnerability is a key component of risk, its assessment and mapping are often less advanced than hazard mapping and quantification. A synergistic effort from technical experts and social scientists is vital for the effectiveness of flood risk management programs. Despite an increasing volume of quality research conducted on urban flood risk, a comprehensive multidisciplinary approach towards flood risk mapping remains neglected, as a result of which many of the input parameters and definitions of flood risk concepts are imprecise. Thus, the objectives of this review are to introduce and precisely define the relevant input parameters, concepts, and terms in urban flood risk mapping, along with its methodology, current status, and limitations. The review also aims at providing thought-provoking insights to potential future researchers and flood management professionals.

Keywords: flood risk, flood hazard, flood vulnerability, flood modeling, urban flooding, urban flood risk mapping

Procedia PDF Downloads 590
2802 Identification and Quantification of Sesquiterpene Lactones of Sagebrush (Artemisia tridentata) and Its Chemical Modification

Authors: Rosemary Anibogwu, Kavita Sharma, Karl De Jesus

Abstract:

Sagebrush is an abundant and naturally occurring plant in the Intermountain West region of the United States. The plant contains an array of bioactive compounds, such as flavonoids, terpenoids, sterols, and phenolic acids. It is important to identify and characterize these compounds because Native Americans use sagebrush as an herbal medicine. These compounds are also utilized for preventing infection in wounds and treating headaches and colds, and they possess antitumor properties. This research is an exploratory study of the sesquiterpenes present in the leaves of sagebrush. The leaf foliage was extracted with 100% chloroform and 100% methanol. The percentage yield of the crude was considerably higher in chloroform. Thin Layer Chromatography (TLC) analysis of the crude extract revealed a brown band at Rf = 0.25 and a dark brown band at Rf = 0.74, along with three faint unknown bands under the 254 nm UV lamp. Furthermore, the two distinct bands, brown (Achillin) and dark brown (Hydroxyachillin), were utilized in the isolation of pure compounds with column chromatography. The structures of Achillin and Hydroxyachillin were elucidated based on extensive spectroscopic analysis, including TLC, High-Performance Liquid Chromatography (HPLC), 1D- and 2D-Nuclear Magnetic Resonance (NMR), and Mass Spectroscopy (MS). The antioxidant activities of the crude extract and three pure compounds were evaluated in terms of their peroxyl radical scavenging by the Ferric Reducing Ability of Plasma (FRAP) and 1,1-diphenyl-2-picrylhydrazyl (DPPH) methods. The crude extract showed antioxidant activities of 18.99 ± 0.51 µmol TE g⁻¹ FW for FRAP and 11.59 ± 0.38 µmol TE g⁻¹ FW for DPPH. The activities of Achillin, Hydroxyachillin, and Quercetagetin trimethyl ether were 13.03, 15.90, and 14.02 µmol TE g⁻¹ FW, respectively, for the FRAP assay. The three purified compounds have been submitted to the National Cancer Institute's 60-cancer-cell-line screen for further study.

Keywords: HPLC, nuclear magnetic resonance spectroscopy, sagebrush, sesquiterpene lactones

Procedia PDF Downloads 131
2801 The Role of Synthetic Data in Aerial Object Detection

Authors: Ava Dodd, Jonathan Adams

Abstract:

The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured around developing the application for the purpose of deploying a computer vision model. The findings discuss the realities of attempting to develop a computer vision model for a practical purpose and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represents another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations is provided.

Keywords: computer vision, machine learning, synthetic data, YOLOv4

Procedia PDF Downloads 225
2800 Virtual Metrology for Copper Clad Laminate Manufacturing

Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho

Abstract:

In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict the properties of a wafer based on machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting the quality of the process, which allows improving process quality in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of printed circuit boards (PCBs), which are used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, puts resin on glass cloth and heats it in a drying oven, producing prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy costs in time and money, which makes the process a good candidate for VM application. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, using process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models for M/V and G/T with sufficiently high accuracy. They also provided information on "important" predictor variables, some of which the process engineers had already been aware of, and the rest of which they had not. The engineers were excited to find the new insights the models revealed and set out to analyze them further to derive process control implications. T/W, however, turned out not to be predictable with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; thus, an effort has to be made to find other factors, not currently monitored, in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models allow one to reduce the cost associated with actual metrology, as well as reveal insights on the factors affecting the important quality factors and on the level of our less-than-perfect understanding of the treating process.
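
A minimal sketch of such a VM prediction model with embedded variable selection; the synthetic process variables and the specific pipeline choices are assumptions, not the authors' exact methods:

```python
# Virtual metrology sketch: select informative process variables, fit a
# regressor for a quality factor (e.g., M/V), and inspect importances.
# The synthetic data and pipeline choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))           # process/sensor variables
y = 2 * X[:, 3] - X[:, 7] + 0.1 * rng.normal(size=300)  # e.g., M/V

vm = Pipeline([
    ("select", SelectKBest(f_regression, k=8)),
    ("model", RandomForestRegressor(n_estimators=200, random_state=0)),
])
print("CV R^2:", cross_val_score(vm, X, y, cv=5).mean().round(3))

vm.fit(X, y)
kept = vm.named_steps["select"].get_support(indices=True)
imp = vm.named_steps["model"].feature_importances_
print("important variables:", sorted(zip(imp, kept), reverse=True)[:3])
```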

Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology

Procedia PDF Downloads 350
2799 Simulation of 3-D Direction-of-Arrival Estimation Using MUSIC Algorithm

Authors: Duckyong Kim, Jong Kang Park, Jong Tae Kim

Abstract:

DOA (direction of arrival) estimation is an important method in array signal processing and has a wide range of applications, such as direction finding and beamforming. In this paper, we briefly introduce the MUSIC (Multiple Signal Classification) algorithm, one of the DOA estimation methods capable of resolving several targets. We then apply the MUSIC algorithm to a two-dimensional antenna array to analyze DOA estimation in 3D space through MATLAB simulation. We also analyze, through simulation, the design factors that can affect the accuracy of DOA estimation, and we consider how to apply the system further.
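
The core of the algorithm can be sketched as follows; this is a Python stand-in for the paper's MATLAB simulation, and a uniform linear array is used for brevity whereas the paper uses a two-dimensional array for 3D DOA:

```python
# MUSIC pseudospectrum for a uniform linear array (ULA). The paper uses
# a 2-D array for 3-D DOA; a ULA keeps this sketch short. Signal
# parameters below are invented for illustration.
import numpy as np
from scipy.signal import find_peaks

M, d, snapshots = 8, 0.5, 200        # sensors, spacing (wavelengths)
true_deg = [-20.0, 35.0]             # invented source directions
rng = np.random.default_rng(0)

def steer(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steer(t) for t in true_deg])
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots))
X = A @ S + 0.1 * noise

R = X @ X.conj().T / snapshots            # sample covariance
eigval, eigvec = np.linalg.eigh(R)        # eigenvalues ascending
En = eigvec[:, : M - len(true_deg)]       # noise subspace

grid = np.linspace(-90, 90, 721)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2
              for t in grid])             # MUSIC spatial spectrum
peaks, _ = find_peaks(p, height=0.5 * p.max())
print("estimated DOAs (deg):", grid[peaks])
```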

Keywords: DOA estimation, MUSIC algorithm, spatial spectrum, array signal processing

Procedia PDF Downloads 379
2798 Talent-to-Vec: Using Network Graphs to Validate Models with Data Sparsity

Authors: Shaan Khosla, Jon Krohn

Abstract:

In a recruiting context, machine learning models are valuable for recommendations: to predict the best candidates for a vacancy, to match the best vacancies for a candidate, and to compile a set of similar candidates for any given candidate. While these models are useful, validating their accuracy in a recommendation context is difficult due to data sparsity. In this report, we use network graph data to generate useful representations of candidates and vacancies. We treat candidates and vacancies as network nodes and designate a bidirectional link between them when the candidate has interviewed for the vacancy. After applying node2vec, the embeddings are used to construct a validation dataset with a ranked order, which will help validate new recommender systems.
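
A minimal sketch of this graph-embedding step: uniform random walks plus word2vec, which corresponds to node2vec with p = q = 1; the toy interview graph and all parameters are invented:

```python
# Embed candidate/vacancy nodes: generate random walks over the
# interview graph, then train word2vec on the walks. Uniform walks
# equal node2vec with p = q = 1; the toy graph is invented.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.Graph()
G.add_edges_from([("cand_1", "vac_A"), ("cand_2", "vac_A"),
                  ("cand_2", "vac_B"), ("cand_3", "vac_B")])

def random_walks(graph, num_walks=50, walk_len=10, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes:
            walk = [node]
            while len(walk) < walk_len:
                walk.append(rng.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

model = Word2Vec(random_walks(G), vector_size=32, window=5,
                 min_count=1, sg=1, seed=0)
# Similar candidates for cand_2, ranked by embedding similarity:
print(model.wv.most_similar("cand_2", topn=3))
```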

Keywords: AI, machine learning, NLP, recruiting

Procedia PDF Downloads 84
2797 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them a promising filler in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from the Van der Waals force. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but it brings difficulties in mesh generation and leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the result of CZM with an artificially very strong interface. A mechanical model that can take into account interface debonding and achieve accuracy comparable to CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential, and polynomial, are considered. These studies indicate that the shape of the chosen CZM constitutive law does not significantly influence the simulation of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
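
For reference, the bilinear traction-separation law can be sketched as follows; the parameter values are invented, since the abstract does not give the study's calibrated values:

```python
# Bilinear cohesive traction-separation law: linear elastic loading up
# to (delta0, t_max), then linear softening to zero traction at delta_f.
# Parameter values are invented for illustration.
def bilinear_traction(delta, t_max=50.0, delta0=1e-9, delta_f=1e-8):
    """Traction (MPa) as a function of interface separation (m)."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:               # phase 1: elastic loading
        return t_max * delta / delta0
    if delta < delta_f:               # phase 2: softening (debonding)
        return t_max * (delta_f - delta) / (delta_f - delta0)
    return 0.0                        # phase 3: fully debonded

for d in (0.0, 5e-10, 1e-9, 5e-9, 1e-8):
    print(f"delta = {d:.1e} m -> traction = {bilinear_traction(d):.1f} MPa")
```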

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 129
2796 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame

Authors: Ardalan Sabamehr, Ashutosh Bagchi

Abstract:

Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields, such as civil, aeronautical, and mechanical engineering. Operational Modal Analysis (OMA) has been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI), and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI, and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively, so FDD and SSI are not able to display time and frequency at the same time. Moreover, FDD and SSI have difficulties in noisy environments and in finding closely spaced modes. The CWT technique works on the time-frequency plane and shows reasonable performance under such conditions. Another advantage of the wavelet transform over the other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques for finding the modal properties (natural frequency, mode shape, and damping ratio) of a three-story steel frame, built in the Concordia University lab, using ambient vibration. The frame is made of galvanized steel, 60 cm long, 27 cm wide, and 133 cm high, with no bracing along the long and short spans. Three uniaxial wired accelerometers (MicroStrain, 100 mV/g sensitivity) were attached to the middle of each floor, and a gateway received the data and sent it to a PC using the Node Commander software. Real-time monitoring was performed for 20 seconds at a 512 Hz sampling rate. The test was repeated 5 times in each direction using hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by means of ridge detection. In this paper, a partial-derivative ridge detection technique is applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The results extracted from all three methods were compared, demonstrating that CWT has better performance in terms of accuracy in a noisy environment. The modal parameters, natural frequency, damping ratio, and mode shapes, are identified by all three methods.
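
A minimal sketch of the FDD step: take the SVD of the cross-power spectral density matrix at each frequency line and read natural frequencies off the peaks of the first singular value. The synthetic two-channel response is an illustration, not the lab data:

```python
# Frequency Domain Decomposition sketch: CPSD matrix per frequency
# line, SVD, peaks of the first singular value. The 2-channel signal
# with an invented 6 Hz "mode" is a stand-in for measured data.
import numpy as np
from scipy.signal import csd, find_peaks

fs, T = 512.0, 20.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 6.0 * t)          # invented modal response
y = np.vstack([mode + 0.5 * rng.normal(size=t.size),
               0.8 * mode + 0.5 * rng.normal(size=t.size)])

n_ch = y.shape[0]
f, _ = csd(y[0], y[0], fs=fs, nperseg=1024)
G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=1024)

s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0]
               for k in range(len(f))])     # first singular value
peaks, _ = find_peaks(s1, height=0.5 * s1.max())
print("identified frequencies (Hz):", f[peaks])
```

At each peak, the corresponding first singular vector approximates the mode shape, which is how FDD recovers shapes as well as frequencies.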

Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform

Procedia PDF Downloads 296
2795 An Architecture Based on Capsule Networks for the Identification of Handwritten Signature Forgery

Authors: Luisa Mesquita Oliveira Ribeiro, Alexei Manso Correa Machado

Abstract:

A handwritten signature is a unique form of recognizing an individual, used to authenticate documents and to carry out investigations in the criminal, legal, and banking areas, among other applications. Signature verification relies on large amounts of biometric data, as signatures are simple and easy to acquire, among other characteristics. Given this scenario, signature forgery is a recurring problem worldwide, and fast and precise techniques are needed to prevent crimes of this nature. This article presents a study of the efficiency of the Capsule Network in analyzing and recognizing signatures. The chosen architecture achieved an accuracy of 98.11% and 80.15% for the CEDAR and GPDS databases, respectively.

Keywords: biometrics, deep learning, handwriting, signature forgery

Procedia PDF Downloads 83
2794 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger, for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values in the following ranges of dimensionless parameters: plate corrugation angles from 0° to 60°, Reynolds numbers from 10,000 to 40,000, pitch-to-height ratios from 1 to 4, and Prandtl numbers from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. A linear activation function was used at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. To accelerate ANN training, the loss function was minimized using the adaptive moment estimation (Adam) algorithm. 'MinMax' normalization was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique was applied to the network using the new data. This procedure was repeated until the loss function converged, or for 4,000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source libraries such as scikit-learn, TensorFlow, and Keras. Mean absolute percentage errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved by the ANN model, i.e., higher accuracy than the generalized correlations. The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
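
A minimal sketch of the described network is given below; the {12-8-6} hidden layers, ReLU activations, linear output, Adam optimizer, and MinMax scaling follow the abstract, while the synthetic dataset is a placeholder for the experimental one:

```python
# MLP with {12-8-6} hidden neurons, ReLU activations, linear output,
# Adam optimizer, and MinMax-scaled inputs, as described above.
# The synthetic dataset is a placeholder for the experimental one.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

rng = np.random.default_rng(0)
# Inputs: corrugation angle, Reynolds number, pitch/height, Prandtl.
X = np.column_stack([rng.uniform(0, 60, 500),
                     rng.uniform(1e4, 4e4, 500),
                     rng.uniform(1, 4, 500),
                     rng.uniform(0.7, 200, 500)])
y = np.column_stack([rng.uniform(50, 400, 500),     # Nusselt number
                     rng.uniform(100, 2000, 500)])  # pressure drop, Pa

X_scaled = MinMaxScaler().fit_transform(X)

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(2, activation="linear"),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.MeanAbsolutePercentageError()])
model.fit(X_scaled, y, epochs=10, batch_size=200,
          validation_split=0.2, verbose=0)
print(model.evaluate(X_scaled, y, verbose=0))  # [MSE, MAPE]
```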

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 87
2793 Comparison between FEM Simulation and Experiment of Temperature Rise in Power Transformer Inner Steel Plate

Authors: Byung hyun Bae

Abstract:

In power transformers, leakage magnetic flux generates a temperature rise in the inner steel plate, which can sometimes be a serious problem. If the temperature of the steel plate exceeds a critical point, harmful gas will be generated in the tank, and this gas can cause fire, explosion, and reduced service life. Therefore, forecasting the temperature rise of the steel plate is very important at the design stage of a power transformer. To improve the accuracy of temperature-rise forecasting, a comparison between simulation and experiment is carried out in this paper.

Keywords: power transformer, steel plate, temperature rise, experiment, simulation

Procedia PDF Downloads 495
2792 Performance Analysis of Precise Point Positioning Online Processing Services and Their Use for Monitoring the Plate Tectonics of Thailand

Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat

Abstract:

The Precise Point Positioning (PPP) technique improves accuracy by using precise satellite orbit and clock correction data, but it involves complicated methods and high costs. Currently, several online processing service providers offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services, the Australian Online GPS Processing Service (AUSPOS), CSRS Precise Point Positioning, and CenterPoint RTX post-processing by Trimble, and one offline package, RTKLIB. Data were collected from 10 International GNSS Service (IGS) stations for 10 days. The results indicate that AUSPOS has the lowest distance root mean square (DRMS) value, 0.0029, which is good enough for calculating the movement of tectonic plates. In the second part, we use AUSPOS to process data from the geodetic network of Thailand. On December 26, 2004, a magnitude 9.3 (Mw) earthquake occurred north of Sumatra, strongly affecting all nearby countries, including Thailand. The earthquake's effects led to errors in the coordinate system of Thailand. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movements differ across the geodetic network and are relatively large, so the network needs to be surveyed continually to improve the GPS coordinate system every year. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement, in order to improve the coordinate adjustment of the geodetic network, consisting of 19 pins in Thailand, from October 2013 to November 2017. Finally, the results are displayed on a simulation map using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand; it moved 11.04 cm in the south-western direction. Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., in the direction observed before the earthquake. The magnitude of movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to secure the accuracy of the geodetic network of Thailand.
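
The IDW interpolation used for the displacement map can be sketched as follows; the station coordinates and all displacements except the 11.04 cm value quoted above are invented, and ArcMap performs this computation internally:

```python
# Inverse Distance Weighting: interpolate displacement at a query
# point from station values, weighted by 1/distance**p. Coordinates
# and most displacement values below are invented.
import numpy as np

stations = np.array([[99.1, 16.9], [100.5, 13.8], [98.6, 8.4]])  # lon, lat
disp_cm = np.array([11.04, 6.2, 4.5])  # movements (partly invented)

def idw(query, points, values, power=2.0):
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-12):              # query coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

est = idw(np.array([99.8, 12.0]), stations, disp_cm)
print(f"interpolated displacement: {est:.2f} cm")
```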

Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting

Procedia PDF Downloads 189
2791 Models to Estimate Monthly Mean Daily Global Solar Radiation on a Horizontal Surface in Alexandria

Authors: Ahmed R. Abdelaziz, Zaki M. I. Osha

Abstract:

Solar radiation data are of great significance for solar energy system design. This study aims at developing and calibrating new empirical models for estimating the monthly mean daily global solar radiation on a horizontal surface in Alexandria, Egypt. Calculated data for day length, solar altitude, day number, and declination angle are used for this purpose. A comparison between measured and calculated values of solar radiation is carried out. It is shown that all the proposed correlations are able to predict the global solar radiation with excellent accuracy in Alexandria.

Keywords: solar energy, global solar radiation, model, regression coefficient

Procedia PDF Downloads 405
2790 Part of Speech Tagging Using Statistical Approach for Nepali Text

Authors: Archit Yajnik

Abstract:

Part-of-speech (POS) tagging has always been a challenging task in natural language processing. This article presents POS tagging for Nepali text using a Hidden Markov Model (HMM) and the Viterbi algorithm. The annotated Nepali corpus is randomly separated into training and testing data sets, and both methods are employed on them. The Viterbi algorithm is found to be computationally faster and more accurate than the plain HMM approach. An accuracy of 95.43% is achieved using the Viterbi algorithm. An error analysis of where the mismatches took place is discussed in detail.
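
A minimal sketch of Viterbi decoding over HMM tag probabilities; the toy tag set and probabilities are invented stand-ins for those estimated from the annotated Nepali corpus:

```python
# Viterbi decoding: find the most probable tag sequence given HMM
# transition and emission probabilities. The toy probabilities stand
# in for those estimated from the annotated Nepali corpus.
import numpy as np

tags = ["NOUN", "VERB"]
start = np.log([0.7, 0.3])                 # P(tag at position 0)
trans = np.log([[0.4, 0.6],                # P(next tag | current tag)
                [0.8, 0.2]])
emit = {"ram": np.log([0.9, 0.1]),         # P(word | tag), invented
        "khancha": np.log([0.05, 0.95])}

def viterbi(words):
    V = start + emit[words[0]]             # log-prob of best path so far
    back = []
    for w in words[1:]:
        scores = V[:, None] + trans + emit[w][None, :]
        back.append(scores.argmax(axis=0))
        V = scores.max(axis=0)
    path = [int(V.argmax())]
    for bp in reversed(back):              # trace the best path backwards
        path.append(int(bp[path[-1]]))
    return [tags[i] for i in reversed(path)]

print(viterbi(["ram", "khancha"]))  # e.g., ['NOUN', 'VERB']
```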

Keywords: hidden markov model, natural language processing, POS tagging, viterbi algorithm

Procedia PDF Downloads 329
2789 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing

Authors: Tolulope Aremu

Abstract:

The key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, introduce potential defects that may compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing machine learning algorithms for defect characterization in liquid detergent manufacturing. Various machine learning models, Support Vector Machines (SVM), Decision Trees, Random Forests, and Convolutional Neural Networks (CNN), were performance-tested on the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. The study uses a rich dataset of defect types and production parameters consisting of more than 100,000 samples, including real-time sensor data, imaging data, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs reached 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, and fine-tuning the model with real-time imaging data reduced false positives by about 30%. The optimized SVM model reached 94% in detecting formulation defects such as viscosity and color variation. These performance metrics represent a large leap in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models hasten defect characterization, bringing detection time below 15 seconds, from an average of 3 minutes with manual inspection, through real-time data processing. This reduction in time is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time, machine-learning-driven monitoring supports predictive maintenance and corrective measures, for a 20% improvement in overall production efficiency. The optimization of machine learning algorithms for defect characterization therefore gives liquid detergent manufacturers scalability, efficiency, and improved operational performance, with higher levels of product quality. In general, this method could be applied across the fast-moving consumer goods (FMCG) industry, leading to improved quality control processes.
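
A minimal sketch of the hyperparameter tuning step for an SVM defect classifier; the synthetic sensor features and the search grid are assumptions, not the study's production dataset or settings:

```python
# Hyperparameter tuning for an SVM defect classifier via grid search
# with cross-validation. Synthetic sensor features stand in for the
# production dataset; the parameter grid is an assumed example.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))        # e.g., viscosity, color, fill level
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)  # 1 = defect (synthetic)

pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10, 100],
                "svc__gamma": ["scale", 0.01, 0.1],
                "svc__kernel": ["rbf"]},
    cv=5, scoring="f1",               # F1 balances misses/false positives
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("best CV F1:", round(grid.best_score_, 3))
```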

Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods

Procedia PDF Downloads 20