Search results for: measurement accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6148

1108 Polygenetic Iron Mineralization in the Baba-Ali and Galali Deposits, Further Evidence from Stable (S, O, H) Isotope Data, NW Hamedan, Iran

Authors: Ghodratollah Rostami Paydar

Abstract:

The Baba-Ali and Galali iron deposits are located in northwest Hamedan, in the Sanandaj-Sirjan geological structural zone of Iran. The host rocks of these deposits are metavolcanosedimentary successions of the Songhor stratigraphic series of Permo-Triassic age. Field investigation, ore geometry, textures and structures, and the paragenetic sequence of minerals all indicate that the ore minerals crystallized in four stages: a primary volcanosedimentary stage; a secondary regional metamorphism stage with the formation of ductile shear zones; a contact metamorphism and metasomatism stage; and, finally, late hydrothermal mineralization during uplift and exposure. In total, 29 samples of sulfide, oxide-silicate, and carbonate minerals from the iron ores and gangue were purified for stable isotope analysis. The isotope ratio data confirm that dynamothermal metamorphism in these areas typically involves a lengthy period of time, which results in a tendency toward isotopic homogenization, particularly of the O and H stable isotopes, and demonstrates the role of metamorphic waters in the mineralization process. The δ34S (CDT) values of the first generation of pyrite are higher than those of later generations, confirming the volcanogenic origin of the primary iron mineralization. δ13C measurements in the Galali carbonate country rocks indicate a marine origin. δ18O in magnetite and skarn-forming silicates, δ18O and δ13C in limestone and skarn calcite, and δ34S in sulphides are all consistent with the interaction of a magmatic-equilibrated fluid with the Galali limestone and a dominantly magmatic source for S. All these data imply skarn formation and mineralization in a magmatic-hydrothermal system that maintained high salinity to relatively late stages, resulting in the formation of the regional Na metasomatic alteration halo. Late-stage hydrothermal quartz-calcite veinlets are important for gold mineralization, but economic evaluation requires detailed geochemical studies.
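For reference, the δ values reported above follow standard stable isotope delta notation: the per mil (‰) deviation of the sample's isotope ratio from that of an international standard (e.g., CDT, Canyon Diablo Troilite, for sulfur). In LaTeX form:

```latex
% Delta notation for a sulfur isotope ratio, reported in per mil:
\delta^{34}\mathrm{S} =
  \left(
    \frac{\left(^{34}\mathrm{S}/^{32}\mathrm{S}\right)_{\mathrm{sample}}}
         {\left(^{34}\mathrm{S}/^{32}\mathrm{S}\right)_{\mathrm{CDT}}} - 1
  \right) \times 1000
```

The same form applies to δ18O and δ13C with their respective isotope ratios and standards (e.g., VSMOW, VPDB).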

Keywords: iron, polygenetic, stable isotope, Baba-Ali, Galali

Procedia PDF Downloads 307
1107 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis

Authors: Syed Asif Hassan, Syed Atif Hassan

Abstract:

Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis (MTB) that do not respond to either isoniazid or rifampicin, the two most important anti-TB drugs. The increasing occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of MTB, leading to survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed to generate a computational model that can predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset comprises compounds screened against MTB, categorized as active or inactive based on the PubChem activity score. PowerMV, a molecular descriptor generation and visualization tool, will be used to generate 2D molecular descriptors for the active and inactive compounds in the dataset. The 2D molecular descriptors generated by PowerMV will be used as features. We feed these features into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, and choose the best-performing model based on its accuracy in predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to select molecules with drug-like features.
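To illustrate the modeling step, the following is a minimal sketch of training and scoring one of the candidate classifiers (a random forest) on PowerMV-style 2D descriptors; the file name and column names are hypothetical, and the study itself also compares deep and recurrent neural networks.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical table of PowerMV 2D descriptors with a binary label derived
# from the PubChem activity score (1 = active against MTB, 0 = inactive).
df = pd.read_csv("mtb_descriptors.csv")
X, y = df.drop(columns=["active"]), df["active"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same train/test protocol would be repeated for the other classifiers, keeping the model with the best held-out accuracy.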

Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction

Procedia PDF Downloads 396
1106 Development of a Regression-Based Model to Predict Subjective Perception of Squeak and Rattle Noise

Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke

Abstract:

Advancements in electric vehicles have significantly reduced powertrain noise and the number of moving components in vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices are followed to identify the severity of noises based on subjective ratings, but rating the severity of each development sample and making changes to reduce it can be a tedious process. Additionally, the severity rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was selected to evaluate a squeak and rattle noise issue. Physical tests were carried out for random and sine excitation profiles. The aim was to assess the noise subjectively using the jury rating method and to evaluate it objectively by measuring the noise. A suitable jury evaluation method was selected for the activity, and recorded sounds were replayed for jury rating. Objective sound quality metrics, viz. loudness, sharpness, roughness, fluctuation strength, and overall Sound Pressure Level (SPL), were measured. From these, correlation coefficients were established to identify the sound quality metrics that contribute most to the identified noise issue. Regression analysis was then performed to establish the correlation between the subjective and objective data. A mathematical model was prepared using artificial intelligence and machine learning algorithms. The developed model was able to predict the subjective rating with good accuracy.
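A minimal sketch of the correlation-screening and regression steps is given below, assuming a table of measured sound quality metrics paired with mean jury ratings; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("squeak_rattle_metrics.csv")  # hypothetical measurement file
metrics = ["loudness", "sharpness", "roughness",
           "fluctuation_strength", "overall_spl"]

# Correlation screening: rank metrics by correlation with the jury rating.
corr = df[metrics].corrwith(df["jury_rating"]).abs().sort_values(ascending=False)
selected = corr.head(3).index.tolist()
print("most relevant metrics:", selected)

# Regression between the selected objective metrics and the subjective rating.
reg = LinearRegression().fit(df[selected], df["jury_rating"])
print("R^2:", reg.score(df[selected], df["jury_rating"]))
```

A more flexible learner (e.g., a tree ensemble) could replace the linear model; the screening-then-fit structure stays the same.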

Keywords: BSR, noise, correlation, regression

Procedia PDF Downloads 85
1105 Additive Weibull Model Using Warranty Claims and Finite Element Fatigue Analysis

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product gives insight into its underlying issues and is often used by reliability engineers to build prediction models that forecast the failure rate of parts. But there is one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product, and most of the time they cover only the infant mortality and useful life zones of the bathtub curve. Predictions from warranty data alone therefore generally do not provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime. For better predictability of the failure rate, one needs to explore the failure rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples until failure, but FEA fatigue analysis can provide the failure rate behavior of a part well beyond the warranty period, in less time and at lower cost. In this work, the authors propose an Additive Weibull Model, which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part, one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The two separate Weibull models' parameters are estimated and combined to form the proposed Additive Weibull Model for prediction.
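The additive structure can be made concrete with a short sketch. In an additive Weibull model the two hazard rates add, so the reliabilities multiply; the (beta, eta) parameters below are illustrative stand-ins for values estimated from warranty claims and from S-N fatigue data.

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

t = np.linspace(1, 20000, 400)                        # operating hours
h_warranty = weibull_hazard(t, beta=0.9, eta=50000)   # early/random failures
h_fatigue = weibull_hazard(t, beta=3.5, eta=15000)    # wear-out failures

h_total = h_warranty + h_fatigue                      # additive hazard
R_total = np.exp(-(t / 50000) ** 0.9 - (t / 15000) ** 3.5)  # joint reliability

i = np.searchsorted(t, 10000)
print("h(10000 h) =", round(h_total[i], 8), " R(10000 h) =", round(R_total[i], 4))
```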

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 78
1104 Food Insecurity Assessment, Consumption Pattern and Implications of Integrated Food Security Phase Classification: Evidence from Sudan

Authors: Ahmed A. A. Fadol, Guangji Tong, Wlaa Mohamed

Abstract:

This paper provides a comprehensive analysis of food insecurity in Sudan, focusing on consumption patterns and their implications, employing the Integrated Food Security Phase Classification (IPC) assessment framework. Years of conflict and economic instability have driven large segments of the population in Sudan into crisis levels of acute food insecurity according to the IPC. A substantial number of people are estimated to currently face emergency conditions, with an additional sizeable portion categorized under less severe but still extreme hunger levels. In this study, we explore the multifaceted nature of food insecurity in Sudan, considering its historical, political, economic, and social dimensions. An analysis of consumption patterns and trends was conducted, taking into account cultural influences, dietary shifts, and demographic changes. Furthermore, we employ logistic regression and random forest analysis to identify significant independent variables influencing food security status in Sudan. Random forest clearly outperforms logistic regression in terms of area under the curve (AUC), accuracy, precision, and recall. Forward projections of the IPC for Sudan estimate that 15 million individuals are anticipated to face Crisis (IPC Phase 3) or worse acute food insecurity conditions between October 2023 and February 2024. Of this, 60% are concentrated in Greater Darfur, Greater Kordofan, and Khartoum State, with Greater Darfur alone representing 29% of the total. These findings emphasize the urgent need for both short-term humanitarian aid and long-term strategies to address Sudan's deepening food insecurity crisis.
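A minimal sketch of the model comparison is shown below, assuming a household survey table with a binary food-insecurity label; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("sudan_household_survey.csv")        # hypothetical dataset
X, y = df.drop(columns=["food_insecure"]), df["food_insecure"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")  # accuracy/precision/recall analogous
```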

Keywords: food insecurity, consumption patterns, logistic regression, random forest analysis

Procedia PDF Downloads 79
1103 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics

Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo

Abstract:

Smoothed Particle Hydrodynamics (SPH) is a novel, meshless, Lagrangian numerical method that has shown promise in accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, to use it as a complement to physical model testing capabilities, and to support research needs for the performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved the modeling of a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study provides a thorough comparison between the model-scale measurements and the predictions from the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were primarily within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depict realistic interactions of the waves, the ship, and the bergy bit. This work identifies and presents several challenges in preparing the input file, particularly in defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.
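For readers unfamiliar with SPH, the core idea is that field quantities are interpolated as kernel-weighted sums over neighbouring particles. The sketch below shows a 1D density estimate with the standard cubic spline kernel; it illustrates the interpolation principle only, not the specific open-source solver used in the study.

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 1D cubic spline smoothing kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalisation constant
    return sigma * np.where(q <= 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q <= 2, 0.25 * (2 - q)**3, 0.0))

x = np.linspace(-2.0, 2.0, 41)   # particle positions, spacing 0.1
m, h = 0.1, 0.4                  # particle mass and smoothing length

# SPH density estimate at x = 0: rho = sum_j m_j * W(x - x_j, h).
rho = np.sum(m * cubic_spline_w(0.0 - x, h))
print("SPH density estimate at x = 0:", round(rho, 4))
```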

Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing

Procedia PDF Downloads 135
1102 Comparison of Different Hydrograph Routing Techniques in XPSTORM Modelling Software: A Case Study

Authors: Fatema Akram, Mohammad Golam Rasul, Mohammad Masud Kamal Khan, Md. Sharif Imam Ibne Amir

Abstract:

A variety of routing techniques are available to develop surface runoff hydrographs from rainfall. The selection of the runoff routing method is vital, as it is directly related to the type of watershed and the required degree of accuracy. Different modelling software packages are available to explore the rainfall-runoff process in urban areas. XPSTORM, a link-node based, integrated storm-water modelling package, has been used in this study to develop surface runoff hydrographs for a golf course area located in Rockhampton, Central Queensland, Australia. Four commonly used methods, namely SWMM runoff, Kinematic wave, Laurenson, and Time-Area, are employed to generate runoff hydrographs for the design storm of the study area. In the runoff mode of XPSTORM, the rainfall, infiltration, evaporation, and depression storage for sub-catchments were simulated, and the runoff from each sub-catchment to its collection node was calculated. The simulation results are presented, discussed, and compared. The total surface runoff generated by the SWMM runoff, Kinematic wave, and Time-Area methods is found to be reasonably close, which indicates that any of these methods can be used to develop the runoff hydrograph of the study area. The Laurenson method produces a comparatively smaller amount of surface runoff; however, it produces the highest surface runoff peak of all, which may make it suitable for hilly regions. Although the Laurenson hydrograph technique is a widely accepted surface runoff routing technique in Queensland (Australia), extensive investigation with detailed topographic and hydrologic data is recommended in order to assess its suitability for the case study area.
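Of the four methods, the Time-Area technique is the simplest to sketch: rainfall excess is convolved with the catchment's time-area histogram to produce the runoff hydrograph. The numbers below are illustrative only.

```python
import numpy as np

rain_excess = np.array([0.0, 2.0, 5.0, 3.0, 1.0, 0.0])  # mm per time step
area_frac = np.array([0.2, 0.5, 0.3])  # fraction of area per travel-time band

# Each band delays its contribution by one additional time step.
runoff = np.convolve(rain_excess, area_frac)
print("runoff ordinates:", runoff.round(2))
```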

Keywords: ARI, design storm, IFD, rainfall temporal pattern, routing techniques, surface runoff, XPSTORM

Procedia PDF Downloads 458
1101 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit: Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example

Authors: Hongyun Li, Zhibin Jiang

Abstract:

The travel of passengers in an urban rail transit network is influenced by changes in network structure and operational status, and individual travel preferences respond to these changes differently. First, the influence of the suspension of an urban rail transit line section on passenger travel along the line is analyzed. Second, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences are constructed to achieve frequent-passenger clustering. Then, a Graph Convolutional Network (GCN) is used to model and identify the changes in the travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results show that after the section shutdown, most passengers transferred to the nearest station, Anting, for boarding, while some passengers transferred to other stations or cancelled their trips altogether. Among the passengers who transferred to Anting station, most maintained their original normalized travel mode, a small number waited a few days before transferring to Anting station, and only a few stopped traveling or transferred to other stations after boarding at Anting for a few days. The results can provide a basis for understanding urban rail transit passenger travel patterns and improving the accuracy of passenger flow prediction in abnormal operation scenarios.
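A minimal numpy sketch of one GCN propagation step of the kind used here is shown below: node features are smoothed over the normalised adjacency (with self-loops) before a learned linear map and a ReLU. The graph, feature sizes, and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],          # toy adjacency over three nodes
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = rng.random((3, 4))            # node feature matrix
W = rng.random((4, 2))            # learnable weight matrix

A_hat = A + np.eye(3)             # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))

# One GCN layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W).
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)
print(H)
```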

Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern

Procedia PDF Downloads 88
1100 The Impact of BIM Technology on the Whole-Process Cost Management of Civil Engineering Projects in Kenya

Authors: Nsimbe Allan

Abstract:

The study examines the impact of Building Information Modeling (BIM) on the cost management of engineering projects, focusing specifically on the Mombasa Port Area Development Project. The objective of this research is to determine the mechanisms through which BIM facilitates stakeholder collaboration, reduces construction-related expenses, and enhances the precision of cost estimation. Furthermore, the study investigates barriers to execution, assesses the impact on project transparency, and suggests approaches to maximize resource utilization. The study, selected for its practical significance and intricate nature, conducted a Systematic Literature Review (SLR) using credible databases, including ScienceDirect and IEEE Xplore. To constitute a diverse sample, 69 individuals, including project managers, cost estimators, and BIM administrators, were selected via stratified random sampling. The data were obtained using a mixed-methods approach that prioritized ethical considerations, and SPSS and Microsoft Excel were used for the analysis. The research emphasizes the crucial role that project managers, architects, and engineers play in the decision-making process (47% of respondents). Furthermore, a significant improvement in cost estimation accuracy was reported by 70% of the participants. It was found that the implementation of BIM resulted in enhanced project visibility, which in turn optimized resource allocation and facilitated the budgeting process. In brief, the study highlights the positive impacts of BIM on collaborative decision-making and cost estimation, addresses challenges related to implementation, and provides solutions for the efficient assimilation and understanding of BIM principles.

Keywords: cost management, resource utilization, stakeholder collaboration, project transparency

Procedia PDF Downloads 75
1099 Societal Resilience Assessment in the Context of Critical Infrastructure Protection

Authors: Hannah Rosenqvist, Fanny Guay

Abstract:

Critical infrastructure protection has been an important topic for several years. Programmes such as the European Programme for Critical Infrastructure Protection (EPCIP), the Critical Infrastructure Warning Information Network (CIWIN), and the European Reference Network for Critical Infrastructure Protection (ERNCIP) have been the pillars of the work done since 2006. However, measuring critical infrastructure resilience has not been an easy task. This has to do with the fact that the concept of resilience has several definitions and is applied in different domains, such as engineering and the social sciences. Since June 2015, the EU project IMPROVER has been focusing on developing a methodology for implementing a combination of societal, organizational, and technological resilience concepts, in the hope of increasing critical infrastructure resilience. For this paper, we investigated how to include societal resilience in the assessment of critical infrastructure resilience. Because one of the main purposes of critical infrastructure (CI) is to deliver services to society, we believe that societal resilience is an important factor that should be considered when assessing overall CI resilience. We found that existing methods for CI resilience assessment focus mainly on technical aspects, and therefore that it was necessary to develop a resilience model that takes social factors into account. The model developed within the IMPROVER project aims to include the community's expectations of infrastructure operators as well as information sharing with the public and planning processes. By considering such aspects, the IMPROVER framework not only helps operators increase the resilience of their infrastructures on the technical or organizational side but also aims to strengthen community resilience as a whole. This will further be achieved by taking interdependencies between critical infrastructures into consideration. The knowledge gained during this project will enrich current European policies and practices for improved disaster risk management. The framework for societal resilience analysis is based on three dimensions of societal resilience: coping capacity, adaptive capacity, and transformative capacity, capacities that have been recognized throughout a wide-ranging literature review of the field. A set of indicators has been defined that describes a community's maturity within these resilience dimensions. Further, the indicators are categorized into six community assets that need to be accessible and utilized in such a way that they allow the community to respond to changes and unforeseen circumstances. We conclude that the societal resilience model developed within the IMPROVER project can give critical infrastructure operators a good indication of the level of societal resilience.

Keywords: community resilience, critical infrastructure protection, critical infrastructure resilience, societal resilience

Procedia PDF Downloads 239
1098 An Analysis of Clustering-Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
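The clustering-based gene selection step can be sketched as follows: genes (columns) are clustered with K-means, one representative gene is kept per cluster, and samples are then classified on the reduced gene set. The data here are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))       # toy expression matrix (samples x genes)
y = rng.integers(0, 2, size=60)      # toy class labels

# Cluster the genes, then keep the gene closest to each cluster centre.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X.T)
selected = pairwise_distances_argmin(km.cluster_centers_, X.T)

score = cross_val_score(SVC(), X[:, selected], y, cv=5).mean()
print("cross-validated accuracy on selected genes:", round(score, 3))
```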

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 328
1097 Effects of Magnetic Field on 4H-SiC P-N Junctions

Authors: Khimmatali Nomozovich Juraev

Abstract:

Silicon carbide is one of the promising materials with potential applications in electronic devices operating at high power, high frequency, and high electric field. Currently, silicon carbide is used to manufacture high-power and high-frequency diodes, transistors, radiation detectors, light-emitting diodes (LEDs), and other functional devices. In this work, the effects of a magnetic field on p-n junctions based on 4H-SiC were studied experimentally. As the research material, monocrystalline silicon carbide wafers (Cree Research, Inc., USA) with relatively few growth defects, grown by the physical vapor transport (PVT) method, were used: dislocation density Nd ~ 10⁴ cm⁻², micropipe density Nm ~ 10–10² cm⁻², thickness ~ 300–600 μm, surface ~ 0.25 cm², resistivity ~ 3.6–20 Ωcm, background impurity concentration Nd − Na ~ (0.5–1.0)×10¹⁷ cm⁻³. The initial parameters of the samples were determined with a Hall Effect Measurement System HMS-7000 (Ecopia). Ni atoms were deposited onto the silicon face of the silicon carbide by thermal sputtering in a Universal Vacuum Post device at a vacuum of 10⁻⁵–10⁻⁶ Torr and kept at a temperature of 600–650°C for 30 minutes. The Ni atoms were then diffused into the 4H-SiC sample at 1150–1300°C by the low-temperature diffusion method in an air atmosphere, and the effects of the magnetic field on the I-V characteristics of the samples were studied. The I-V characteristics of the 4H-SiC p-n junction sample were measured with and without a magnetic field. The measurements were carried out with a magnetic field induction of 0.5 T, with the direction of the current flowing through the diode perpendicular to the direction of the magnetic field. The results show that the magnetic field significantly affects the forward-bias I-V characteristics of the p-n junction. The change in the magnetoresistance of the 4H-SiC p-n junction sample under the influence of the magnetic field was determined. It was found that reversing the magnetic field polarity increases or decreases the forward current of the p-n junction. These unique electrical properties of the 4H-SiC p-n junction, that is, the change of the sample's electrical properties in a magnetic field, make it possible to fabricate silicon-carbide-based magnetic field sensing devices for future use in harsh environments. To date, silicon carbide magnetic detectors are not commercially available.

Keywords: 4H-SiC, diffusion Ni, effects of magnetic field, I-V characteristics

Procedia PDF Downloads 98
1096 Evaluation of Environmental Management System Implementation in Construction Projects in Turkey

Authors: Aydemir Akyürek, Osman Nuri Ağdağ

Abstract:

The construction industry has been developing rapidly for many years around the world, and especially in Turkey. In the last three years, the sector has grown by 10% and provides significant support to Turkey's national economy. Many construction projects are ongoing in urban and rural areas of Turkey, with substantial environmental impacts. Environmental impacts during the construction phase are quite diverse and widespread. The environmental impacts of the construction industry cannot be inspected properly in all cases, and negative impacts occur frequently in many projects in Turkey. In this study, the implementation of an ISO 14001 Environmental Management System (EMS) in construction is evaluated. In the first stage, quality management systems are reviewed in general, and ISO 14001 EMS is selected for implementation. Standard requirements are examined first, and the implementation of every standard requirement is then elaborated for the selected construction plant. Key issues and common problems, and the benefits gained by executing this type of international EMS standard, are examined. As can be seen in sample projects, construction projects are completed very quickly, contractors work in a highly competitive environment with low profit ratios in Turkey, and a qualified workforce is mostly not accessible. In addition, there are deficits in waste handling and environmental infrastructure. Construction companies that make substantial investments in EMSs can face competitiveness difficulties in the domestic market; however, professional Turkish contractors implementing management systems at a larger scale in international projects are achieving successful results. Also, the concept of 'construction project management', which is implemented in successful projects worldwide, is not implemented in Turkey except in larger projects. Where the main (quality) management system does not exist, the implementation of an EMS cannot be managed. Despite all these constraints, EMSs implemented in this industry with the commitment of top management and the demand of customers will be enabling, facilitating tools to determine the environmental aspects and impacts of construction sites, provide higher compliance with environmental legislation, establish best available methods for operational control of waste management, chemicals management, etc., plan monitoring and measurement, and prioritize environmental aspects for investment schedules and waste management.

Keywords: environmental management system, construction projects, ISO 14001, quality

Procedia PDF Downloads 365
1095 Challenges of Eradicating Neglected Tropical Diseases

Authors: Marziye Hadian, Alireza Jabbari

Abstract:

Background: Each year, tropical diseases affect large numbers of tropical and subtropical populations and give rise to irreparable financial and human damage. Among these diseases, some are known as Neglected Tropical Diseases (NTDs); they may pose unusual dangers, yet they have not been appropriately accounted for. Given the priority of eradicating these diseases, this study explored the causes of failure to eradicate neglected tropical diseases. Method: This study was a systematized review conducted in January 2021 of articles related to neglected tropical diseases in the databases Web of Science, PubMed, Scopus, Science Direct, Ovid, ProQuest, and Google Scholar. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed, together with the Critical Appraisal Skills Programme (CASP) checklist for articles and the AACODS checklist (Authority, Accuracy, Coverage, Objectivity, Date, Significance), which provides criteria for judging the quality of grey literature. Findings: The challenges in controlling and eradicating neglected tropical diseases fall into four general themes: shortcomings in disease management policies and programs; environmental challenges; executive challenges in disease policy; and challenges in the research field; along with 36 sub-themes. Conclusion: To achieve the goals of eradicating neglected tropical diseases, it seems indispensable to free up financial, human, and research resources, manage health infrastructure properly, pay attention to migrants and refugees, set clear targets, prioritize according to local conditions, and pay special attention to political and social developments. Reducing the number of diseases should free up resources for the management of neglected tropical diseases prone to epidemics, such as dengue, chikungunya, and leishmaniasis. For the purpose of global support, targeting should be accurate.

Keywords: neglected tropical disease, NTD, preventive, eradication

Procedia PDF Downloads 135
1094 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief. However, to be effective, they need to be demand-driven and supported by adequate data for prioritization. Without this data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians in understanding the nature and extent of disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and subsequently the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of these models needs improvement, which can be achieved using remote sensing data from Unmanned Aerial Vehicles (UAVs) or satellite imagery, both of which come with their own limitations. This research addresses the need for a framework that combines data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit by carrying out semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide logisticians with a new method for obtaining immediately accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 388
1093 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a great deal of false information, fake likes, fake views, and duplicated accounts, as large social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, which aims to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified on the basis of the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information. The detection performance was improved in two respects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensionality.
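The four steps map directly onto a short sketch; the dataset here is synthetic, standing in for extracted news features.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import pairwise_distances_argmin
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=200, n_informative=20,
                           random_state=0)  # stand-in for news features

# Steps 1-2: treat features (columns) as points and cluster them; the
# distance computation inside K-means plays the role of feature similarity.
km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X.T)

# Step 3: final feature subset = the feature nearest each cluster centre.
subset = pairwise_distances_argmin(km.cluster_centers_, X.T)

# Step 4: classify fake vs. real news with an SVM on the reduced features.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, subset], y, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
print("accuracy on reduced features:", clf.score(X_te, y_te))
```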

Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine

Procedia PDF Downloads 181
1092 Document-Level Sentiment Analysis: An Exploratory Case Study of the Low-Resource Language Urdu

Authors: Ammarah Irum, Muhammad Ali Tahir

Abstract:

Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well-suited to text-based applications in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five different deep learning models, including Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short-Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques are applied to the Urdu Customer Support dataset and the IMDB Urdu movie review dataset using pre-trained Urdu word embeddings suitable for document-level sentiment analysis. The results of these techniques are evaluated, and our proposed model outperforms all other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN outperformed the baseline deep learning models, achieving 83%, 79%, and 83% accuracy on the small, medium, and large IMDB Urdu movie review datasets, respectively, and 94% accuracy on the Urdu Customer Support dataset.
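A minimal Keras sketch of the BiLSTM-SLMFCNN idea follows: a BiLSTM over embedded tokens feeds a single convolutional layer with several filter widths in parallel. The vocabulary size, sequence length, and dimensions are illustrative, not the paper's settings.

```python
from tensorflow.keras import Model, layers

inp = layers.Input(shape=(400,))                 # token ids per document
x = layers.Embedding(50000, 300)(inp)            # pre-trained Urdu embeddings
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

# Single-layer multi-filter CNN: parallel widths 3, 4, 5 with max pooling.
pools = [layers.GlobalMaxPooling1D()(
             layers.Conv1D(100, k, activation="relu")(x)) for k in (3, 4, 5)]
x = layers.Concatenate()(pools)
out = layers.Dense(1, activation="sigmoid")(x)   # document-level sentiment

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```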

Keywords: urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language

Procedia PDF Downloads 75
1091 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies of sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time-series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
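The point-spread formulation can be sketched as a regression on the home-minus-away margin, betting only when the model's prediction diverges from the bookmaker line; the file, feature layout, and threshold below are hypothetical.

```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

df = pd.read_csv("nba_games.csv")                   # hypothetical game table
X = df.drop(columns=["margin", "bookmaker_line"])   # engineered features
y = df["margin"]                                    # home score - away score

# Preserve time order: train on earlier games, test on later ones.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

model = xgb.XGBRegressor(n_estimators=400, max_depth=5, learning_rate=0.05)
model.fit(X_tr, y_tr)

edge = model.predict(X_te) - df.loc[X_te.index, "bookmaker_line"]
bets = edge.abs() > 3.0                             # bet only on clear edges
print(f"placing {bets.sum()} bets out of {len(bets)} games")
```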

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 110
1090 Pneumoperitoneum Creation Assisted with Optical Coherence Tomography and Automatic Identification

Authors: Eric Yi-Hsiu Huang, Meng-Chun Kao, Wen-Chuan Kuo

Abstract:

For every laparoscopic surgery, safe pneumoperitoneum creation (gaining access to the peritoneal cavity) is the first and essential step. However, closed pneumoperitoneum is usually obtained by blind insertion of a Veress needle into the peritoneal cavity, which may carry potential risks such as bowel and vascular injury. Until now, there has been no definite measure to visually confirm the position of the needle tip inside the peritoneal cavity. Therefore, this study established an image-guided Veress needle method by combining a fiber probe with optical coherence tomography (OCT). An algorithm was also proposed for determining the exact location of the needle tip through the acquisition of OCT images. Our method not only generates a series of 'live' two-dimensional (2D) images during the needle puncture toward the peritoneal cavity but can also eliminate operator variation in image judgment, thus improving peritoneal access safety. This study was approved by the Ethics Committee of Taipei Veterans General Hospital (Taipei VGH IACUC 2020-144). A total of 2400 in vivo OCT images, independent of each other, were acquired from experiments of forty peritoneal punctures on two piglets. Characteristic OCT image patterns could be observed during the puncturing process. The ROC curve demonstrates the discrimination capability of the classifier based on these quantitative image features, showing that the accuracy of the classifier in determining whether the needle tip was inside or outside the peritoneum was 98% (AUC = 0.98). In summary, the present study demonstrates the ability of our proposed automatic identification method combined with OCT imaging to automatically and objectively identify the location of the needle tip. OCT images translate the blind closed technique of peritoneal access into a visualized procedure, thus improving peritoneal access safety.
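The identification step amounts to a binary classifier over quantitative OCT image features, evaluated via a ROC curve. The sketch below uses synthetic two-feature data as a stand-in for the study's image patterns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Toy per-frame features (e.g., mean intensity, depth attenuation) for
# frames acquired outside (label 0) vs. inside (label 1) the peritoneum.
X = np.vstack([rng.normal(0.0, 1.0, (300, 2)), rng.normal(2.0, 1.0, (300, 2))])
y = np.repeat([0, 1], 300)

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, scores), 3))
```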

Keywords: pneumoperitoneum, optical coherence tomography, automatic identification, veress needle

Procedia PDF Downloads 136
1089 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby excluding those frames from emotion classification, would save computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
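As a sketch of the textural-statistics idea, the snippet below compares the Local Binary Pattern (LBP) histogram of a patch around a key point against a reference neutral histogram; the patches, threshold, and similarity measure are illustrative, not the paper's exact model.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(patch, P=8, R=1):
    """Uniform-LBP histogram of a grayscale patch, normalised over unit bins."""
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
neutral_patch = rng.integers(0, 256, (32, 32)).astype(np.uint8)  # reference
current_patch = rng.integers(0, 256, (32, 32)).astype(np.uint8)  # new frame

# Histogram intersection similarity; above a threshold => still neutral.
sim = np.minimum(lbp_hist(neutral_patch), lbp_hist(current_patch)).sum()
print("neutral" if sim > 0.8 else "possible emotion", f"(similarity={sim:.2f})")
```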

Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 343
1088 Measuring Elemental Sulfur in Late Manually-Treated Grape Juice in Relation to Polyfunctional Mercaptan Formation in Sauvignon Blanc Wines

Authors: Bahareh Sarmadi, Paul A. Kilmartin, Leandro D. Araújo, Brandt P. Bastow

Abstract:

Aim: Sauvignon blanc is New Zealand's most substantial grape variety, cultivated in almost 62% of all producing vineyards in the country. The popularity of New Zealand Sauvignon blanc is due to its unique taste, and it is best known for an aroma profile derived from mercaptans. 3-mercaptohexan-1-ol (3MH) and 3-mercaptohexyl acetate (3MHA) are two of the most important volatile mercaptans found in Sauvignon blanc wines. Viticultural and enological factors, such as machine harvesting, the most common harvesting practice used in New Zealand, can be among the reasons for this distinct flavor. Elemental sulfur is commonly sprayed in the vineyards to protect berries against powdery mildew. Although it is not the only source of sulfur, this practice creates a source of elemental sulfur that can be transferred into the must and eventually into the wines. Despite the clear effects of residual elemental sulfur in the must on the quality and aroma of the final wines, its measurement before harvest or fermentation is not a regular practice in wineries. This may be due to the lack of methods that are accessible and applicable with the equipment at most commercial wineries. This study aims to establish a relationship between the number and frequency of elemental sulfur applications and the concentration of polyfunctional mercaptans in the final wines. Methods: An apparatus was designed to reduce elemental sulfur to sulfide, and an ion-selective electrode was then used to measure the sulfide concentration. During the 2022 harvest, we explored a wider range of residual elemental sulfur levels than is typically applied in the vineyards. This was done through later manual elemental sulfur applications in the vineyard. Additional sulfur applications were made 20, 10, and 5 days prior to harvesting the treated grapes, covering long and short pre-harvest intervals (PHI). The grapes were processed into juice and fermented into wine, and then analyzed to find the correlation between the polyfunctional mercaptan concentrations in the wines and the residual elemental sulfur in the juice samples. Results: The research showed that more 3MH/3MHA was formed when elemental sulfur was applied more frequently in the vineyard, supporting the proposed pathway in which elemental sulfur is a source of 3MH formation in wines.

Keywords: sauvignon blanc, elemental sulfur, polyfunctional mercaptans, varietal thiols

Procedia PDF Downloads 113
1087 Test Procedures for Assessing the Peel Strength and Cleavage Resistance of Adhesively Bonded Joints with Elastic Adhesives under Detrimental Service Conditions

Authors: Johannes Barlang

Abstract:

Adhesive bonding plays a pivotal role in various industrial applications, ranging from automotive manufacturing to aerospace engineering. The peel strength of adhesives, a critical parameter reflecting the ability of an adhesive to withstand external forces, is crucial for ensuring the integrity and durability of bonded joints. This study provides a synopsis of the methodologies, influencing factors, and significance of peel testing in the evaluation of adhesive performance. Peel testing involves the measurement of the force required to separate two bonded substrates under controlled conditions. This study systematically reviews the different testing techniques commonly applied in peel testing, including the widely used 180-degree peel test and the T-peel test. Emphasis is placed on the importance of selecting an appropriate testing method based on the specific characteristics of the adhesive and the application requirements. The influencing factors on peel strength are multifaceted, encompassing adhesive properties, substrate characteristics, environmental conditions, and test parameters. Through an in-depth analysis, this study explores how factors such as adhesive formulation, surface preparation, temperature, and peel rate can significantly impact the peel strength of adhesively bonded joints. Understanding these factors is essential for optimizing adhesive selection and application processes in real-world scenarios. Furthermore, the study highlights the role of peel testing in quality control and assurance, aiding manufacturers in maintaining consistent adhesive performance and ensuring the reliability of bonded structures. The correlation between peel strength and long-term durability is discussed, shedding light on the predictive capabilities of peel testing in assessing the service life of adhesive bonds. In conclusion, this study underscores the significance of peel testing as a fundamental tool for characterizing adhesive performance. By delving into testing methodologies, influencing factors, and practical implications, this study contributes to the broader understanding of adhesive behavior and fosters advancements in adhesive technology across diverse industrial sectors.

Keywords: adhesively bonded joints, cleavage resistance, elastic adhesives, peel strength

Procedia PDF Downloads 100
1086 Formation of the Investment Portfolio of Intangible Assets with a Wide Pairwise Comparison Matrix Application

Authors: Gulnara Galeeva

Abstract:

The Analytic Hierarchy Process is widely used in economic and financial studies, including the formation of investment portfolios. In this study, a generalized method of obtaining a priority vector is examined for the case in which separate pairwise comparisons of expert opinion are presented as a set of several equal evaluations on a ratio scale. The author argues that this method allows solving an important and current problem in decision-making theory: excluding the vagueness and ambiguity of expert opinion. The study describes the author's wide pairwise comparison matrix and considers its application to the formation of an efficient investment portfolio of intangible assets for a small business enterprise with limited funding. The proposed method has been successfully tested on the practical example of a functioning dental clinic. The result of the study confirms that the wide pairwise comparison matrix can be used as a simple and reliable method for forming an enterprise's investment policy. Moreover, a comparison between the method based on the wide pairwise comparison matrix and the classical Analytic Hierarchy Process was conducted, and the results of the comparative analysis confirm the correctness of the method based on the wide matrix. The application of a wide pairwise comparison matrix also makes it possible to apply the statistical methods of experimental data processing widely for obtaining the priority vector. The new method is accessible to non-specialist users, and its application gives about the same accuracy as the classical hierarchy process. Financial directors of small and medium business enterprises thus get an opportunity to solve the problem of company investments without resorting to the services of analytical agencies specializing in such studies.
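For comparison with the classical approach, a priority vector can be derived from an ordinary pairwise comparison matrix via its principal eigenvector, as sketched below; the matrix entries are illustrative, not the clinic data.

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # illustrative pairwise comparisons
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                       # normalised priority vector
print("priority vector:", w.round(3))

# Saaty consistency check (random index RI = 0.58 for a 3x3 matrix).
ci = (eigvals.real.max() - 3) / (3 - 1)
print("consistency ratio:", round(ci / 0.58, 3))
```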

Keywords: analytic hierarchy process, decision processes, investment portfolio, intangible assets

Procedia PDF Downloads 273
1085 Flammability and Smoke Toxicity of Rainscreen Façades

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Four façade systems were tested using a reduced-height BS 8414-2 (5 m) test rig. An L-shaped masonry test wall was clad with three types of insulation and an aluminum composite panel with a non-combustible filling (meeting Euroclass A2). A large (3 MW) wooden crib was ignited in a recess at the base of the L, and the fire was allowed to burn for 30 minutes. Air velocity measurements and gas samples were taken from the main ventilation duct and also from a small additional ventilation duct, like those in an apartment bathroom or kitchen. This provided a direct route of travel for smoke from the building façade to a theoretical room, using a design similar to that of many high-rise buildings, where the vent is connected to (approximately) 30 m³ rooms. The times to incapacitation and lethality of the effluent were calculated for both the main exhaust vent and a vent connected to a theoretical 30 m³ room. The rainscreen façade systems tested were combinations commonly seen in tower blocks across the UK: three tests used ACM A2 with stone wool, phenolic foam (PF), and polyisocyanurate (PIR) foam, and a fourth test was conducted with PIR and ACM-PE (polyethylene core). Measurements in the main exhaust duct were representative of the effluent from the burning wood crib. FEDs showed that incapacitation could occur up to 30 times more quickly with combustible insulation than with non-combustible insulation, with lethal gas concentrations accumulating up to 2.7 times faster than in other combinations. The PE-cored ACM/PIR combination produced a ferocious fire, resulting in the termination of the test after 13.5 minutes for safety reasons. Occupants of the theoretical room in the PIR/ACM A2 test reached an FED of 1 after 22 minutes; for PF/ACM A2, this took 25 minutes, and for stone wool, a lethal dose measurement of 0.6 was reached by the end of the 30-minute test. In conclusion, when measuring smoke toxicity in the exhaust duct, there is little difference in smoke toxicity measurements between façade systems; toxicity measured in the main exhaust is largely a result of the wood crib used to ignite the façade system. The addition of a vent allowed smoke toxicity to be quantified in the cavity of the façade, providing a realistic way of measuring the toxicity of smoke that could enter an apartment from a façade fire.
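The FED calculation can be sketched as the running concentration-time dose of each gas divided by a reference incapacitating dose, with FED = 1 marking predicted incapacitation; the concentrations and reference Ct products below are illustrative, not the study's measured data.

```python
import numpy as np

t = np.arange(0, 30, 1.0)      # minutes
co_ppm = 200 * t               # illustrative rising CO concentration
hcn_ppm = 5 * t                # illustrative rising HCN concentration

# Simplified dose terms: cumulative C*t product over an assumed reference dose.
dt = 1.0
fed = np.cumsum(co_ppm) * dt / 35000 + np.cumsum(hcn_ppm) * dt / 2200
print("FED reaches 1 at minute",
      t[np.argmax(fed >= 1.0)] if fed.max() >= 1.0 else "never")
```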

Keywords: smoke toxicity, large-scale testing, BS8414, FED

Procedia PDF Downloads 64
1084 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction

Authors: Ben Haines, Li Bai

Abstract:

Patch-based reconstruction methods have been, and remain, among the top-performing approaches to 3D reconstruction. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though performing well under lab conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions that require expensive energy minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adoption of these methods for industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach to address these problems, utilizing particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to normalised cross correlation. The work also aims to speed up reconstruction using advances in GPU technology and to remove the need for costly initialization and expansion. Through the combination of these enhancements, the intention of this work is to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with accuracy comparable to that of the current top-performing algorithms.
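The photo-consistency score at the heart of such methods is the normalised cross correlation between patch projections; the baseline NCC is sketched below (the adapted version mentioned above would modify this for low-texture regions).

```python
import numpy as np

def ncc(p, q, eps=1e-8):
    """Normalised cross correlation of two equal-size patches, in [-1, 1]."""
    p = (p - p.mean()) / (p.std() + eps)
    q = (q - q.mean()) / (q.std() + eps)
    return float(np.mean(p * q))

rng = np.random.default_rng(0)
patch_a = rng.random((11, 11))                       # patch seen in view A
patch_b = patch_a + rng.normal(0, 0.05, (11, 11))    # noisy re-observation
print("NCC:", round(ncc(patch_a, patch_b), 3))       # near 1 when consistent
```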

Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency

Procedia PDF Downloads 207
1083 Developing a Third Degree of Freedom for Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people's opinions. A model's properties are usually explored by testing its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change a model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. In our work, we therefore analyze how such scale transformations may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model's dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%. Using two models from the standard literature, we show that a scale transformation can transform one model into the other; this transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
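The effect can be illustrated with a toy experiment: the same bounded-confidence dynamics (a standard model from the literature, used here only as an example) run on raw opinions and on monotonically rescaled opinions can settle into different numbers of clusters.

```python
import numpy as np

def bounded_confidence(x, eps=0.2, steps=20000, seed=0):
    """Pairwise averaging when two random agents' opinions are within eps."""
    rng = np.random.default_rng(seed)
    x = x.copy()
    for _ in range(steps):
        i, j = rng.integers(len(x), size=2)
        if abs(x[i] - x[j]) < eps:
            x[i] = x[j] = (x[i] + x[j]) / 2
    return x

rng = np.random.default_rng(1)
raw = rng.random(200)            # opinions measured on one scale
rescaled = raw ** 3              # same opinions after a monotone scale change

for name, opinions in [("raw", raw), ("rescaled", rescaled)]:
    final = np.sort(bounded_confidence(opinions))
    clusters = 1 + int(np.sum(np.diff(final) > 0.05))  # gap-based count
    print(name, "clusters:", clusters)
```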

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 159
1082 Shifting to Electronic Operative Notes in Plastic Surgery

Authors: Samar Mousa, Galini Mavromatidou, Rebecca Shirley

Abstract:

Surgeons carry out numerous operations in a busy burns and plastic surgery department daily. Writing an accurate operation note with all the essential information is crucial for communication, not only within the plastics team but also with the multidisciplinary team looking after the patient, including other specialties, nurses and GPs. The Royal College of Surgeons of England, in its guidelines on good surgical practice, states that the surgeon should ensure there are clear (preferably typed) operative notes for every procedure. The notes should accompany the patient into recovery and to the ward, and should give sufficient detail to enable continuity of care by another doctor. The notes should include: date and time; elective/emergency procedure; names of the operating surgeon and assistant; name of the theatre anaesthetist; operative procedure carried out; incision; operative diagnosis; operative findings; any problems or complications; any extra procedure performed and the reason why it was performed; details of tissue removed, added or altered; identification of any prosthesis used, including the serial numbers of prostheses and other implanted materials; details of closure technique; anticipated blood loss; antibiotic prophylaxis (where applicable); DVT prophylaxis (where applicable); detailed postoperative care instructions; and signature. Fourteen random days in December 2021 were chosen to assess the accuracy of operative notes and post-operative care. A total of 163 operative notes were examined; the average completion rate across all domains was 85.4%. An electronic operative note template was designed to cover all domains mentioned in the Royal College of Surgeons' good surgical practice guidance, and it is kept on the hospital drive for all surgeons to use.

Keywords: operative notes, plastic surgery, documentation, electronic

Procedia PDF Downloads 82
1081 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category-recognition features and individual-recognition features. Current automated face recognition systems extract a feature vector of a specific dimensionality from a facial image using a pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling, an operation that discards the fine details to which forensic experts pay close attention. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of face images collected in natural conditions against known frontal ID photos of the same persons; downscaling and manual handling were performed on the test images. The results support the view that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and of the forensic experts. The experiments showed that the biometric systems were skilled at distinguishing category features, while the forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion is performed at the score level; at the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
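A minimal sketch of score-level fusion, assuming a sum-of-log-likelihood-ratios combination rule and independence between the automated system and the human examiner; the abstract does not specify its fusion rule, so the function names and the threshold below are hypothetical:

```python
import numpy as np

def fuse_llr(system_lr: float, examiner_lr: float) -> float:
    """Combine two likelihood ratios at the score level by summing their
    logs, which (hypothetically) treats the two sources as independent."""
    return np.log(system_lr) + np.log(examiner_lr)

def decide(fused_llr: float, threshold: float = 0.0) -> str:
    # A threshold of 0 corresponds to a fused LR of 1; in practice it
    # would be tuned on validation data to hit a target false-accept rate.
    return "same source" if fused_llr > threshold else "different source"

# System strongly favours same source, examiner mildly disagrees:
print(decide(fuse_llr(system_lr=8.0, examiner_lr=0.5)))  # -> same source
```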

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 132
1080 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies face global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality secures product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Machine learning methods can process large amounts of data, deal with unfavourable row-to-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets: changes in production data manifest themselves as trends, systematic shifts, and seasonal effects. Machine learning applications therefore require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, an AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge-block data from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
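A minimal sketch of the boosting-plus-feature-selection idea using scikit-learn's AdaBoostClassifier on synthetic stand-in data; the data, the importance threshold, and the train/test split are assumptions, since the Bosch data set and the exact pipeline are not published:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cross-process features (machining, assembly,
# end-of-line); only a few columns actually drive the leakage label.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Keep only features whose impurity-based importance clears a threshold,
# then refit on the reduced set; one simple way to prune unstable features.
keep = np.flatnonzero(clf.feature_importances_ > 0.02)
clf_small = AdaBoostClassifier(n_estimators=200, random_state=0)
clf_small.fit(X_train[:, keep], y_train)
print(len(keep), "features kept,",
      f"accuracy {clf_small.score(X_test[:, keep], y_test):.3f}")
```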

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 166
1079 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site that is to be used in future planning and development (P&D), or for further examination, exploration, research and inspection. Surveying and mapping in hard-to-access and hazardous areas are very difficult with traditional techniques and methodologies, which are also time-consuming and labour-intensive and offer limited precision and limited data. In comparison, the advanced techniques require less manpower and provide more precise output with a wide variety of data sets. In this experiment, aerial photogrammetry is used: a UAV flies over an area, captures geocoded images, and a three-dimensional model (3D model) is produced. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, yielding as outputs a dense point cloud, a digital elevation model (DEM) and an orthophoto. The imagery is converted into geospatial data by digitising over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, a digital map of the surveyed area is obtained. In conclusion, the processed data are compared with exact measurements taken on site; the error is accepted provided it does not breach the survey accuracy limits set by the relevant institutions.
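As a worked example of the GSD flight parameter mentioned above, the standard photogrammetric relation links sensor width, focal length, flight altitude and image width; the camera values below are hypothetical, not taken from the paper:

```python
def ground_sampling_distance(sensor_width_mm: float, focal_length_mm: float,
                             altitude_m: float, image_width_px: int) -> float:
    """Standard GSD relation: ground footprint of one pixel, in cm per pixel."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# Hypothetical camera: 13.2 mm sensor, 8.8 mm lens, 5472 px image, 120 m altitude.
print(f"GSD ~ {ground_sampling_distance(13.2, 8.8, 120, 5472):.2f} cm/px")
# -> roughly 3.3 cm/px, i.e. each pixel covers about 3.3 cm of ground
```

Mission planners invert this relation to choose the flight altitude that achieves a target GSD.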

Keywords: photogrammetry, post-processed kinematic, real-time kinematic, manual data inquiry

Procedia PDF Downloads 37