Search results for: empathic accuracy

1135 Multi-Temporal Urban Land Cover Mapping Using Spectral Indices

Authors: Mst Ilme Faridatul, Bo Wu

Abstract:

Multi-temporal urban land cover mapping is of paramount importance for monitoring urban sprawl and managing the ecological environment. Because of diversified urban activities, it is challenging to map land covers in a complex urban environment. Spectral indices have proved to be effective for mapping urban land covers. To improve multi-temporal urban land cover classification and mapping, we evaluate the performance of three spectral indices: the modified normalized difference bare-land index (MNDBI), the tasseled cap water and vegetation index (TCWVI) and the shadow index (ShDI). The MNDBI is developed to enhance urban impervious areas by separating bare lands. The TCWVI, a tasseled cap index, is developed to detect vegetation and water simultaneously. The ShDI is developed to maximize the spectral difference between the shadows of skyscrapers and water and thereby enhance water detection. First, this paper presents a comparative analysis of the three spectral indices using Landsat Enhanced Thematic Mapper (ETM), Thematic Mapper (TM) and Operational Land Imager (OLI) data. Second, optimized thresholds of the spectral indices are applied to classify land covers, and finally, their performance in enhancing multi-temporal urban land cover mapping is assessed. The results indicate that the spectral indices are competent to enhance multi-temporal urban land cover mapping and achieve an overall classification accuracy of 93-96%.
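
As a minimal illustration of how such indices are computed and thresholded, the Python sketch below derives a generic normalized-difference index from two Landsat bands and converts it to a binary mask; the band pairing and the cut-off value are assumptions for demonstration, not the exact MNDBI/TCWVI/ShDI formulations of the paper.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index: (A - B) / (A + B)."""
    band_a = band_a.astype(float)
    band_b = band_b.astype(float)
    return (band_a - band_b) / (band_a + band_b + 1e-10)

# Illustrative stand-ins for Landsat OLI surface-reflectance bands.
rng = np.random.default_rng(0)
swir1 = rng.random((100, 100))   # e.g. band 6
nir = rng.random((100, 100))     # e.g. band 5

# A bare-land oriented index in the spirit of MNDBI (band choice assumed here).
bare_index = normalized_difference(swir1, nir)

# Threshold the index to produce a binary bare-land mask; the cut-off would be
# optimized against reference data, as the abstract describes.
threshold = 0.1
bare_mask = bare_index > threshold
print("Bare-land fraction:", bare_mask.mean())
```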

Keywords: land cover, mapping, multi-temporal, spectral indices

Procedia PDF Downloads 153
1134 Challenges and Insights by Electrical Characterization of Large Area Graphene Layers

Authors: Marcus Klein, Martina GrießBach, Richard Kupke

Abstract:

The current advances in the research and manufacturing of large area graphene layers are promising for the introduction of this exciting material in the display industry and other applications that benefit from its excellent electrical and optical characteristics. New production technologies in the fabrication of flexible displays, touch screens or printed electronics apply graphene layers on non-metal substrates and bring new challenges to the required metrology. Traditional measurement concepts for layer thickness, sheet resistance, and layer uniformity are difficult to apply to graphene production processes and are often harmful to the product layer. New non-contact sensor concepts are required to meet these challenges and the foreseeable inline production of large area graphene. Dedicated non-contact measurement sensors are a pioneering method to address these issues in a large variety of applications, while significantly lowering the costs of development and process setup. Transferred and printed graphene layers can be characterized with high accuracy over a wide measurement range at very high resolution. Large area graphene mappings are applied for process optimization and for efficient quality control of transfer, doping, annealing and stacking processes. Examples of doped, defective and excellent graphene are presented as quality images, and implications for manufacturers are explained.

Keywords: graphene, doping and defect testing, non-contact sheet resistance measurement, inline metrology

Procedia PDF Downloads 307
1133 Prevalence of Anxiety and Depression: A Descriptive Cross-Sectional Study among Individuals with Substance-Related Disorders in Argentina

Authors: Badino Manuel, Farias María Alejandra

Abstract:

Anxiety and depression are considered the main mental health issues found in people with substance-related disorders. Furthermore, substance-related disorders, anxiety-related and depressive disorders are among the leading causes of disability and are associated with increased mortality. The co-occurrence of substance-related disorders and these mental health conditions affects the accuracy of diagnosis, the treatment plan, and the recovery process. The aim is to describe the prevalence of anxiety and depression in patients with substance-related disorders in a mental health service in Córdoba, Argentina. A descriptive cross-sectional study was conducted among patients with substance-related disorders (N=305). Anxiety and depression were assessed using the Patient Health Questionnaire-4 (PHQ-4) during the period from December 2021 to March 2022. Of the 305 participants, 71.8% were male, 25.6% female and 2.6% non-binary. As regards marital status, 51.5% were single, 21.6% in a couple, 5.9% married, 15.4% separated and 5.6% divorced. In relation to education status, 26.2% had finished university, 56.1% high school, 16.4% only primary school, and 1.3% had no formal schooling. Regarding age, 10.8% were young, 84.3% were adults, and 4.9% were elderly. In-person treatment accounted for 64.6% of service users, and 35.4% were treated through teleconsultation. 15.7% of service users scored 3 or higher for anxiety, and 32.1% scored 3 or higher for depression on the PHQ-4. 13.1% obtained a score of 3 or higher for both anxiety and depression. It is recommended to screen for anxiety and depression among patients with substance-related disorders to improve the quality of diagnosis, treatment, and recovery. It is suggested to apply the PHQ-4 and PHQ-9 within the protocol of care for these patients.
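
For readers unfamiliar with the instrument, the short Python sketch below scores the two PHQ-4 subscales and applies the cut-off of 3 used above; the item ordering (two GAD-2 anxiety items followed by two PHQ-2 depression items) follows the standard questionnaire.

```python
def phq4_screen(item_scores):
    """
    item_scores: four integers (0-3), ordered as the two anxiety items (GAD-2)
    followed by the two depression items (PHQ-2).
    Returns subscale totals and whether each reaches the cut-off of 3.
    """
    anxiety = sum(item_scores[:2])
    depression = sum(item_scores[2:])
    return {
        "anxiety_score": anxiety,
        "depression_score": depression,
        "anxiety_positive": anxiety >= 3,
        "depression_positive": depression >= 3,
    }

# Example: a respondent scoring 2+1 on the anxiety items and 1+1 on the depression items.
print(phq4_screen([2, 1, 1, 1]))
```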

Keywords: addiction, anxiety, depression, mental health

Procedia PDF Downloads 102
1132 In and Out-Of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of a group of commodity exporting countries' nominal exchange rate movements relative to the US dollar. Using a series of Unrestricted Self-Exciting Threshold Autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated through mean absolute error measures computed on 253-month rolling-window forecasts and extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR) and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have quite poor relative forecasting performance, especially when compared to alternative non-linear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with an inner unit root behaviour in five of the sixteen countries analyzed.
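
To make the estimation and evaluation pipeline concrete, the sketch below fits a two-regime SETAR(1) by grid search over the threshold and compares its rolling one-step-ahead mean absolute error against a linear AR(1) on simulated data; the simulated series, window length and threshold grid are illustrative assumptions, not the paper's dataset or exact specification.

```python
import numpy as np

def fit_setar1(y, taus):
    """Fit a two-regime SETAR(1) by grid search over the threshold tau,
    with OLS in each regime; returns (tau, [params_low, params_high])."""
    x, z = y[:-1], y[1:]
    best = None
    for tau in taus:
        low = x <= tau
        if low.sum() < 5 or (~low).sum() < 5:      # keep both regimes populated
            continue
        sse, params = 0.0, []
        for mask in (low, ~low):
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta = np.linalg.lstsq(X, z[mask], rcond=None)[0]
            sse += ((z[mask] - X @ beta) ** 2).sum()
            params.append(beta)
        if best is None or sse < best[0]:
            best = (sse, tau, params)
    return best[1], best[2]

def setar_forecast(y_last, tau, params):
    a, b = params[0] if y_last <= tau else params[1]
    return a + b * y_last

# Simulated price differential with a threshold effect (inner regime near unit root).
rng = np.random.default_rng(0)
y = np.zeros(400)
for t in range(1, 400):
    b = 0.95 if abs(y[t - 1]) < 1.0 else 0.5
    y[t] = b * y[t - 1] + rng.normal(scale=0.2)

window, errs_setar, errs_ar1 = 250, [], []
for t in range(window, len(y) - 1):
    hist = y[t - window:t]
    tau, params = fit_setar1(hist, np.quantile(hist, np.linspace(0.15, 0.85, 20)))
    errs_setar.append(abs(y[t + 1] - setar_forecast(y[t], tau, params)))
    X = np.column_stack([np.ones(window - 1), hist[:-1]])
    a1, b1 = np.linalg.lstsq(X, hist[1:], rcond=None)[0]
    errs_ar1.append(abs(y[t + 1] - (a1 + b1 * y[t])))

print("MAE SETAR:", np.mean(errs_setar), " MAE AR(1):", np.mean(errs_ar1))
```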

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 278
1131 A Comparative Analysis of the Indoor Thermal Environment of a Room with and without Transitional Space or Threshold in Traditional Row Houses Adjacent to a Narrow Alley 'Rupchan Lane' in Old Dhaka, Bangladesh

Authors: Fatema Tasmia, Brishti Majumder, Atiqur Rahman

Abstract:

Attaining appropriate thermal comfort conditions in a place where the climate is hot and humid can be perplexing. Especially in a congested place like Old Dhaka, Bangladesh, providing cross ventilation and proper building orientation is quite difficult. This paper aims to investigate, through field measurements, the indoor thermal environment of a room with and without a transitional space or threshold in traditional row houses adjacent to a narrow alley of Old Dhaka. Transitional spaces are the parts of buildings that are used for semi-outdoor household activities and social gathering, and they have also been shown to affect the indoor thermal environment. The field study was conducted by collecting thermal data (temperature, humidity and airflow) in the outdoor narrow alley, the transitional space and the adjacent indoor space, respectively. This east-west elongated alley has an average width of 2.13 meters (varying from 1.5 to 2.6 meters) and holds row houses on both sides. Among the different aspects of the thermal environment, the analysis in this paper is based on the temperature of the corresponding cases. Other aspects and their variables (especially material) were held constant for accuracy and to avoid confusion. This study focuses on an outcome that can ultimately contribute to the configuration of row houses with transitional spaces and their relation to the adjacent outdoor space while achieving thermal comfort.

Keywords: alley, Old-Dhaka, row houses, temperature, thermal comfort, threshold, transitional space

Procedia PDF Downloads 187
1130 Valence and Arousal-Based Sentiment Analysis: A Comparative Study

Authors: Usama Shahid, Muhammad Zunnurain Hussain

Abstract:

This research paper presents a comprehensive analysis of a sentiment analysis approach that employs valence and arousal as its foundational pillars, in comparison to traditional techniques. Sentiment analysis is an indispensable task in natural language processing that involves the extraction of opinions and emotions from textual data. The valence and arousal dimensions, representing the positivity/negativity and the intensity of emotions, respectively, enable the creation of four quadrants, each representing a specific emotional state. The study seeks to determine the impact of utilizing these quadrants to identify distinct emotional states on the accuracy and efficiency of sentiment analysis, in comparison to traditional techniques. The results reveal that the valence and arousal-based approach outperforms other approaches, particularly in identifying nuanced emotions that may be missed by conventional methods. The study's findings are crucial for applications such as social media monitoring and market research, where the accurate classification of emotions and opinions is paramount. Overall, this research highlights the potential of using valence and arousal as a framework for sentiment analysis and offers invaluable insights into the benefits of incorporating specific types of emotions into the analysis. These findings have significant implications for researchers and practitioners in the field of natural language processing, as they provide a basis for the development of more accurate and effective sentiment analysis tools.
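
The quadrant idea can be made concrete with a small Python sketch that maps a (valence, arousal) pair onto one of four emotional states; the score range and the example emotion labels are illustrative assumptions rather than the paper's exact scheme.

```python
def emotion_quadrant(valence, arousal):
    """
    Map a (valence, arousal) pair, each assumed to lie in [-1, 1],
    to one of the four quadrants described above. Labels are illustrative.
    """
    if valence >= 0 and arousal >= 0:
        return "high-arousal positive (e.g. excitement)"
    if valence < 0 and arousal >= 0:
        return "high-arousal negative (e.g. anger)"
    if valence < 0 and arousal < 0:
        return "low-arousal negative (e.g. sadness)"
    return "low-arousal positive (e.g. calm)"

# A hypothetical lexicon- or model-based scorer would first estimate valence and
# arousal for a text, then map the result to a quadrant:
print(emotion_quadrant(0.7, 0.8))    # -> high-arousal positive
print(emotion_quadrant(-0.4, -0.6))  # -> low-arousal negative
```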

Keywords: sentiment analysis, valence and arousal, emotional states, natural language processing, machine learning, text analysis, sentiment classification, opinion mining

Procedia PDF Downloads 101
1129 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps

Authors: Ugochukwu Ejike Akpudo, Jank-Wook Hur

Abstract:

The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly increasing research on the emerging condition-based maintenance concept: prognostics and health management (PHM). Varieties of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have daunting effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to a surge in research on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based and the more robust time-frequency-based features from vibration signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of manifold learning algorithms such as locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that LLE as a comprehensive feature extraction technique yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs than the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation.
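
A minimal sketch of the feature-fusion step, assuming scikit-learn and synthetic stand-in features; real inputs would be the extracted time-, frequency- and time-frequency-domain features of the pump vibration signals.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: rows are vibration-signal windows, columns are
# extracted features; labels are pump condition classes (purely synthetic here).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 3, size=200)          # e.g. healthy / worn / faulty

# Fuse the comprehensive feature set into a low-dimensional embedding with LLE.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3, random_state=0)
X_embedded = lle.fit_transform(X)

# Assess how well the fused features separate the (synthetic) condition classes.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_embedded, y, cv=5)
print("Mean CV accuracy on LLE features:", scores.mean())
```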

Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion

Procedia PDF Downloads 117
1128 Revisiting the Historical Narratives of the Old Churches in Albay, Bikol Region, Philippines

Authors: Ruby Ann L. Ayo

Abstract:

As cultural heritage reflects the historical origin of a certain group of people, it reveals the customs, traits, beliefs, practices and even values they have held for years. One of the tangible examples of cultural heritage is physical structures, including old churches. The study looked into the existing historical narratives of the century-old Catholic churches in the Province of Albay, Bikol Region, Philippines: Nuestra Señora de Salvacion in Joroan, Tiwi, Albay; Our Lady of the Gate in Daraga, Albay; San Juan de Bautista in Tabaco City; and St. John the Baptist in Camalig, Albay. The historical narratives were analysed in terms of the validity and reliability of the secondary documents with reference to the elements of history, revealing the consistency and adequacy of historical facts. The contents were examined using a modified Checklist of Historical Documents. The historical narratives were likewise submitted to a content expert for validation as regards historical authenticity and accuracy. The contents of the narratives were scrutinized according to the following codes: (1.1) the patron saints; (1.2) factors that paved the way for their construction; (1.3) the people responsible for their construction; (1.4) the misconceptions about their construction; and (1.5) their contributions to Bikol heritage. Based on the codes, themes were identified as: (2.1) Marian devotees and Christ-centered patron saints; (2.2) geographical, socio-political and cultural factors; (2.3) church and government officials; (2.4) misconceptions on the dates of construction and original sites; and (2.5) popular pilgrim sites and well-admired architectural designs.

Keywords: historical narratives, old churches, cultural heritage, historical validity and reliability, elements of history

Procedia PDF Downloads 294
1127 A Boundary-Fitted Nested Grid Model for Modeling the Propagation of the 2004 Indonesian Tsunami along Southern Thailand

Authors: Fazlul Karim, Esa Al-Islam

Abstract:

Many problems in oceanography and environmental sciences require the solution of the shallow water equations on physical domains having curvilinear coastlines and abrupt changes of ocean depth near the shore. Finite-difference techniques for the shallow water equations that represent the boundary as stair steps may give inaccurate results near the coastline, where results are of greatest interest for various applications. This suggests the use of methods which are capable of incorporating the irregular boundary in coastal belts. At the same time, large velocity gradients are expected near the beach and islands, as the water depth varies abruptly near the coast. A nested numerical scheme with fine resolution over the region of interest, where the velocity changes rapidly, is the best resort to enhance numerical accuracy with the fewest grid points, since such resolution is unnecessary away from that region. This paper describes the development of a boundary-fitted nested grid (BFNG) model to compute the propagation of the 2004 Indonesian tsunami in Southern Thailand coastal waters. In this paper, we develop a numerical model employing a shallow water nested model and an orthogonal boundary-fitted grid to investigate the tsunami impact on Southern Thailand due to the Indonesian tsunami of 2004. Comparisons of water surface elevation obtained from numerical simulations and field measurements are made.
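
For orientation, the toy Python sketch below advances the 1D linearized shallow water equations on a uniform staggered grid with a simple finite-difference scheme; it only shows the basic machinery and does not include the boundary-fitted curvilinear coordinates or grid nesting that the paper develops.

```python
import numpy as np

# Minimal 1D linearized shallow-water step on a uniform staggered grid.
g, depth = 9.81, 50.0                  # gravity (m/s^2), still-water depth (m)
nx, dx = 200, 1000.0                   # number of cells, grid spacing (m)
dt = 0.5 * dx / np.sqrt(g * depth)     # CFL-limited time step

eta = np.exp(-((np.arange(nx) - 100) * dx / 20e3) ** 2)  # initial surface hump (m)
u = np.zeros(nx + 1)                                     # velocities at cell faces

for _ in range(500):
    # Momentum: du/dt = -g * d(eta)/dx  (interior faces; closed boundaries at ends)
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # Continuity: d(eta)/dt = -depth * du/dx
    eta -= dt * depth * (u[1:] - u[:-1]) / dx

print("Max surface elevation after 500 steps:", eta.max())
```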

Keywords: Indonesian tsunami of 2004, Boundary-fitted nested grid model, Southern Thailand, finite difference method

Procedia PDF Downloads 441
1126 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques

Authors: S. Visetpotjanakit, C. Khrautongkieo

Abstract:

Radiation monitoring in the environment and foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP) as the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and environment from any radiological incidents. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods, i.e. direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria, i.e. accuracy, precision and trueness, and obtained 'Accepted' statuses. These results confirm the data quality of the OAP environmental radiation laboratory for monitoring radiation in the environment.

Keywords: international atomic energy agency, proficiency test, radiation monitoring, seawater

Procedia PDF Downloads 171
1125 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is significantly increasing, and major investments are being channelled into the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on and driven by the prediction of wind speed, which by the nature of wind is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study aims to investigate the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in the accuracy of predictions: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and pre-processing the signals for wind speed forecasting models.
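
A minimal sketch of the prediction step, assuming scikit-learn's Gaussian process regressor, a synthetic wind-speed series, and a simple moving-average smoother standing in for the EWT denoising; the multi-fidelity structure of the paper's model is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for a wind-speed series plus its vector components (u, v).
rng = np.random.default_rng(2)
t = np.linspace(0, 30, 600)
speed = 8 + 2 * np.sin(0.4 * t) + rng.normal(scale=0.8, size=t.size)
speed_denoised = np.convolve(speed, np.ones(5) / 5, mode="same")  # stand-in for EWT
u, v = speed_denoised * np.cos(0.2 * t), speed_denoised * np.sin(0.2 * t)

# Predictors: time plus vector components; target: wind speed a few steps ahead.
horizon = 10
X = np.column_stack([t, u, v])[:-horizon]
y = speed_denoised[horizon:]
split = 450
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)
rmse = mean_squared_error(y_test, gpr.predict(X_test)) ** 0.5
print("RMSE with vector-component predictors:", rmse)
```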

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 135
1124 On the Development of Medical Additive Manufacturing in Egypt

Authors: Khalid Abdelghany

Abstract:

Additive Manufacturing (AM) is the manufacturing technology used to fabricate products directly from CAD models in a very short time and with a minimum of operation steps. Together with the advancement in medical computer modeling, AM has proved to be a very efficient tool to help physicians, orthopedic surgeons and dentists design and fabricate patient-tailored surgical guides, templates and customized implants from the patient's CT/MRI images. AM, jointly with computer-aided design/computer-aided manufacturing (CAD/CAM) technology, has enabled medical practitioners to tailor physical models in a patient- and purpose-specific fashion and has helped in the design and manufacture of templates, appliances and devices with a high degree of accuracy using biocompatible materials. In developing countries, there are some technical and financial limitations to implementing such advanced tools as an essential part of medical applications. The CMRDI institute in Egypt has been working in the field of medical additive manufacturing since 2003 and has assisted in the recovery of hundreds of poor patients using these advanced tools. This paper focuses on the surgical and dental use of 3D printing technology in Egypt as a developing country. The presented case studies have been designed and processed using the software tools and additive manufacturing machines in CMRDI through cooperative engineering and medical work. Results showed that the implementation of additive manufacturing tools in developing countries is successful and could be economical compared with long treatment plans.

Keywords: additive manufacturing, dental and orthopaedic stents, patient specific surgical tools, titanium implants

Procedia PDF Downloads 315
1123 Research on the United Navigation Mechanism of Land, Sea and Air Targets under Multi-Sources Information Fusion

Authors: Rui Liu, Klaus Greve

Abstract:

Navigation information is a kind of dynamic geographic information, and a navigation information system is a special kind of geographic information system. At present, there is much research on the application of centralized management and cross-integration of basic geographic information. However, the idea of information integration and sharing has not been deeply applied in research on navigation information services. Moreover, the imperfection of the navigation target coordination and navigation information sharing mechanisms under certain navigation tasks has greatly affected the reliability and scientific soundness of navigation services such as path planning. Considering this, the project intends to study the multi-source information fusion and multi-objective united navigation information interaction mechanism: first, investigate the actual needs of navigation users in different areas, and establish a preliminary navigation information classification and importance level model; then analyze the characteristics of remote sensing and GIS vector data, and design the fusion algorithm with a view to improving positioning accuracy and extracting navigation environment data. Finally, the project intends to analyze the features of the navigation information of land, sea and air navigation targets, design a united navigation data standard and navigation information sharing model under certain navigation tasks, and establish a test navigation system for a united navigation simulation experiment. The aim of this study is to explore the theory of united navigation service and optimize the navigation information service model, which will lay the theoretical and technological foundation for the united navigation of land, sea and air targets.

Keywords: information fusion, united navigation, dynamic path planning, navigation information visualization

Procedia PDF Downloads 288
1122 The Application and Relevance of Costing Techniques in Service-Oriented Business Organizations: A Review of the Activity-Based Costing (ABC) Technique

Authors: Udeh Nneka Evelyn

Abstract:

The shortcomings of traditional costing systems in terms of validity, accuracy, consistency, and relevance have increased the need for modern management accounting systems. Activity-Based Costing (ABC) can be used as a modern tool for planning, control and decision making by management. Past studies on ABC systems have focused on manufacturing firms, leaving studies on service firms relatively scarce. This paper reviewed the application and relevance of the activity-based costing technique in service-oriented business organizations by employing a qualitative research method which relied heavily on a literature review of past and current relevant articles focusing on ABC. Findings suggest that ABC is not only appropriate for use in a manufacturing environment; it is also most appropriate for service organizations such as financial institutions, the healthcare industry and government organizations. In fact, some banking and financial institutions have been applying the concept for years under other names. One of them is unit costing, which is used to calculate the cost of banking services by determining the cost and consumption of each unit of output of the functions required to deliver the service. ABC, in very basic terms, may provide a very good payback for businesses. Some of the benefits that relate directly to the financial services industry are: identification of the most profitable customers, more accurate product and service pricing, increased product profitability, and well-organized process costs.
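
As a worked illustration of the unit-costing idea mentioned above, the Python sketch below allocates activity-pool costs to one banking service via cost-driver rates; the activities, costs and consumption figures are invented for demonstration only.

```python
# Hypothetical activity pools, driver volumes and per-service consumption for
# one banking service (all figures are invented for illustration).
activities = {
    # activity name:          (annual pool cost, annual driver volume, units consumed per service)
    "account opening":        (120_000.0, 8_000, 1),
    "transaction processing": (450_000.0, 1_500_000, 35),
    "customer support calls": (200_000.0, 50_000, 2),
}

def abc_unit_cost(activities):
    """Sum (pool cost / driver volume) * units consumed over all activities."""
    total = 0.0
    for name, (pool_cost, driver_volume, consumed) in activities.items():
        rate = pool_cost / driver_volume          # cost per unit of driver
        total += rate * consumed
        print(f"{name}: rate {rate:.4f} x {consumed} = {rate * consumed:.2f}")
    return total

print("ABC unit cost of the service:", round(abc_unit_cost(activities), 2))
```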

Keywords: business, costing, organizations, planning, techniques

Procedia PDF Downloads 240
1121 Parallel Self Organizing Neural Network Based Estimation of Archie’s Parameters and Water Saturation in Sandstone Reservoir

Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi

Abstract:

Determination of water saturation in sandstone is vital for determining the initial oil or gas in place in reservoir rocks. Water saturation determination using electrical measurements relies mainly on Archie's formula. Consequently, the accuracy of Archie's formula parameters strongly affects the computed water saturation values. Determination of Archie's parameters a, m, and n proceeds by three techniques: the conventional technique, Core Archie-Parameter Estimation (CAPE) and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accepted values of Archie's parameters and, consequently, reliable water saturation values. This work focuses on the Archie's parameter determination techniques (the conventional technique, the CAPE technique, and the 3-D technique) and then on the calculation of water saturation from the resulting parameters. Using the same data, a hybrid parallel self-organizing neural network (PSONN) algorithm is used to estimate Archie's parameters and predict water saturation. Results have shown that the estimated Archie's parameters m, a, and n are highly acceptable, with the statistical analysis indicating that the PSONN model has a lower statistical error and a higher correlation coefficient. This study was conducted using a high number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques for determining Archie's parameters and thereby calculating water saturation profiles.
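
Since the whole workflow hinges on Archie's equation, the short Python sketch below computes water saturation from illustrative resistivity, porosity and parameter values; the numbers are assumptions for demonstration, not data from the 144 core plugs.

```python
def archie_water_saturation(rw, rt, porosity, a=1.0, m=2.0, n=2.0):
    """
    Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1 / n)
    rw: formation-water resistivity (ohm.m), rt: true formation resistivity (ohm.m),
    porosity: fractional porosity. a, m, n are Archie's parameters.
    """
    return ((a * rw) / (porosity ** m * rt)) ** (1.0 / n)

# Example with illustrative values (a, m, n would come from the PSONN or from the
# conventional / CAPE / 3-D techniques described above).
sw = archie_water_saturation(rw=0.05, rt=20.0, porosity=0.22, a=1.0, m=1.9, n=2.1)
print(f"Water saturation: {sw:.3f}")
```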

Keywords: water saturation, Archie’s parameters, artificial intelligence, PSONN, sandstone reservoir

Procedia PDF Downloads 128
1120 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials

Authors: Rajesh Kumar G

Abstract:

A challenging aspect of any clinical trial is to carefully plan the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, there is not much historical data available for pediatrics, which is required to validate assumptions for planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study and with the available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing the historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and utilizing the advantages of a hybrid study design, which will help achieve the study objective more smoothly even in the presence of many constraints. This research paper explains how a hybrid study design can be planned along with an integrated technique (SEV) to plan the pediatric study. In brief, the SEV technique (Simulation, Estimation using borrowed adult data and Bayesian methods, and Validation) involves simulating the planned study data and obtaining the desired estimates to validate the assumptions. This method of validation can be used to improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, which allows us to make informed decisions well ahead of study initiation. With professional precision, this technique, based on the collected data, allows researchers to gain insight into best practices when using data from a historical study and simulated data alike.

Keywords: adaptive design, simulation, borrowing data, Bayesian model

Procedia PDF Downloads 76
1119 Satellite Technology Usage for Greenhouse Gas Emissions Monitoring and Verification: Policy Considerations for an International System

Authors: Timiebi Aganaba-Jeanty

Abstract:

Accurate and transparent monitoring, reporting and verification of greenhouse gas (GHG) emissions and removals is a requirement of the United Nations Framework Convention on Climate Change (UNFCCC). Several countries are obligated to prepare and submit an annual national greenhouse gas inventory covering anthropogenic emissions by sources and removals by sinks, subject to a review conducted by an international team of experts. However, the process is not without flaws. The self-reporting varies enormously in thoroughness, frequency and accuracy, including inconsistency in the way such reporting occurs. The world's space agencies are calling for a new generation of satellites that would be precise enough to map greenhouse gas emissions from individual nations. The plan is politically delicate because the global system could verify or cast doubt on emission reports from the member states of the UNFCCC. A level playing field is required, and an international system should be perceived as an instrument to facilitate fairness and equality rather than as a tool to spy on or punish. This change of perspective is required to get buy-in for an international verification system. The research examines the viability of a satellite system that provides independent access to data regarding greenhouse gas emissions, and the policy and governance implications of its potential use as a monitoring and verification system for the Paris Agreement. It assesses the foundations of the reporting, monitoring and verification system as proposed in Paris and analyzes this in light of a proposed satellite system. The use of remote sensing technology has been debated for verification purposes and as evidence in courts, but this is not without controversy. Lessons can be learned from its use in this context.

Keywords: greenhouse gas emissions, reporting, monitoring and verification, satellite, UNFCCC

Procedia PDF Downloads 286
1118 Improving Cheon-Kim-Kim-Song (CKKS) Performance with Vector Computation and GPU Acceleration

Authors: Smaran Manchala

Abstract:

Homomorphic Encryption (HE) enables computations on encrypted data without requiring decryption, mitigating data vulnerability during processing. Usable Fully Homomorphic Encryption (FHE) could revolutionize secure data operations across cloud computing, AI training, and healthcare, providing both privacy and functionality; however, the computational inefficiency of schemes like Cheon-Kim-Kim-Song (CKKS) hinders their widespread practical use. This study focuses on optimizing CKKS for faster matrix operations through the implementation of vector computation parallelization and GPU acceleration. The variable effects of vector parallelization on GPUs were explored, recognizing that while parallelization typically accelerates operations, it can introduce overhead that results in slower runtimes, especially in smaller, less computationally demanding operations. To assess performance, two neural network models, an MLPN and a CNN, were tested on the MNIST dataset using both ARM and x86-64 architectures, with the CNN chosen for its higher computational demands. Each test was repeated 1,000 times, and outliers were removed via Z-score analysis to measure the effect of vector parallelization on CKKS performance. Model accuracy was also evaluated under CKKS encryption to ensure the optimizations did not compromise results. According to the results of the trial runs, applying vector parallelization yielded a 2.63X overall efficiency increase, with a 1.83X performance increase for x86-64 over the ARM architecture. Overall, these results suggest that the application of vector parallelization in tandem with GPU acceleration significantly improves the efficiency of CKKS even while accounting for vector parallelization overhead, providing impact for future zero-trust operations.
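
The benchmarking protocol (repeated timing runs with Z-score outlier removal) can be sketched in a few lines of Python; the workloads below are plain NumPy stand-ins, so a real comparison would substitute CKKS ciphertext operations from an HE library.

```python
import time
import numpy as np

def time_runs(fn, repeats=200):
    """Time fn() repeatedly and drop outliers with a |z| < 3 filter,
    mirroring the Z-score outlier removal described in the abstract."""
    runtimes = np.empty(repeats)
    for i in range(repeats):
        start = time.perf_counter()
        fn()
        runtimes[i] = time.perf_counter() - start
    z = (runtimes - runtimes.mean()) / (runtimes.std() + 1e-12)
    return runtimes[np.abs(z) < 3].mean()

# Stand-in workloads: an element-wise Python loop versus a vectorized NumPy call.
x = np.random.rand(100_000)
y = np.random.rand(100_000)

looped = time_runs(lambda: [xi * yi for xi, yi in zip(x, y)])
vectorized = time_runs(lambda: x * y)
print(f"Vectorized speed-up: {looped / vectorized:.1f}x")
```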

Keywords: CKKS scheme, runtime efficiency, fully homomorphic encryption (FHE), GPU acceleration, vector parallelization

Procedia PDF Downloads 23
1117 Grain Selection in Spiral Grain Selectors during Casting Single-Crystal Turbine Blades

Authors: M. Javahar, H. B. Dong

Abstract:

Single crystal components manufactured using Ni-base superalloys are routinely used in the hot sections of aero engines and industrial gas turbines due to their outstanding high temperature strength, toughness and resistance to degradation in corrosive and oxidative environments. To control the quality of single crystal turbine blades, particular attention has been paid to grain selection, which is used to obtain the single crystal morphology from a plethora of columnar grains. For this purpose, different designs of grain selectors are employed, and the most common type is the spiral grain selector. A typical spiral grain selector includes a starter block and a spiral (helix) located above it. It has been found that the grains with orientation well aligned to the thermal gradient survive in the starter block by competitive grain growth, while the selection of the single crystal grain occurs in the spiral part. In the present study, 2D spiral selectors with different geometries were designed and produced using a state-of-the-art Bridgman directional solidification casting furnace to investigate the competitive growth during grain selection in 2D grain selectors. The principal advantage of using a 2D selector is to facilitate the wax injection process in investment casting by enabling a significant degree of automation. The automation within the process can be achieved by producing 2D grain selector wax pattern parts using a split die (metal mold) coupled with a wax injection stage. This will not only produce the part with high accuracy but also at an acceptable production rate.

Keywords: grain selector, single crystal, directional solidification, CMSX-4 superalloys, investment casting

Procedia PDF Downloads 587
1116 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition of irreversible blindness; early diagnosis and appropriate interventions enable patients to keep their sight for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when pressure builds up around the eyes; it causes damage to the optic nerves and deterioration of vision. Glaucoma progresses through different levels, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photo (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. With its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and quickness of OCT or HRT images, especially in the early stages, there are still difficulties, and mistakes occur in the diagnosis of glaucoma. It is difficult to obtain objective results in the doctors' diagnosis and grading process. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma in patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for the use of doctors. Computer software based on pattern recognition would help doctors make an objective evaluation of their patients. It is intended that, after the development and evaluation of the software, the system will serve doctors in different hospitals.

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 302
1115 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has improved in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight into the most important features contributing to the failure. By analyzing and quantifying feature importance in predictive maintenance models, cost saving can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost saving by prioritizing sensor deployment, data collection, and data processing of more important features over less important features.
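
A minimal sketch of the first two steps, assuming scikit-learn and synthetic data; permutation importance is used here as a simple, model-agnostic stand-in for the SHAP analysis described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.inspection import permutation_importance

# Synthetic stand-in for multi-class machine-failure data.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Step 1: evaluate multiple classifiers with cross-validation.
for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())

# Step 2: rank features by predictive power (model-agnostic permutation importance).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("Features ranked by importance:", ranking)
```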

Keywords: automated supply chain, intelligent manufacturing, predictive maintenance machine learning, feature engineering, model interpretation

Procedia PDF Downloads 133
1114 Determining the Performance of Data Mining Algorithms in Identifying the Influential Factors and Predicting Ischemic Stroke: A Comparative Study in the Southeast of Iran

Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard

Abstract:

Ischemic stroke is one of the common causes of disability and mortality; it is the fourth leading cause of death in the world, and the third according to some other sources. Only one third of patients with ischemic stroke fully recover, one third are left with permanent disability, and one third die. Thus, the use of predictive models to predict stroke plays a vital role in reducing the complications and costs related to this disease. The aim of this study was therefore to specify the effective factors and predict ischemic stroke with the help of DM methods. The present study was a descriptive-analytic study. The population was 213 cases from among patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. This study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS-19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most effective factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability to determine the factors affecting ischemic stroke. Thus, creating predictive models through this algorithm will play a significant role in decreasing the mortality and disability caused by ischemic stroke.
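
As a hedged illustration of the modelling step, the Python sketch below trains a CART decision tree on synthetic binary risk factors; CHAID itself is not available in scikit-learn, so the CART tree is only a stand-in, and the data are simulated, not the Zahedan cohort.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical binary risk factors and a simulated stroke outcome label.
rng = np.random.default_rng(0)
factors = ["anemia", "diabetes", "hyperlipidemia", "TIA", "CAD", "atherosclerosis"]
X = rng.integers(0, 2, size=(213, len(factors)))
risk = X @ np.array([0.5, 1.0, 0.8, 1.2, 0.9, 1.1])
y = (risk + rng.normal(scale=0.8, size=213) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# CART decision tree as an illustrative stand-in for the tree-based modelling above.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("Hold-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=factors))
```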

Keywords: data mining, ischemic stroke, decision tree, Bayesian network

Procedia PDF Downloads 174
1113 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates for the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
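
To show what a Fisher-scoring iteration looks like in practice, the sketch below fits a simple nonlinear mean model by the scoring (Gauss-Newton) update in Python rather than Mathematica; the exponential mean model and simulated data are illustrative assumptions, not the paper's composite-distribution regression.

```python
import numpy as np

# Fisher scoring (equivalent to Gauss-Newton under Gaussian errors) for a
# nonlinear mean model mu_i = exp(x_i' beta).
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
y = np.exp(X @ beta_true) + rng.normal(scale=0.2, size=n)

# Start from a log-linear least-squares fit, then iterate the scoring update.
beta = np.linalg.lstsq(X, np.log(np.clip(y, 0.05, None)), rcond=None)[0]
for _ in range(25):
    mu = np.exp(X @ beta)
    J = X * mu[:, None]                                  # d(mu)/d(beta)
    step = np.linalg.solve(J.T @ J, J.T @ (y - mu))      # scoring update
    beta += step
    if np.linalg.norm(step) < 1e-10:
        break

print("Estimated regression parameters:", beta)
```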

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 33
1112 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models

Authors: Yungtai Lo

Abstract:

Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.
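
A drastically simplified sketch of the two-part idea (population level only, ignoring the random effects and the alternative positive-part distributions studied in the paper), assuming scikit-learn and simulated data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Part 1 models the probability of any bottle use; part 2 models log milk intake
# among toddlers still drinking from a bottle.  Data are simulated for illustration.
rng = np.random.default_rng(4)
n = 400
intervention = rng.integers(0, 2, size=n)                 # 1 = bottle-weaning arm
p_use = 1 / (1 + np.exp(-(0.8 - 1.0 * intervention)))     # intervention lowers bottle use
uses_bottle = rng.random(n) < p_use
intake = np.where(uses_bottle,
                  np.exp(5.0 - 0.4 * intervention + rng.normal(scale=0.5, size=n)),
                  0.0)

X = intervention.reshape(-1, 1)

# Part 1: logit model for the zero vs. positive outcome.
part1 = LogisticRegression().fit(X, (intake > 0).astype(int))
# Part 2: log-normal model fitted to the positive intakes only.
pos = intake > 0
part2 = LinearRegression().fit(X[pos], np.log(intake[pos]))

print("Logit coefficient (bottle use):", part1.coef_[0][0])
print("Log-intake coefficient (intervention):", part2.coef_[0])
```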

Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve

Procedia PDF Downloads 349
1111 Evaluation of Residual Stresses in Human Face as a Function of Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting body organ shapes from medical imaging does not produce any information regarding the existing residual stresses in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations. Accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in facial tissues is computed, which can counteract the effect of gravity and maintain tissue firmness. Our assumption is that tissue wrinkles caused by aging could be a consequence of decreasing residual stress and thus a failure to counteract gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery. It would indeed help surgeons to estimate tissue changes after surgery.

Keywords: finite element method, growth, residual stress, soft tissue

Procedia PDF Downloads 270
1110 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts

Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti

Abstract:

Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be found. Fused Filament Fabrication (FFF) is the selected demonstrative AM technology due to its great popularity in the industrial manufacturing world. A material model that considers the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-Code file. An Adaptive Mesh Refinement (AMR) based on the octree strategy is used in order to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented due to the orthotropic and heterogeneous constitutive nature of FFF components and because of the differences between the strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.
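
Because the Tsai-Wu criterion is central to the optimization loop, the short Python sketch below evaluates the plane-stress Tsai-Wu failure index for one stress state; the strength values are assumptions for illustration, and F12 uses a common interaction-term estimate rather than a measured value.

```python
import numpy as np

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """
    Plane-stress Tsai-Wu failure index for an orthotropic ply.
    s1, s2, t12: stresses along/transverse to the printing direction and in shear.
    Xt, Xc, Yt, Yc, S: tensile/compressive strengths (entered as positive values)
    and shear strength. Failure is predicted when the index >= 1.
    """
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
    F12 = -0.5 * np.sqrt(F11 * F22)            # common interaction-term estimate
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Illustrative strengths (MPa) for an FFF material; different tensile and
# compressive strengths are precisely what motivates Tsai-Wu here.
idx = tsai_wu_index(s1=30.0, s2=8.0, t12=5.0, Xt=45.0, Xc=55.0, Yt=15.0, Yc=30.0, S=12.0)
print(f"Tsai-Wu failure index: {idx:.2f} ({'failure' if idx >= 1 else 'safe'})")
```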

Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization

Procedia PDF Downloads 63
1109 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing

Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger

Abstract:

This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (g-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing.

Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles

Procedia PDF Downloads 40
1108 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition

Authors: Mohamed Lotfy, Ghada Soliman

Abstract:

Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on Convolutional Neural Networks (CNN) that take the dot-matrix image as input and generate embeddings that are rounded to produce binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the challenge of the limited availability of dotted Arabic expiration date images, we developed a True Type Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on a synthetic dataset of 3287 images and tested on 658 synthetic images, representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry date recognition in the Arabic dot-matrix format using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. Our proposed approach is not limited to Arabic dot-matrix digit recognition but can also be extended to text recognition tasks, such as text classification and sentiment analysis.
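
A minimal sketch of the binary-embedding idea, assuming TensorFlow/Keras: an untrained toy CNN maps a glyph image to a sigmoid vector that is rounded to bits and matched against a hypothetical per-digit codebook. The input size, embedding width and codebook are illustrative assumptions, not the paper's architecture.

```python
import numpy as np
import tensorflow as tf

EMBED_BITS = 8
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(EMBED_BITS, activation="sigmoid"),  # embedding in [0, 1]
])

# Hypothetical codebook assigning each digit (0-9) a distinct binary code.
codebook = np.array([[int(b) for b in format(d, f"0{EMBED_BITS}b")] for d in range(10)])

def recognise(image):
    """Round the CNN embedding to bits and return the nearest codebook digit."""
    bits = np.round(model.predict(image[None, ..., None], verbose=0)[0])
    return int(np.argmin(np.abs(codebook - bits).sum(axis=1)))

print(recognise(np.random.rand(28, 28)))   # untrained model: output is arbitrary
```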

Keywords: computer vision, pattern recognition, optical character recognition, deep learning

Procedia PDF Downloads 93
1107 Study and Simulation of the Thrust Vectoring in Supersonic Nozzles

Authors: Kbab H, Hamitouche T

Abstract:

In recent years, significant progress has been made in the field of aerospace propulsion and propulsion systems. These developments are associated with efforts to enhance the accuracy of the analysis of aerothermodynamic phenomena in the engine. This applies in particular to the flow in the nozzles used. One of the most remarkable processes in this field is thrust vectoring by means of devices able to orient the thrust vector and control the deflection of the exit jet in the engine nozzle. In the proposed study, we are interested in fluidic thrust vectoring using a secondary injection in the nozzle divergent section. This fluid injection causes complex phenomena, such as boundary layer separation, which generates a shock wave in the primary jet upstream of the zone where the primary and secondary jets interact. This causes the deviation of the main flow, and therefore of the thrust vector, with respect to the nozzle axis. In the modeling of fluidic thrust vectoring, various parameters can be used. The Mach number of the primary jet and of the injected fluid, the total pressure ratio, the injection rate, the thickness of the upstream boundary layer, the injector position in the divergent part, and the nozzle geometry are decisive factors in this type of phenomenon. The complexity of the latter challenges researchers to understand the physical phenomena of the turbulent boundary layer encountered in supersonic nozzles, as well as the calculation of its thickness and the friction forces induced on the walls. The present study aims to numerically simulate thrust vectoring by secondary injection using ANSYS Fluent, and then to analyze and validate the results and performance obtained (deflection angle, efficiency, etc.), which are then compared with those obtained by other authors.

Keywords: CD nozzle, TVC, SVC, NPR, SPR, CFD

Procedia PDF Downloads 133
1106 Fluid-Structure Interaction Study of Fluid Flow past Marine Turbine Blade Designed by Using Blade Element Theory and Momentum Theory

Authors: Abu Afree Andalib, M. Mezbah Uddin, M. Rafiur Rahman, M. Abir Hossain, Rajia Sultana Kamol

Abstract:

This paper deals with the analysis of flow past a marine turbine blade which is designed using blade element theory and momentum theory for use in the field of renewable energy. The designed blade is analyzed for various parameters using the FSI module of ANSYS. Computational Fluid Dynamics is used for the study of fluid flow past the blade and other fluidic phenomena such as lift, drag, pressure differentials, and energy dissipation in water. The Finite Element Analysis (FEA) module of ANSYS was used to analyze structural parameters such as stress and stress density, localization point, deflection, and force propagation. A fine mesh is considered in every case for greater accuracy in the results, according to the available computational power. The relevance of design, search and optimization with respect to complex fluid flow and structural modeling is considered and analyzed. The relevance of design and optimization with respect to complex fluid flow for minimum drag force, using the ANSYS Adjoint Solver module, is analyzed as well. A graphical comparison of the above-mentioned parameters using CFD and FEA, and subsequently the FSI technique, is illustrated, and significant conformity between the results is found.

Keywords: blade element theory, computational fluid dynamics, finite element analysis, fluid-structure interaction, momentum theory

Procedia PDF Downloads 301