Search results for: accuracy assessment.
8312 An Assessment of Financial Viability and Sustainability of Hydroponics Using Reclaimed Water Using LCA and LCC
Authors: Muhammad Abdullah, Muhammad Atiq Ur Rehman Tariq, Faraz Ul Haq
Abstract:
In developed countries, sustainability measures are widely accepted and acknowledged as crucial for addressing environmental concerns. Hydroponics, a soilless cultivation technique, has emerged as a potentially sustainable solution because it can reduce water consumption, land use, and environmental impacts. However, hydroponics may not be economically viable, especially when using reclaimed water, which may entail additional costs and risks. This study addresses the critical question of whether hydroponics using reclaimed water can achieve a balance between sustainability and financial viability. Life Cycle Assessment (LCA) and Life Cycle Cost (LCC) analysis will be integrated to assess whether hydroponics is environmentally sustainable and economically viable. LCA is a methodology for assessing the environmental impacts associated with all stages of the life cycle of a commercial product, process, or service, while LCC is an approach that assesses the total cost of an asset over its life cycle, including initial capital costs and maintenance costs. The expected benefits of this study include supporting evidence-based decision-making for policymakers, farmers, and stakeholders involved in agriculture. By quantifying environmental impacts and economic costs, this research will facilitate informed choices regarding the adoption of hydroponics with reclaimed water. It is believed that the outcomes of this research will help achieve a sustainable approach to agricultural production, aligning with sustainability goals while considering economic factors.
Keywords: hydroponic, life cycle assessment, life cycle cost, sustainability
Procedia PDF Downloads 69
8311 Time Efficient Color Coding for Structured-Light 3D Scanner
Authors: Po-Hao Huang, Pei-Ju Chiang
Abstract:
The structured-light 3D scanner is commonly used for measuring the 3D shape of an object. By projecting designed light patterns onto the object, deformed patterns can be captured and used for geometric shape reconstruction. At present, Gray code is the most reliable and commonly used light pattern in structured-light 3D scanners. However, the trade-off between scanning efficiency and accuracy is a long-standing and challenging problem, and the design of the light patterns plays a significant role in both. We therefore propose a novel encoding method integrating color information with Gray code to improve scanning efficiency. We demonstrate that with the proposed method, the scanning time can be reduced to approximately half of that needed by Gray code alone, without loss of precision.
Keywords: gray-code, structured light scanner, 3D shape acquisition, 3D reconstruction
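As background to the patterns the method builds on, the sketch below generates the n-bit binary-reflected Gray-code sequence and the corresponding binary stripe patterns a projector would display. The column-to-stripe mapping is an illustrative assumption, not the authors' exact encoding (which additionally uses color):

```python
def gray_codes(n_bits):
    """Generate the 2**n_bits Gray-code values: consecutive codes differ in one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

def stripe_patterns(n_bits, width):
    """For each bit plane, build a binary stripe pattern across `width` projector columns."""
    codes = gray_codes(n_bits)
    patterns = []
    for bit in range(n_bits - 1, -1, -1):  # most significant bit plane first
        # column c falls into stripe index c * 2**n_bits // width
        row = [(codes[c * (2 ** n_bits) // width] >> bit) & 1 for c in range(width)]
        patterns.append(row)
    return patterns
```

Because consecutive Gray codes differ in exactly one bit, a pixel decoded across the bit planes can be off by at most one stripe, which is what makes Gray code robust for structured light.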
Procedia PDF Downloads 456
8310 Robustified Asymmetric Logistic Regression Model for Global Fish Stock Assessment
Authors: Osamu Komori, Shinto Eguchi, Hiroshi Okamura, Momoko Ichinokawa
Abstract:
Long time-series data on population assessments are essential for global ecosystem assessment because the temporal change of biomass in such a database properly reflects the status of the global ecosystem. However, the available assessment data usually have limited sample sizes, and the ratio of populations with low biomass abundance (collapsed) to those with high abundance (non-collapsed) is highly imbalanced. To allow for the imbalance and uncertainty involved in ecological data, we propose a binary regression model with mixed effects for inferring ecosystem status through an asymmetric logistic model. In the estimation equation, we observe that the weights for the non-collapsed populations are relatively reduced, which in turn puts more importance on the small number of observations of collapsed populations. Moreover, we extend the asymmetric logistic regression model using propensity scores to allow for the sample biases observed in the labeled and unlabeled datasets. This robustified the estimation procedure and improved the model fitting.
Keywords: double robust estimation, ecological binary data, mixed effect logistic regression model, propensity score
Procedia PDF Downloads 264
8309 Damage Assessment of Current Facades in Turkey throughout the Seismic Actions
Authors: Büşra Elibol, İsmail Sait Soyer, Hamid Farrokh Ghatte
Abstract:
The continuity of the structural and non-structural elements within the envelope of a building is one of the fundamental factors governing its behavior during seismic actions. This investigation makes a comparison between the Van and İzmir earthquakes in terms of the damage assessment of various facades. A strong earthquake (magnitude 7.2) struck the city of Van in the east of Turkey on 23 October 2011, and similarly, another strong earthquake (magnitude 6.9) struck the city of İzmir, Turkey, on 30 October 2020. This paper presents the damage assessment of current facade systems on multi-story buildings in Van and İzmir, Turkey. The investigation covers buildings greater than three stories in height, excluding most unreinforced masonry facades. Because a building can have more than one facade system, each facade system is considered individually. The different kinds of damage observed in the facades are discussed and expressed in terms of performance level throughout the seismic actions. Furthermore, standard design guidelines (i.e., the Turkish seismic design code) are required not only for designers but also for installers of facade systems.
Keywords: damage, earthquake, facade, structural element, seismic action
Procedia PDF Downloads 159
8308 Forecasting Impacts on Vulnerable Shorelines: Vulnerability Assessment Along the Coastal Zone of Messologi Area - Western Greece
Authors: Evangelos Tsakalos, Maria Kazantzaki, Eleni Filippaki, Yannis Bassiakos
Abstract:
The coastal areas of the Mediterranean have been extensively affected by the transgressive event that followed the Last Glacial Maximum, and many studies have been conducted on the stratigraphic configuration of coastal sediments around the Mediterranean. The coastal zone of the Messologi area, western Greece, consists of low-relief beaches containing low cliffs and eroded dunes, which, in combination with the rising sea level and the tectonic subsidence of the area, has led to substantial coastal erosion. Coastal vulnerability assessment is a useful means of identifying areas of coastline that are vulnerable to the impacts of climate change and coastal processes, highlighting potential problem areas. Commonly, coastal vulnerability assessment takes the form of an ‘index’ that quantifies the relative vulnerability along a coastline. Here we apply the coastal vulnerability index (CVI) methodology of Thieler and Hammar-Klose, considering geological features, coastal slope, relative sea-level change, shoreline erosion/accretion rates, mean significant wave height, and mean tide range to assess the present-day vulnerability of the coastal zone of the Messologi area. In light of this, an impact assessment is performed under three different sea-level-rise scenarios, and adaptation measures to control climate change events are proposed. This study contributes toward coastal zone management practices in low-lying areas with little data information, assisting decision-makers in adopting the best adaptation options to overcome sea-level-rise impacts in vulnerable areas similar to the coastal zone of Messologi.
Keywords: coastal vulnerability index, coastal erosion, sea level rise, GIS
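The CVI of Thieler and Hammar-Klose combines the six ranked variables as the square root of their product divided by the number of variables. A minimal sketch, assuming each variable has already been ranked on the usual 1 (very low) to 5 (very high) scale; the study-specific ranking thresholds are not reproduced here:

```python
from math import sqrt

def coastal_vulnerability_index(geology, slope, sea_level_change,
                                erosion_rate, wave_height, tide_range):
    """CVI after Thieler & Hammar-Klose: square root of the product of the
    six variable ranks (each 1 = very low ... 5 = very high) divided by 6."""
    ranks = (geology, slope, sea_level_change, erosion_rate, wave_height, tide_range)
    if not all(1 <= r <= 5 for r in ranks):
        raise ValueError("each rank must lie in 1..5")
    product = 1
    for r in ranks:
        product *= r
    return sqrt(product / 6)
```

Segments of coastline are then binned (e.g., by quartile of CVI values) into low to very high vulnerability classes for mapping in a GIS.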
Procedia PDF Downloads 174
8307 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
The quality of a DEM may depend on several factors, such as the data source, the capture method, the processing used to derive it, and the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through punctual (point-based) sampling focused on the vertical component, for which standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data exist. However, it seems more appropriate to carry out this evaluation with a method that takes into account the superficial nature of the DEM and therefore samples surfaces rather than points. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks required to obtain the point cloud used as the reference (PCref) for evaluating the quality of the PCpro. Each PCref consists of a 50 x 50 m patch derived from the registration of 4 different scan stations. The area studied was the Spanish region of Navarra, which covers an area of 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was inverted so that the characteristic shadow circle present in the direct position does not occur.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so a filter had to be applied to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
Procedia PDF Downloads 99
8306 Enabling Non-invasive Diagnosis of Thyroid Nodules with High Specificity and Sensitivity
Authors: Sai Maniveer Adapa, Sai Guptha Perla, Adithya Reddy P.
Abstract:
Thyroid nodules can often be diagnosed with ultrasound imaging, although differentiating between benign and malignant nodules can be challenging for medical professionals. This work suggests a novel approach to increase the precision of thyroid nodule identification by combining machine learning and deep learning. The approach first extracts features from the ultrasound images using a deep learning method known as a convolutional autoencoder. A support vector machine, a type of machine learning model, is then trained on these features. With an accuracy of 92.52%, the support vector machine can differentiate between benign and malignant nodules. This technique may reduce the need for unnecessary biopsies and increase the accuracy of thyroid nodule detection.
Keywords: thyroid tumor diagnosis, ultrasound images, deep learning, machine learning, convolutional auto-encoder, support vector machine
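The classification stage of such a pipeline can be sketched as below. The 64-dimensional latent vectors standing in for the convolutional autoencoder's output are synthetic, and the SVM settings are illustrative assumptions rather than the authors' configuration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for autoencoder output: one 64-dim latent vector per
# ultrasound image. Synthetic Gaussian blobs emulate benign (0) vs malignant (1)
# feature clusters; in the paper these features come from real images.
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
malignant = rng.normal(loc=1.5, scale=1.0, size=(200, 64))
X = np.vstack([benign, malignant])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # SVM replaces a softmax head
accuracy = clf.score(X_te, y_te)
```

The design point is that the autoencoder learns a compact representation without labels, so the labeled data are spent only on the comparatively small SVM.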
Procedia PDF Downloads 56
8305 Quantitative Assessment of Road Infrastructure Health Using High-Resolution Remote Sensing Data
Authors: Wang Zhaoming, Shao Shegang, Chen Xiaorong, Qi Yanan, Tian Lei, Wang Jian
Abstract:
This study conducts a comparative analysis of the spectral curves of asphalt pavements at various aging stages to improve road information extraction from high-resolution remote sensing imagery. By examining the distinguishing capabilities and spectral characteristics, the research aims to establish a pavement information extraction methodology based on China's high-resolution satellite images. The process begins by analyzing the spectral features of asphalt pavements to construct a spectral assessment model suitable for evaluating pavement health. This model is then tested at a national highway traffic testing site in China, validating its effectiveness in distinguishing different pavement aging levels. The study's findings demonstrate that the proposed model can accurately assess road health, offering a valuable tool for road maintenance planning and infrastructure management.
Keywords: spectral analysis, asphalt pavement aging, high-resolution remote sensing, pavement health assessment
Procedia PDF Downloads 19
8304 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis
Authors: Jui-Teng Liao
Abstract:
The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (i.e., the more test items), the higher the reliability and validity it is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which this question format can reliably measure learners' L2 reading comprehension. The present study therefore adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study investigated the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format.
The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of the G theory analyses, the findings indicated that the number of test items has a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions require varying numbers of test items for reliable assessment of learners' L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format
Procedia PDF Downloads 86
8303 Automatic Reporting System for Transcriptome Indel Identification and Annotation Based on Snapshot of Next-Generation Sequencing Reads Alignment
Authors: Shuo Mu, Guangzhi Jiang, Jinsa Chen
Abstract:
Indel analysis for RNA sequencing of clinical samples is easily affected by sequencing errors and software selection. In order to improve the efficiency and accuracy of the analysis, we developed an automatic reporting system for indel recognition and annotation based on image snapshots of transcriptome read alignments. The system includes sequence local assembly and realignment, target-point snapshot, and image-based recognition processes. We integrated a high-confidence indel dataset from several known databases as a training set to improve the accuracy of the image processing, and added a bioinformatic processing module to annotate and filter indel artifacts. The system then automatically generates a report, including data quality levels and image results. Sanger sequencing verification of the reference indel mutations of cell line NA12878 showed that the process achieves 83% sensitivity and 96% specificity. Analysis of the collected clinical samples showed that the interpretation accuracy of the process was equivalent to that of manual inspection, while the processing efficiency was significantly improved. This work shows the feasibility of accurate indel analysis of clinical next-generation sequencing (NGS) transcriptomes. The result may be useful for RNA studies of clinical samples with microsatellite instability in immunotherapy in the future.
Keywords: automatic reporting, indel, next-generation sequencing, NGS, transcriptome
Procedia PDF Downloads 190
8302 A Case Study of Bee Algorithm for Ready Mixed Concrete Problem
Authors: Wuthichai Wongthatsanekorn, Nuntana Matheekrieangkrai
Abstract:
This research proposes the Bee Algorithm (BA) to optimize the Ready Mixed Concrete (RMC) truck scheduling problem from a single batch plant to multiple construction sites. This problem is considered an NP-hard constrained combinatorial optimization problem. The paper provides the details of the RMC dispatching process and its related constraints. BA was then developed to minimize the total waiting time of RMC trucks while satisfying all constraints. The performance of BA is evaluated on two benchmark problems (3 and 5 construction sites) from previous research. The simulation results of BA are compared in terms of efficiency and accuracy with a Genetic Algorithm (GA), and on all problems the BA approach outperforms GA in obtaining the optimal solution. Hence, the BA approach could be practically implemented to obtain the best schedule.
Keywords: bee colony optimization, ready mixed concrete problem, truck scheduling, multiple construction sites
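A minimal, generic sketch of the Bee Algorithm loop (scout bees sampling the space globally, recruit bees searching shrinking neighborhoods around elite sites), applied here to a toy continuous objective standing in for the discrete truck-waiting-time objective; all parameter values are illustrative assumptions, not the authors' tuned settings:

```python
import random

def bees_algorithm(cost, dim, bounds, n_scouts=20, n_elite=4, n_recruits=10,
                   patch=0.5, iters=100, seed=1):
    """Minimal Bees Algorithm sketch: scouts sample at random; recruits
    search shrinking neighborhoods ("patches") around the elite sites."""
    rng = random.Random(seed)
    lo, hi = bounds
    sample = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    sites = sorted((sample() for _ in range(n_scouts)), key=cost)
    best = sites[0]
    for _ in range(iters):
        new_sites = []
        for site in sites[:n_elite]:  # local (exploitation) search
            nbrs = [[min(hi, max(lo, x + rng.uniform(-patch, patch))) for x in site]
                    for _ in range(n_recruits)]
            new_sites.append(min(nbrs + [site], key=cost))
        new_sites += [sample() for _ in range(n_scouts - n_elite)]  # global scouts
        sites = sorted(new_sites, key=cost)
        if cost(sites[0]) < cost(best):
            best = sites[0]
        patch *= 0.95  # neighborhood shrinking
    return best

# Toy stand-in objective: a 3-dimensional sphere function
best = bees_algorithm(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```

For the RMC problem itself, a candidate solution would instead encode a truck dispatch sequence, and the neighborhood search would swap dispatch positions rather than perturb real numbers.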
Procedia PDF Downloads 384
8301 Making a Difference in a Crisis: How the 24-Hour Surgical Ambulatory Assessment Unit Transformed Emergency Care during COVID-19
Authors: Bindhiya Thomas, Rehana Hafeez
Abstract:
Background: The Surgical Ambulatory Unit (SAU), also known as Same Day Emergency Care (SDEC), is an established part of many hospitals, providing a same-day emergency care service to surgical patients who would otherwise have required admission through the A&E. Prior to Covid, the SAU functioned as a 12-hour service, but during the Covid crisis this service was transformed into a 24-hour Surgical Ambulatory Assessment Unit (SAAU). We studied the effects that this change brought about in patient care in our hospital. Objective: The objective of the study was to assess the impact of a 24-hour Surgical Ambulatory Assessment Unit on patient care during the time of Covid, in particular its role in freeing A&E capacity and delivering effective patient care. Methods: We collected two sets of data retrospectively. The first set was collected over a 6-week period when the SAU was functioning at the Princess Royal University Hospital. On March 23rd, 2020, the SAU was transformed into a 24-hour SAAU. Following this transformation, a second set of patient data was collected over a period of 6 weeks. A comparison was made between the data from the 12-hour Surgical Ambulatory Unit and those from after its transformation into a 24-hour facility, examining the effects on the number of patients breaching the four-hour waiting period and on the number of emergency surgical admissions. Results: The 24-hour Surgical Ambulatory Assessment Unit brought a significant reduction in the number of patients breaching the 4-hour waiting period in A&E, from 44% during the period of the 12-hour Surgical Ambulatory care facility to 0% once the 24-hour Surgical Ambulatory Assessment Unit was established. A 28% reduction was also seen in the number of surgical patient admissions from A&E. Conclusions: The 24-hour SAAU was found to have a profound positive impact on the emergency care of surgical patients.
Especially during the Covid crisis, it played a crucial role not only in providing effective and accessible patient care but also in reducing the A&E workload and admissions. It thus proved to be a strategic tool that helped to deal with the immense workload in emergency care during the Covid crisis and freed much-needed headspace, at a time of uncertainty, for the A&E to better configure their services. If sustained, the 24-hour SAAU could be relied on to augment NHS emergency services in the future, especially in the event of another crisis.
Keywords: Princess Royal University Hospital, surgical ambulatory assessment unit, surgical ambulatory unit, same day emergency care
Procedia PDF Downloads 163
8300 Flow Boiling Heat Transfer at Low Mass and Heat Fluxes: Heat Transfer Coefficient, Flow Pattern Analysis and Correlation Assessment
Authors: Ernest Gyan Bediako, Petra Dancova, Tomas Vit
Abstract:
Flow boiling heat transfer remains an important area of research due to its relevance to thermal management systems and other applications. Despite the enormous work done over the years to understand how flow parameters such as mass flux, heat flux, saturation conditions, and tube geometry influence the characteristics of flow boiling heat transfer, there are still many contradictions and a lack of agreement on the actual mechanisms controlling heat transfer and on how flow parameters impact it. This work therefore experimentally investigates the heat transfer characteristics and flow patterns at the low mass fluxes, low heat fluxes, and low saturation pressures that receive less attention in the literature but are prevalent in refrigeration, air-conditioning, and heat pump applications. In this study, a flow boiling experiment was conducted with R134a working fluid in a 5 mm internal diameter stainless steel horizontal smooth tube, with mass flux ranging from 80-100 kg/m2 s, heat flux ranging from 3.55 kW/m2 to 25.23 kW/m2, and a saturation pressure of 460 kPa. Vapor quality ranged from 0 to 1. The well-known flow pattern map created by Wojtan et al. was used to predict the flow patterns observed during the study. The experimental results were compared with well-known flow boiling heat transfer correlations from the literature. The findings show that the heat transfer coefficient was influenced by both mass flux and heat flux. However, for increasing heat flux, nucleate boiling was observed to be the dominant mechanism controlling the heat transfer, especially in the low-vapor-quality region. For increasing mass flux, convective boiling was the dominant mechanism, especially in the high-vapor-quality region.
The study also observed an unusually high heat transfer coefficient at low vapor qualities, which could be due to periodic wetting of the tube walls caused by slug flow and stratified wavy flow patterns. The flow patterns predicted by the Wojtan et al. map were a mixture of slug and stratified wavy, purely stratified wavy, and dryout. A statistical assessment of the experimental data against various well-known correlations from the literature showed that none of them could predict the experimental data with sufficient accuracy.
Keywords: flow boiling, heat transfer coefficient, mass flux, heat flux
Procedia PDF Downloads 115
8299 Neuropsychology of Social Awareness: A Research Study Applied to University Students in Greece
Authors: Argyris Karapetsas, Maria Bampou, Andriani Mitropoulou
Abstract:
The aim of the present work is to study the role of brain function in social awareness processing. The mind controls all psychosomatic functions, and its functioning enables an individual not only to recognize one's own self and propositional attitudes but also to assign such attitudes to other individuals and to consider such observed mental states in the elucidation of behavior. Participants and Methods: Twenty (n=20) undergraduate students (mean age 18 years) were involved in this study. The students participated in a clinical assessment conducted in the Laboratory of Neuropsychology at the University of Thessaly in Volos, Greece. The assessment included both electrophysiological tests (i.e., Event Related Potentials (ERPs), esp. the P300 waveform) and neuropsychological tests (Raven's Progressive Matrices (RPM) and the Sally-Anne test). Results: The initial assessment's results confirmed statistically significant differences between males and females, as well as in score performance on the tests applied. Strong correlations emerged between prefrontal lobe functioning, RPM, the Sally-Anne test, and P300 latencies. Significant dysfunction of the mind was also found with regard to its three dimensions (straight, circular, and helical). At the end of the assessment, the students received consultation and appropriate guidelines to improve their intrapersonal and interpersonal skills. Conclusions: Mind and social awareness phenomena play a vital role in human development and may act as determinants of the quality of one's own life. Brain function is highly correlated with social awareness, and it seems that different sets of brain structures are involved in social behavior.
Keywords: brain activity, emotions, ERPs, social awareness
Procedia PDF Downloads 191
8298 Prediction and Reduction of Cracking Issue in Precision Forging of Engine Valves Using Finite Element Method
Authors: Xi Yang, Bulent Chavdar, Alan Vonseggern, Taylan Altan
Abstract:
Fracture in the hot precision forging of engine valves was investigated in this paper. The entire valve forging procedure is described and the possible cause of the fracture is proposed. Finite element simulation of the forging process was conducted with the commercial finite element code DEFORM™. The effects of material properties, strain rate, and temperature were considered in the FE simulation. Two fracture criteria were discussed and compared based on the accuracy and reliability of the FE simulation results. The selected criterion predicted the fracture location and the trend of increasing damage with good accuracy, matching the experimental observation. An additional modification of the punch shapes was proposed to further reduce the tendency to fracture in forging. The finite element comparison shows great potential for such an application in mass production.
Keywords: hot forging, engine valve, fracture, tooling
Procedia PDF Downloads 276
8297 Remaining Useful Life (RUL) Assessment Using Progressive Bearing Degradation Data and ANN Model
Authors: Amit R. Bhende, G. K. Awari
Abstract:
Remaining useful life (RUL) prediction is one of the key technologies for realizing prognostics and health management, which is widely applied in many industrial systems to ensure high system availability over their life cycles. The present work proposes a data-driven method of RUL prediction based on multiple health state assessment for rolling element bearings. Bearing degradation data from run to failure under three different conditions are used, and a RUL prediction model is built separately for each condition. Feed-forward back-propagation neural network models are developed for the prediction modeling.
Keywords: bearing degradation data, remaining useful life (RUL), back propagation, prognosis
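A feed-forward network trained with plain back-propagation, of the kind named above, can be sketched as follows; the one-feature degradation signal, the network size, and the learning rate are hypothetical stand-ins for the bearing run-to-failure data and the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical degradation feature (e.g., a vibration-based health indicator
# rising over life) and normalized RUL target; real inputs would be bearing data.
t = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # normalized operating time
X = t + 0.02 * rng.standard_normal(t.shape)     # noisy health indicator
y = 1.0 - t                                     # remaining useful life (normalized)

# One-hidden-layer feed-forward net trained with plain back-propagation.
W1 = 0.5 * rng.standard_normal((1, 8))
b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1))
b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
    pred = h @ W2 + b2                  # forward pass, output layer
    err = pred - y                      # dLoss/dpred for (1/2) * MSE
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # back-propagate through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2      # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

In the paper's setting, one such model would be trained per operating condition, with the multiple health states supplying intermediate targets rather than a single linear RUL ramp.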
Procedia PDF Downloads 434
8296 An Ensemble-based Method for Vehicle Color Recognition
Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi
Abstract:
The vehicle's color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods, which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper four CNNs with different architectures, each performing well on different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stacked generalization algorithm as an ensemble technique. As a result, the final model performs better in vehicle color identification than each CNN individually. The evaluation results, in terms of overall average accuracy and accuracy variance, show that the proposed method outperforms the state-of-the-art rivals.
Keywords: vehicle color recognition, ensemble algorithm, stacked generalization, convolutional neural network
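Stacked generalization of the kind described can be sketched with scikit-learn's StackingClassifier, in which a meta-learner is trained on the base models' outputs. Cheap classifiers and synthetic data stand in for the four CNNs and the vehicle images, so every model choice here is an illustrative assumption:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-image feature vectors labeled with a color class.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Base learners play the role of the four CNNs; the final estimator is the
# meta-learner that combines their (cross-validated) outputs.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("knn", KNeighborsClassifier(5)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
)
accuracy = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

The meta-learner sees where each base model is reliable, which is precisely how the ensemble can exploit networks that each excel on different color classes.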
Procedia PDF Downloads 81
8295 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length
Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar
Abstract:
An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an EAL (Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT units (Kodak 9000 3D). Root apices were then shaved and the apical constrictions viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between CBCT EPWL and control length but the difference was too small to be clinically significant. From the Bland Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential of error. In measuring file lengths, RVG had a bigger window of 95% limits of agreement compared to CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT showed that it is acceptable for use in the estimation of preoperative working length. (2) Small FOV CBCT may be used in working length determination but it is not as accurate as the currently practiced method of using the EAL. (3) It is also more accurate than RVG in measuring file lengths.
Keywords: accuracy, CBCT, endodontics, measurement
Procedia PDF Downloads 307
8294 Identification of Breast Anomalies Based on Deep Convolutional Neural Networks and K-Nearest Neighbors
Authors: Ayyaz Hussain, Tariq Sadad
Abstract:
Breast cancer (BC) is one of the most widespread ailments among females globally. Early prognosis of BC can decrease the mortality rate, and accurate identification of benign tumors can avoid unnecessary biopsies and further treatment of patients under investigation. However, due to variations in the images, it is a tough job to isolate cancerous cases from normal and benign ones. Machine learning techniques are widely employed in the classification of BC patterns and in prognosis. In this research, a deep convolutional neural network (DCNN) with the AlexNet architecture is employed to obtain more discriminative features from breast tissues. To achieve higher accuracy, K-nearest neighbor (KNN) classifiers are employed as a substitute for the softmax layer in deep learning. The proposed model is tested on a widely used breast image database, the MIAS dataset, and achieved 99% accuracy.
Keywords: breast cancer, DCNN, KNN, mammography
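Replacing a softmax layer with a KNN vote over deep features, as described, can be sketched as follows; the 32-dimensional synthetic vectors are hypothetical stand-ins for AlexNet penultimate-layer features of mammogram patches:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=5):
    """Classify each query vector by majority vote among its k nearest
    training vectors (Euclidean distance) - the role the softmax layer
    would otherwise play."""
    preds = []
    for q in query:
        d = np.linalg.norm(train_X - q, axis=1)        # distance to every prototype
        nearest = train_y[np.argsort(d)[:k]]           # labels of k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])          # majority vote
    return np.array(preds)

rng = np.random.default_rng(0)
# Hypothetical deep features: benign (0) vs malignant (1) clusters.
benign = rng.normal(0.0, 1.0, (100, 32))
malignant = rng.normal(2.0, 1.0, (100, 32))
X = np.vstack([benign, malignant])
y = np.repeat([0, 1], 100)
queries = np.vstack([rng.normal(0.0, 1.0, (20, 32)),
                     rng.normal(2.0, 1.0, (20, 32))])
pred = knn_predict(X, y, queries)
```

The appeal of this swap is that KNN makes no parametric assumption about the decision boundary in feature space, which can help when the deep features cluster well but not linearly.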
Procedia PDF Downloads 135
8293 Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation
Authors: Abdolvahab Ehsani Rad, Mohd Shafry Mohd Rahim, Alireza Norouzi
Abstract:
Medical image analysis is one of the major applications of computer image processing. Analyzing medical images involves several steps, of which segmentation is among the most challenging and important. In this paper, a segmentation method is proposed for dental radiograph images. Thresholding is applied to simplify the images, and morphological opening of the binary image is performed to eliminate unnecessary regions. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph. Segmentation is then completed by applying the level set method to each extracted image. Experimental results with 90% accuracy demonstrate that the proposed method achieves high accuracy and promising results. Keywords: integral projection, level set method, morphological operation, segmentation
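The integral projection step can be illustrated on a toy binary image; the two-rectangle "radiograph", the valley-splitting rule, and all dimensions below are hypothetical simplifications of what a real dental radiograph would need (smoothing, adaptive thresholds):

```python
import numpy as np

# Toy binary radiograph: two "teeth" (bright rectangles) separated by a gap.
img = np.zeros((40, 60), dtype=np.uint8)
img[5:35, 5:25] = 1   # tooth 1
img[5:35, 35:55] = 1  # tooth 2

# Vertical integral projection: summing each column collapses the image to a
# 1-D profile whose minima mark the inter-tooth gaps.
profile = img.sum(axis=0)

# Split at the deepest valley between the outermost bright columns.
nz = np.flatnonzero(profile)
interior = profile[nz[0]:nz[-1] + 1]
gap = np.where(interior == interior.min())[0] + nz[0]
split_col = int(gap.mean())
tooth_left, tooth_right = img[:, :split_col], img[:, split_col:]
print(split_col)  # falls inside the 25-34 gap between the two teeth
```

A horizontal projection (row sums) separates upper from lower jaw in the same way; each extracted sub-image is then passed to the level set stage.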
Procedia PDF Downloads 314
8292 Reinforcement Learning for Classification of Low-Resolution Satellite Images
Authors: Khadija Bouzaachane, El Mahdi El Guarmah
Abstract:
The classification of low-resolution satellite images is a worthwhile and fertile field that attracts plenty of researchers due to its importance in monitoring geographical areas. It can serve several purposes, such as disaster management, military surveillance, and agricultural monitoring. The main objective of this work is to classify low-resolution satellite images efficiently and accurately using novel deep learning and reinforcement learning techniques. The images include roads, residential areas, industrial areas, rivers, sea lakes, and vegetation. To achieve that goal, we carried out experiments on Sentinel-2 images, targeting both high accuracy and efficient classification. Our proposed model achieved 91% accuracy on the testing dataset along with good land cover classification. In terms of per-class precision, we obtained 93% for river, 92% for residential, 97% for industrial, 96% for forest, 87% for annual crop, 84% for herbaceous vegetation, 85% for pasture, 78% for highway, and 100% for sea lake. Keywords: classification, deep learning, reinforcement learning, satellite imagery
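Per-class precision figures like those quoted are derived from a confusion matrix as TP / (TP + FP). A minimal sketch with an illustrative matrix (the counts below are invented, not the paper's):

```python
import numpy as np

# Toy confusion matrix for 3 of the Sentinel-2 classes
# (rows = true class, columns = predicted class).
classes = ["river", "residential", "forest"]
cm = np.array([
    [93,  4,  3],   # true river
    [ 5, 92,  3],   # true residential
    [ 2,  2, 96],   # true forest
])

# Precision for class j = correctly predicted j / everything predicted as j,
# i.e. the diagonal divided by the column sums.
precision = cm.diagonal() / cm.sum(axis=0)
for name, p in zip(classes, precision):
    print(f"{name:12s} precision = {p:.2f}")
```

Recall would use the row sums instead (`cm.sum(axis=1)`); reporting both guards against a classifier that inflates precision on one class by rarely predicting it.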
Procedia PDF Downloads 211
8291 Evaluation of Commercial Back-analysis Package in Condition Assessment of Railways
Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman
Abstract:
Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of existing tracks have imposed costly maintenance actions on the railway sector. Developing a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of the substructure layers through back-analysis. Although different commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test was developed and validated against available field data. Then, a virtual experimental database (including 218 sets of FWD testing data) was generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated by varying the moduli of each layer of the track substructure over a predefined range.
The BAKFAA predictions were compared against cone penetration test (CPT) data (available in the literature; conducted near Leominster station on the same section where the FWD tests were performed). The results reveal that BAKFAA overestimates the moduli of each substructure layer. To adjust BAKFAA to the CPT data, this study introduces a correlation model that makes BAKFAA applicable to railway applications. Keywords: back-analysis, BAKFAA, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)
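The back-calculation loop that tools like BAKFAA automate can be sketched with a deliberately simplified forward model: the influence coefficients, seed moduli, and coordinate-descent update below are all assumptions for illustration, whereas real back-analysis software evaluates deflections with multilayer elastic theory.

```python
import numpy as np

# Simplified forward model: surface deflection at each geophone is a weighted
# sum of layer compliances (1/E). Coefficients are hypothetical.
geophone_weights = np.array([
    [0.9, 0.5, 0.2],   # geophone under the load plate
    [0.4, 0.6, 0.3],
    [0.1, 0.3, 0.5],   # far geophone, dominated by the deeper layers
])

true_E = np.array([300.0, 150.0, 60.0])       # MPa, assumed "field" moduli
measured = geophone_weights @ (1.0 / true_E)  # synthetic FWD deflection bowl

def rms_misfit(E):
    """RMS gap between the predicted and measured deflection bowls."""
    return np.sqrt(np.mean((geophone_weights @ (1.0 / E) - measured) ** 2))

# Crude coordinate-descent back-calculation: perturb one modulus at a time
# and keep any change that shrinks the bowl misfit.
E = np.array([200.0, 200.0, 200.0])  # seed moduli
for _ in range(200):
    for j in range(3):
        for factor in (0.95, 1.05):
            trial = E.copy()
            trial[j] *= factor
            if rms_misfit(trial) < rms_misfit(E):
                E = trial
print(np.round(E))  # should move toward the assumed field moduli
```

The same loop structure underlies the real tools; only the forward model (and hence each misfit evaluation) is far more expensive, which is why seed values and search strategy matter for convergence.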
Procedia PDF Downloads 128
8290 Estimation and Restoration of Ill-Posed Parameters for Underwater Motion Blurred Images
Authors: M. Vimal Raj, S. Sakthivel Murugan
Abstract:
Underwater images suffer quality degradation due to environmental conditions. One of the major problems in an underwater image is motion blur caused by the imaging device or the movement of the object. To rectify this after imaging, the parameters of the blurred image must be estimated; the point spread function is therefore estimated from properties of the image spectrum. To improve the estimation accuracy of the parameters, an Optimized Polynomial Lagrange Interpolation (OPLI) method is applied after measuring the angle and length of motion blur in the images. The data were collected from real-time environments in Chennai and then processed. The proposed OPLI method shows better accuracy than the existing classical cepstral, Hough, and Radon transform estimation methods for underwater images. Keywords: image restoration, motion blur, parameter estimation, Radon transform, underwater
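The classical cepstral estimate of blur length that OPLI refines can be illustrated in one dimension: a uniform motion-blur kernel stamps sinc-shaped notches onto the spectrum, which appear as a strong negative spike in the cepstrum at a quefrency equal to the blur length. Signal length, blur length, and the notch-search window below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 256, 12                       # signal length, true blur length (samples)

signal = rng.normal(size=N)
psf = np.ones(L) / L                 # 1-D uniform motion-blur kernel
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, N)))

# Real cepstrum: IFFT of the log magnitude spectrum. The periodic dips the
# boxcar PSF leaves in the spectrum collapse into a deep notch at lag L.
spectrum = np.abs(np.fft.fft(blurred)) + 1e-8   # epsilon guards log(0)
cepstrum = np.real(np.fft.ifft(np.log(spectrum)))

# Search positive quefrencies away from the origin for the deepest notch.
search = cepstrum[2:N // 2]
estimated_L = int(np.argmin(search)) + 2
print(estimated_L)
```

In 2-D the same idea applies along the blur direction, which is first recovered from the orientation of the spectral stripes (e.g. via the Radon or Hough transform the abstract compares against).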
Procedia PDF Downloads 173
8289 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement
Authors: Rajkumar Ghosh
Abstract:
Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on conventional techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. Examining existing research and studies, it aims to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data.
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality issues, modelling uncertainties, and computational complexity. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics. Keywords: earthquake, out-of-sequence thrust, disaster, human life
Procedia PDF Downloads 74
8288 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case
Authors: Besma Khalfoun
Abstract:
In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. 
Before sharing the model updates with the FL server, SAFER adds optimal noise based on a re-identification risk assessment. Our approach achieves a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model. Keywords: federated learning, privacy risk assessment, re-identification risk, privacy-preserving mechanisms, local differential privacy, human activity recognition
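The local-differential-privacy step such a mechanism builds on can be sketched as clip-then-perturb. The clipping norm, the Laplace mechanism, and the fixed epsilon values below are generic LDP assumptions for illustration; the paper's adaptive, risk-driven choice of noise is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_update(weights, clip_norm=1.0, epsilon=1.0):
    """Clip a model update to bound its L1 sensitivity, then add Laplace
    noise with scale clip_norm / epsilon -- the standard local-DP recipe.
    (The adaptive, per-user choice of epsilon is where a SAFER-style
    mechanism would plug in its re-identification risk assessment.)"""
    w = np.asarray(weights, dtype=float)
    norm = np.abs(w).sum()
    if norm > clip_norm:
        w = w * (clip_norm / norm)            # bound the L1 norm
    noise = rng.laplace(scale=clip_norm / epsilon, size=w.shape)
    return w + noise

update = np.array([0.8, -0.3, 0.5, 0.1])       # one layer's weight update
for eps in (0.1, 1.0, 10.0):                   # smaller eps = stronger privacy
    noisy = privatize_update(update, epsilon=eps)
    print(eps, np.round(noisy, 2))
```

Only the noisy vector leaves the device; the server averages many such updates, so the per-user noise partly cancels while each individual update stays protected.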
Procedia PDF Downloads 10
8287 The Role of Cyfra 21-1 in Diagnosing Non Small Cell Lung Cancer (NSCLC)
Authors: H. J. T. Kevin Mozes, Dyah Purnamasari
Abstract:
Background: Lung cancer is the fourth most common cancer in Indonesia, and 85% of all lung cancer cases are non-small cell lung cancer (NSCLC). The indistinct signs and symptoms of NSCLC sometimes lead to misdiagnosis. The gold standard for the diagnosis of NSCLC is histopathological biopsy, which is invasive. Cyfra 21-1 is a tumor marker found on the intermediate filament proteins of epithelial cells. The accuracy of Cyfra 21-1 in diagnosing NSCLC is not yet established, so this report seeks to answer that question. Methods: Literature searching was done using the online databases ProQuest and PubMed. Literature was then selected according to inclusion and exclusion criteria, and the selected studies were appraised using the criteria of validity, importance, and applicability. Results: Of the six journals appraised, five are valid. Sensitivity values across the five studies range from 50% to 84.5%, while specificity ranges from 87.8% to 94.4%. The likelihood ratios across the appraised studies range from 5.09 to 10.54, categorized as intermediate to high. Conclusion: Serum Cyfra 21-1 is a sensitive and very specific tumor marker for the diagnosis of non-small cell lung cancer (NSCLC). Keywords: Cyfra 21-1, diagnosis, non-small cell lung cancer, NSCLC, tumor marker
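The positive likelihood ratio behind the "intermediate to high" grading is computed as LR+ = sensitivity / (1 − specificity). A quick sketch; the pairings of sensitivity and specificity below are illustrative combinations of the reported bounds and need not reproduce the study-specific values:

```python
def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity): how much a positive
    Cyfra 21-1 result multiplies the odds of NSCLC."""
    return sensitivity / (1.0 - specificity)

# Illustrative combinations of the reported bounds.
print(round(positive_likelihood_ratio(0.50, 0.878), 2))   # lowest reported pair
print(round(positive_likelihood_ratio(0.845, 0.92), 2))
```

Because LR+ divides by (1 − specificity), the high specificity (87.8–94.4%) is what drives the strong ratios despite the modest sensitivity at the low end.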
Procedia PDF Downloads 231
8286 “It Just Feels Risky”: Intuition vs Evidence in Child Sexual Abuse Cases. Proposing an Empirically Derived Risk and Protection Protocol
Authors: Christian Perrin, Nicholas Blagden, Louise Allen, Sarah Impey
Abstract:
Social workers in the UK, and professionals globally, face a particular challenge when dealing with allegations of child sexual abuse (CSA) in the community. In the absence of a conviction or incontestable evidence, staff can often find themselves unable to take decisive action to remove a child from harm, even though there may be a credible threat to the child's welfare. Conversely, practitioners may over-estimate risk through fear of being held accountable for harm. This is, in part, due to the absence of a structured, evidence-based risk assessment tool that can predict the likelihood of a person committing child sexual abuse. Such assessments are often conducted by forensic professionals who use offence-specific data and personal history information to calculate risk; in cases underpinned only by allegations, this mode of assessment is not viable. There are further ethical issues surrounding the assessment of risk in this area which require expert consideration and sensitive planning. This paper explores this entangled problem within the wider call to prevent sexual abuse and child sexual abuse in the community. To this end, 32 qualitative interviews were undertaken with social workers dealing with CSA cases. Results were analysed using thematic analysis and operationalised to formulate a risk and protection protocol for use in case management. This paper reports on early findings concerning the protocol's reliability. Implications for further research and practice are discussed. Keywords: sexual offending, child sexual offence, offender rehabilitation, risk assessment, offence prevention
Procedia PDF Downloads 107
8285 Reviewing Performance Assessment Frameworks for Urban Sanitation Services in India
Authors: Gaurav Vaidya, N. R. Mandal
Abstract:
The UN Summit in 2000 resolved to provide access to sanitation to all of humanity as part of the Millennium Development Goals (2015). However, more than one third of the world's population still did not have access to basic sanitation facilities by 2015. It will therefore be a gigantic challenge to achieve Goal 6 of the UN Sustainable Development Goals: to ensure availability and sustainable management of sanitation for all by the year 2030. Countries attempt to find their own ways of meeting this challenge of providing access to safe sanitation and, to monitor their actions, have prepared various types of performance assessment frameworks (PAF). India introduced Service Level Benchmarking (SLB) in 2010 to set targets and achieve the goals of the NUSP. Further, a method of reviewing performance was introduced as 'Swachh Sarvekshan' (Cleanliness Surveys) in 2016, with its guidelines revised in 2017. This study, as a first step, reviews the documents in use in India, concluding that the frameworks adopted are based on setting targets, allocating finances, and measuring performance against the targets set. However, they do not address sanitation needs holistically: areas and aspects not targeted through projects are not covered in the performance assessment. In this context, as a second step, this study reviews the literature on performance assessment frameworks for urban sanitation in selected other countries and compares it with that of India. This comparative review identifies unaddressed aspects as well as inadequacies of the parameters used in the Indian context. Thirdly, to restructure the performance assessment process and develop an index for urban sanitation, research on other urban services, such as health and education, was studied, focusing on methods of measuring under-performance.
As a fourth step, a tentative modified framework is suggested, drawing on the understanding above, using the stages of urban Sanitation Service Chain Management (SSCM) and a modified set of parameters derived from the literature reviewed in the first and second steps. This paper thus reviews the existing literature on SSCM procedures and on performance indices in sanitation and other urban services, and identifies a tentative list of parameters and a framework for measuring under-performance in sanitation services. This may aid the preparation of a Service Delivery Under-performance Index (SDUI) in the future. Keywords: assessment, performance, sanitation, services
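One way such an SDUI could be computed is as a weighted, benchmark-normalized shortfall across service-chain parameters. The parameters, benchmarks, and weights below are entirely hypothetical; the paper only proposes deriving such a list from the sanitation service chain.

```python
# Hypothetical parameters along the sanitation service chain, each with a
# measured value, a benchmark, and a weight (weights sum to 1).
parameters = {
    "containment_coverage": {"value": 0.70, "benchmark": 1.00, "weight": 0.3},
    "safe_emptying":        {"value": 0.40, "benchmark": 1.00, "weight": 0.3},
    "treatment_capacity":   {"value": 0.55, "benchmark": 0.90, "weight": 0.4},
}

# Under-performance per parameter = shortfall from benchmark, floored at 0;
# SDUI = weighted sum, so 0 means every benchmark is met and 1 is worst case.
sdui = sum(
    p["weight"] * max(0.0, (p["benchmark"] - p["value"]) / p["benchmark"])
    for p in parameters.values()
)
print(round(sdui, 3))  # -> 0.426 for these illustrative inputs
```

Normalizing each shortfall by its benchmark keeps parameters with different units comparable, and flooring at zero prevents over-performance on one stage from masking failure on another.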
Procedia PDF Downloads 145
8284 The Twain Shall Meet: First Year Writing Skills in Senior Year Project Design
Authors: Sana Sayed
Abstract:
The words objectives, outcomes, and assessment are commonplace in academia. Educators, especially those who use their emotional intelligence as a teaching tool, strive to find creative and innovative ways to connect with their students while meeting the objectives, outcomes, and assessment measures of their respective courses. However, what happens to these outcomes once the objectives have been met, students have completed a specific course, and generic letter grades have been generated? How can students' knowledge and acquired skills be assessed across semesters and years of study, up to their final year before graduation? Given the courses students complete across departments and disciplines, how can these outcomes be measured, or at least maintained, across the curriculum? This research-driven paper traces the key outcomes of required first-year writing courses through two required senior-level civil engineering design courses at the American University of Sharjah, in the United Arab Emirates. The purpose of this research is two-fold: (1) to assess specific learning outcomes through a case study of courses from two different disciplines in two distinct years of study, and (2) to demonstrate how learning across the curriculum fosters lifelong proficiencies among graduating students that are aligned with the university's mission statement. Keywords: assessment, learning across the curriculum, objectives, outcomes
Procedia PDF Downloads 300
8283 Fuzzy Logic in Detecting Children with Behavioral Disorders
Authors: David G. Maxinez, Andrés Ferreyra Ramírez, Liliana Castillo Sánchez, Nancy Adán Mendoza, Carlos Aviles Cruz
Abstract:
This research describes the use of fuzzy logic in the detection, assessment, analysis, and evaluation of children with behavioral disorders. It shows how to acquire and analyze ambiguous, vague, uncertainty-laden data from the input variables to obtain an accurate assessment result for each of the typologies presented by children with behavior problems. The behavior disorders analyzed in this paper are: hyperactivity (H), attention deficit with hyperactivity (DAH), conduct disorder (TD), and attention deficit (AD). Keywords: alteration, behavior, centroid, detection, disorders, economic, fuzzy logic, hyperactivity, impulsivity, social
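The "centroid" keyword points at standard Mamdani-style defuzzification, which turns vague, overlapping membership grades into a single crisp score. A minimal sketch; the membership functions, rule strengths, and 0-10 severity scale below are invented for illustration:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical output universe: severity of hyperactivity indicators, 0-10.
x = np.linspace(0.0, 10.0, 1001)
mild, moderate, severe = tri(x, 0, 2, 5), tri(x, 2, 5, 8), tri(x, 5, 8, 10)

# Suppose the rule base fired with these strengths (vague questionnaire
# answers mapped through input membership functions -- values made up here).
firing = {"mild": 0.2, "moderate": 0.7, "severe": 0.3}

# Mamdani inference: clip each output set at its rule strength, aggregate
# with max, then defuzzify with the centroid (centre of gravity).
aggregated = np.maximum.reduce([
    np.minimum(mild, firing["mild"]),
    np.minimum(moderate, firing["moderate"]),
    np.minimum(severe, firing["severe"]),
])
centroid = float(np.sum(x * aggregated) / np.sum(aggregated))
print(round(centroid, 2))  # crisp severity score on the 0-10 scale
```

The centroid absorbs partial, conflicting evidence gracefully: a strong "moderate" firing with weaker "mild" and "severe" contributions yields a score a little above the middle of the scale rather than a hard category switch.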
Procedia PDF Downloads 562