Search results for: discriminate accuracy

2976 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set

Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world. For those users, social and even practical life has become interrelated with these platforms, and their interaction with social networking has affected their lives permanently. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity of social networking has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause substantial real-world damage to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines a minimal set of the main factors that influence the detection of fake accounts on Twitter; the selected factors are then applied using different classification techniques, the results of these techniques are compared, and the most accurate algorithm is selected according to the accuracy of its results. The study is compared with recent research in the same area, and this comparison confirms the accuracy of the proposed approach. We claim that this study can be applied continuously on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to other social networking sites such as Facebook with minor changes, discussed in this paper, according to the nature of each network.
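
As a hedged illustration of the workflow this abstract describes, the sketch below compares several classifiers on a tabular account-feature set and keeps the most accurate one. The feature names, file name, and label column are hypothetical placeholders, not the paper's actual minimized feature set.

```python
# Sketch: comparing classification techniques on a minimal account-feature set
# and selecting the most accurate one. All names below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical minimal feature set extracted per account.
features = ["followers_count", "friends_count", "statuses_count",
            "account_age_days", "has_default_profile_image"]
df = pd.read_csv("twitter_accounts.csv")          # hypothetical data file
X, y = df[features], df["is_fake"]

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
# The most accurate algorithm would then be selected for deployment.
```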

Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques

Procedia PDF Downloads 414
2975 The Correlation between Three-Dimensional Implant Positions and Esthetic Outcomes of Single-Tooth Implant Restoration

Authors: Pongsakorn Komutpol, Pravej Serichetaphongse, Soontra Panmekiate, Atiphan Pimkhaokham

Abstract:

Statement of Problem: The important parameters of esthetic assessment in anterior maxillary implants include the pink esthetics of the gingiva and the white esthetics of the restoration, while the three-dimensional (3D) implant position has recently been recognized as a key to successful implant treatment. However, to our knowledge, no publication has demonstrated the relation between esthetic outcome and 3D implant position. Objectives: To investigate the correlation between the positional accuracy of single-tooth implant restorations (STIR) in all 3 dimensions and their esthetic outcomes. Materials and Methods: Data from 17 patients who had an STIR at a central incisor with a pristine contralateral tooth were included in this study. Intraoral photographs, dental models, and cone beam computed tomography (CBCT) images were retrieved. The esthetic outcome was assessed in accordance with the pink esthetic score and white esthetic score (PES/WES), while the position of the implant in each dimension (mesiodistal, labiolingual, apicocoronal) was evaluated and defined as 'right' or 'wrong' according to the ITI consensus conference by one investigator using CBCT data. The difference in mean score between right and wrong positions in all dimensions was analyzed by the Mann-Whitney U test, with 0.05 as the significance level of the study. Results: The average PES/WES score was 15.88 ± 1.65, which is considered clinically acceptable. The average PES/WES scores for implants placed right in 3, 2, and 1 dimensions were 16.71, 15.75, and 15.17, respectively. None of the implants was placed wrong in all three dimensions. A statistically significant difference in PES/WES score was found between implants placed right in 3 dimensions and in 1 dimension (p = 0.041). Conclusion: This study supports the principle of 3D implant positioning: the more properly the implant was placed, the better the esthetic outcome.
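
The group comparison described above can be reproduced in form with SciPy's Mann-Whitney U test; this is a minimal sketch with made-up score arrays, not the study's data.

```python
# Sketch: comparing PES/WES scores between implants placed right in 3 dimensions
# and in only 1 dimension (alpha = 0.05). Score arrays are placeholders.
from scipy.stats import mannwhitneyu

pes_wes_right_3d = [17, 18, 16, 17, 16, 17]   # hypothetical scores
pes_wes_right_1d = [15, 14, 16, 15, 15, 16]   # hypothetical scores

stat, p = mannwhitneyu(pes_wes_right_3d, pes_wes_right_1d, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # p < 0.05 would indicate a significant difference
```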

Keywords: accuracy, dental implant, esthetic, 3D implant position

Procedia PDF Downloads 177
2974 Application of Grey Theory in the Forecast of Facility Maintenance Hours for Office Building Tenants and Public Areas

Authors: Yen Chia-Ju, Cheng Ding-Ruei

Abstract:

This study took an office building as its subject and used grey theory to explore the responsive work-order repair requests for facilities and equipment in offices and public areas, with the purpose of providing a reference for office building owners, executive managers, property management companies, and mechanical and electrical contractors in selecting and assessing forecast models. The important conclusions of this study are summarized as follows: 1. Grey relational analysis ranks the importance of the six facility categories by number of repairs in the order of power systems, building systems, water systems, air conditioning systems, fire systems, and manpower dispatch. In terms of facilities maintenance hours, the order is power systems, building systems, water systems, air conditioning systems, manpower dispatch, and fire systems. 2. The GM(1,N) and regression methods took maintenance hours as the dependent variable and repair number, leased area, and tenant number as independent variables, and conducted single-month forecasts based on 12 data points from January to December 2011. From the verification results, the mean absolute error and average accuracy of GM(1,N) were 6.41% and 93.59%; those of the regression model were 4.66% and 95.34%, indicating that both have highly accurate forecast capability.
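
The verification metrics quoted above pair an error percentage with an accuracy that is its complement to 100%, consistent with a mean absolute percentage error (MAPE). A minimal sketch of that computation, with placeholder values:

```python
# Sketch: forecast error metrics as implied by the reported pairs
# (6.41%/93.59% and 4.66%/95.34%). Values below are placeholders.
import numpy as np

actual = np.array([120.0, 135.0, 128.0, 140.0])      # hypothetical maintenance hours
forecast = np.array([126.0, 130.0, 133.0, 136.0])    # hypothetical model output

mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(f"mean absolute percentage error = {mape:.2f}%")
print(f"average accuracy = {100 - mape:.2f}%")
```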

Keywords: grey theory, forecast model, Taipei 101, office buildings, property management, facilities, equipment

Procedia PDF Downloads 443
2973 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the number of input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to determine the exact relationships between the dependent parameter, SPP, and the three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells. Although linear regression model 2 (with four coefficients) is more complex and contains an additional term compared with linear regression model 1 (with three coefficients), it did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most cases. Data from nine wells in the Middle East were used to run the developed models, all of which gave satisfactory results, with the multivariable polynomial quadratic regression model giving the best and most accurate results. These models are useful not only to monitor and predict SPP values with accuracy but also to check the integrity of the well hydraulics early and to take corrective action should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
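
Two building blocks of the approach can be sketched briefly: the Herschel-Bulkley stress law behind the frictional pressure term, and a least-squares fit of a six-coefficient quadratic form in the dimensionless groups. The particular quadratic terms chosen and all sample values below are assumptions for illustration, not the paper's data or final model.

```python
# Sketch: Herschel-Bulkley rheology plus a quadratic regression of SPP on the
# dimensionless groups (RPMd, TRQd, DL). Values and terms are placeholders.
import numpy as np

def herschel_bulkley_stress(gamma_dot, tau_y, K, n):
    """Shear stress of a Herschel-Bulkley fluid: tau = tau_y + K * gamma_dot**n."""
    return tau_y + K * gamma_dot**n

# Hypothetical observations: columns are RPMd, TRQd, DL; target is SPP (psi).
X = np.array([[0.8, 1.2, 0.02], [1.1, 1.5, 0.05], [0.9, 1.1, 0.03],
              [1.3, 1.8, 0.04], [1.0, 1.4, 0.01], [1.2, 1.6, 0.06],
              [0.7, 1.0, 0.02]])
y = np.array([2100.0, 2550.0, 2200.0, 2900.0, 2400.0, 2700.0, 2000.0])

# One possible six-coefficient quadratic form:
# SPP ~ a0 + a1*RPMd + a2*TRQd + a3*DL + a4*RPMd**2 + a5*TRQd**2
A = np.column_stack([np.ones(len(y)), X[:, 0], X[:, 1], X[:, 2],
                     X[:, 0]**2, X[:, 1]**2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("regression coefficients:", coeffs)
```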

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 82
2972 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public-domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule, and redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, among which we found the Random Forest algorithm to be the better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties; we expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
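
A hedged sketch of the descriptor-plus-Random-Forest pipeline, assuming the RDKit and scikit-learn libraries; the SMILES strings, labels, and the handful of descriptors shown stand in for the study's much larger, iteratively pruned descriptor array.

```python
# Sketch: numerical descriptors from SMILES feeding a Random Forest model.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]   # placeholder molecules
labels = [0, 0, 1]                                      # placeholder activity classes

def featurize(smi):
    mol = Chem.MolFromSmiles(smi)
    # A few representative descriptors; the study generated a much larger array.
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.NumHDonors(mol), Descriptors.NumHAcceptors(mol),
            Descriptors.TPSA(mol)]

X = np.array([featurize(s) for s in smiles])
model = RandomForestClassifier(n_estimators=500, random_state=0)  # averages many
model.fit(X, labels)                                              # randomized trees
print(model.predict(X))
```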

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 356
2971 Effectiveness of Adrenal Venous Sampling in the Management of Primary Aldosteronism: Single Centered Cohort Study at a Tertiary Care Hospital in Sri Lanka

Authors: Balasooriya B. M. C. M., Sujeeva N., Thowfeek Z., Siddiqa Omo, Liyanagunawardana J. E., Jayawardana Saiu, Manathunga S. S., Katulanda G. W.

Abstract:

Introduction and objectives: Adrenal venous sampling (AVS) is the gold standard for discriminating unilateral primary aldosteronism (UPA) from bilateral disease (BPA). AVS is technically demanding and is performed in only a limited number of centers worldwide. To the best of our knowledge, except for one study conducted in India, no other research in this area has been conducted in South Asia. This study aimed to evaluate the effectiveness of AVS in the management of primary aldosteronism. Methods: A total of 32 patients who underwent AVS at the National Hospital of Sri Lanka from April 2021 to April 2023 were enrolled. Demographic, clinical, and laboratory data were obtained retrospectively. A procedure was considered successful when adequate cannulation of both adrenal veins was demonstrated. The cortisol gradient between the adrenal vein (AV) and the peripheral vein was used to establish the success of venous cannulation, and lateralization was determined by the aldosterone gradient between the two sides. Continuous and categorical variables were summarized with means, SDs, and proportions, respectively. The mean and standard deviation of the contralateral suppression index (CSI) were estimated with an intercept-only Bayesian inference model. Results: Of the 32 patients, the average age was 52.47 ± 26.14 years, and 19 (59.4%) were males. Both AVs were successfully cannulated in 12 patients (37.5%). Among them, lateralization was demonstrated in 11 (91.7%), and one was diagnosed with bilateral disease. There were no total failures. Right AV cannulation was unsuccessful in 18 (56.25%), of which lateralization was demonstrated in 9 (50%), with the others inconclusive. Left AV cannulation was unsuccessful in only 2 (6.25%); one lateralized, and the other remained inconclusive. The estimated mean of the CSI was 0.33 (89% credible interval 0.11-0.86). Seven patients underwent unilateral adrenalectomy and demonstrated significant improvement in blood pressure during follow-up. Two patients await surgery; the others were treated medically. Conclusions: Despite failures due to procedural difficulties, AVS remained useful in the management of patients with PA. Moreover, the procedure requires experienced hands and advanced equipment to achieve optimal outcomes in PA.
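
For orientation, the ratios implied by the abstract can be written out under their usual textbook definitions; these are standard AVS indices stated here as assumptions, not the paper's exact protocol or cutoffs.

```python
# Sketch: commonly used AVS ratios (standard definitions, assumed here).
def selectivity_index(cortisol_av, cortisol_peripheral):
    # Cortisol gradient across the AV and a peripheral vein; high values
    # indicate successful cannulation.
    return cortisol_av / cortisol_peripheral

def lateralization_index(aldo_dom, cort_dom, aldo_nondom, cort_nondom):
    # Aldosterone/cortisol ratio of the dominant side over the non-dominant side.
    return (aldo_dom / cort_dom) / (aldo_nondom / cort_nondom)

def contralateral_suppression_index(aldo_nondom, cort_nondom,
                                    aldo_peripheral, cort_peripheral):
    # Values < 1 (the study's estimated mean was 0.33) support suppression
    # of the contralateral gland.
    return (aldo_nondom / cort_nondom) / (aldo_peripheral / cort_peripheral)
```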

Keywords: adrenal venous sampling, lateralization, contralateral suppression index, primary aldosteronism

Procedia PDF Downloads 62
2970 Improvement Plan for the Integrity of Intensive Care Unit Patients Withdrawn from Life-Sustaining Medical Care

Authors: Shang-Sin Shiu, Shu-I Chin, Hsiu-Ju Chen, Ru-Yu Lien

Abstract:

The Hospice and Palliative Care Act has undergone three revisions, making it less challenging for terminal patients to have life support systems withdrawn. However, the adequacy of care before withdrawal is a crucial factor in end-of-life medical treatment. The authors observed that intensive care unit (ICU) nursing staff often rely on simple flowcharts or word of mouth, leading to inadequate preparation and failure to meet patient needs before withdrawal, which results in confusion or hesitation among those executing the process. There is therefore a motivation to improve patient care processes around withdrawal, establish standardized procedures, ensure the accuracy of the withdrawal procedure, enhance nursing staff's self-efficacy in end-of-life care, and improve the overall quality of care. The investigation identified key issues: a lack of applicable guidelines for ICU care during withdrawal of life-sustaining treatment, insufficient education and training on withdrawal and end-of-life care, scattered locations of withdrawal-related tools, and inadequate self-efficacy in care for withdrawal from life-sustaining treatment. Proposed solutions include revising withdrawal care processes and guidelines, consolidating tools and their locations, conducting educational courses, and forming support groups. After the project implementation, the accuracy of staff knowledge of the withdrawal procedure improved from 78% to 96.5%, self-efficacy in end-of-life care after withdrawal increased from 54.7% to 93.1%, and the correctness of care behavior progressed from 27.7% to 97.8%. It is recommended to regularly conduct courses on life support withdrawal care and grief consolation to enhance the quality of end-of-life care.

Keywords: intensive care unit (ICU) patients, nursing staff, withdrawal of life support systems, self-efficacy

Procedia PDF Downloads 50
2969 Technology of Gyro Orientation Measurement Unit (Gyro OMU) for Underground Utility Mapping Practice

Authors: Mohd Ruzlin Mohd Mokhtar

Abstract:

At present, most operators working on projects for utilities such as power, water, oil, gas, telecommunications, and sewerage use technologies such as total stations, the Global Positioning System (GPS), electromagnetic locators (EML), and ground penetrating radar (GPR) to perform underground utility mapping. With the increasing popularity of the Horizontal Directional Drilling (HDD) method among local authorities and asset owners, many newly installed underground utilities are laid using HDD. The HDD method is seen as simple and causes little disturbance to the public and traffic; thus, it has become the preferred installation method in most areas, especially urban ones. However, HDD utilities are installed much deeper than existing ones (some reports state that HDD installations average 5 meters in depth), and this affects the accuracy of existing underground utility mapping technologies. In most Malaysian soil conditions, those technologies are limited to a maximum depth of 3 meters, so utilities installed much deeper than 3 meters cannot be detected with existing detection tools. The accuracy and reliability of existing underground utility mapping technologies and work procedures are therefore in doubt, and a mitigation action plan is required. When installing a new utility using the HDD method, more accurate underground utility mapping can be achieved by using a Gyro OMU than with existing practice based on, e.g., EML and GPR. The Gyro OMU accurately records the location of the HDD bore, so the resulting map can be used as a reference to avoid the cost of breakdowns during future HDD works that would otherwise be caused by inaccurate underground utility mapping.

Keywords: Gyro Orientation Measurement Unit (Gyro OMU), Horizontal Directional Drilling (HDD), Ground Penetrating Radar (GPR), Electromagnetic Locator (EML)

Procedia PDF Downloads 139
2968 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse

Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei

Abstract:

In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is a key point in ensuring optimal laser-target interaction. One of the main difficulties is accurately controlling the foot part of a high-contrast pulse. Based on an analysis of pulse perturbation during amplification and frequency conversion in high-power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high-contrast pulses. In this approach, the foot and peak parts of the high-contrast pulse are controlled independently: they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high-contrast pulse, the beam area ratio of the two parts is optimized, which increases the beam fluence and intensity of the foot part and greatly simplifies pulse control. Meanwhile, the near-field distribution of the two parts is carefully designed to keep their F-numbers the same, another important parameter for laser-target interaction. Integrated calculations show that for a pulse with a contrast of up to 500, the deviation of the foot part can be reduced from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, making it almost the same as that of the peak part. These results are expected to bring a breakthrough in the power balance of high-power laser facilities.

Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance

Procedia PDF Downloads 146
2967 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning

Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah

Abstract:

Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
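
A minimal sketch of the representation-learning-plus-clustering idea, assuming PyTorch and scikit-learn: a small convolutional autoencoder produces lower-dimensional codes that K-means groups into view-related clusters. The architecture sizes, patch dimensions, and cluster count are illustrative, not the paper's.

```python
# Sketch: convolutional autoencoder codes clustered with K-means.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                  # 1x64x64 -> 32x16x16
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                  # 32x16x16 -> 1x64x64
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvAutoencoder()
frames = torch.rand(128, 1, 64, 64)                   # placeholder echo patches
recon, codes = model(frames)                          # train with MSE(recon, frames)
flat = codes.flatten(1).detach().numpy()              # lower-dimensional representations
views = KMeans(n_clusters=8, n_init=10).fit_predict(flat)  # view-related groups
```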

Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning

Procedia PDF Downloads 31
2966 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As real-world object detection tasks become more complex and recognition accuracy improves, detection network models have also grown very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and their inference latency is poor. In this paper, knowledge distillation is used to compress a huge, complex deep neural network model and comprehensively transfer the knowledge it contains to a lightweight network model. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for object detection, the soft-target outputs of the teacher network, the relationships between the teacher network's layers, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to compensate for the large gap between them. Finally, this paper adds an exploration module to the traditional teacher-student knowledge distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
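
Of the three kinds of transferred knowledge, the soft-target component is the classic one and is easy to sketch; the code below shows only that term (temperature-scaled KL divergence plus hard-label cross-entropy), assuming PyTorch. M-KD's inter-layer relation and attention-map losses are not reproduced here.

```python
# Sketch: the soft-target distillation loss (not the full M-KD objective).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: match the teacher's temperature-softened output distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: standard cross-entropy against the ground truth.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10)   # placeholder batch, 10 classes
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```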

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 276
2965 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD, and these models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models demonstrate the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. They are simple to use and flexible because they work even with incomplete data, and they can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
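
A hedged sketch of the two-stage idea, univariate Cox screening followed by a tree-ensemble classifier, assuming the lifelines and scikit-learn packages; the column names and the p < 0.05 cutoff are illustrative assumptions.

```python
# Sketch: univariate Cox screening, then a Random Forest on the kept variables.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("cohort.csv")                 # hypothetical cohort table
candidates = ["age", "sex", "bmi", "waist_circumference"]

# Keep variables with high prognostic value in univariate Cox models.
selected = []
for var in candidates:
    cph = CoxPHFitter()
    cph.fit(df[[var, "followup_years", "ckd_event"]],
            duration_col="followup_years", event_col="ckd_event")
    if cph.summary.loc[var, "p"] < 0.05:       # assumed screening threshold
        selected.append(var)

# Non-laboratory model: works even when creatinine testing is unavailable.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(df[selected], df["ckd_event"])
```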

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 105
2964 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification stage, with optional slice-removal techniques following segmentation. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which was then used to build the full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, achieving a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating its efficiency in predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than other alternatives, surpassing the baseline. The classification component of the pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was used for classification. The proposed pipeline outperformed many other alternatives on this dataset.
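
The UNet modification described, adding batch normalization, typically lands in the network's repeated "double convolution" block; a minimal PyTorch sketch under that assumption (channel sizes illustrative):

```python
# Sketch: a UNet double-convolution block with batch normalization added.
import torch.nn as nn

class DoubleConvBN(nn.Module):
    """Conv -> BatchNorm -> ReLU, twice; BN yields a lighter, better-localizing UNet."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# In a UNet, this block repeats at every encoder/decoder resolution,
# e.g. DoubleConvBN(1, 64) applied to the input CT slice.
```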

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 129
2963 Discriminating Between Energy Drinks and Sports Drinks Based on Their Chemical Properties Using Chemometric Methods

Authors: Robert Cazar, Nathaly Maza

Abstract:

Energy drinks and sports drinks are quite popular among young adults and teenagers worldwide. Some concerns regarding their health effects, particularly those of energy drinks, have been raised based on scientific findings. Differentiating between these two types of drinks by means of their chemical properties is therefore an instructive task, and chemometrics provides the most appropriate strategy for doing so. In this study, a discrimination analysis of energy and sports drinks was carried out using chemometric methods. A set of eleven samples of commercially available drinks (seven energy drinks and four sports drinks) was collected. Each sample was characterized by eight chemical variables (carbohydrates, energy, sugar, sodium, pH, degrees Brix, density, and citric acid). The data set was standardized and examined by exploratory chemometric techniques such as clustering and principal component analysis. As a preliminary step, variable selection was carried out by inspecting the variable correlation matrix. Some variables were found to be redundant and could be safely removed, leaving five variables that are sufficient for this analysis: sugar, sodium, pH, density, and citric acid. Then, hierarchical clustering employing the average-linkage criterion and the Euclidean distance metric was performed. It perfectly separates the two types of drinks: the resulting dendrogram, cut at the 25% similarity level, sorts the samples into two well-defined groups, one containing the energy drinks and the other the sports drinks. Further assurance of the complete discrimination is provided by principal component analysis. The projection of the data set onto the first two principal components, which retain 71% of the data information, permits visualization of the distribution of the samples in the two groups identified in the clustering stage. Since the first principal component is the discriminating one, inspection of its loadings characterizes the two groups: the energy drinks group has medium to high values of density, citric acid, and sugar, while the sports drinks group exhibits low values of those variables. In conclusion, the application of chemometric methods to a data set featuring chemical properties of a number of energy and sports drinks provides an accurate, dependable way to discriminate between these two types of beverages.
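
The whole workflow maps directly onto standard SciPy and scikit-learn calls; the sketch below uses a random placeholder matrix in place of the 11 samples by 5 retained variables (sugar, sodium, pH, density, citric acid).

```python
# Sketch: standardization, average-linkage clustering, and PCA projection.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(11, 5)                        # placeholder measurements
Xs = StandardScaler().fit_transform(X)           # standardize each variable

Z = linkage(Xs, method="average", metric="euclidean")
groups = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into 2 groups
print("cluster assignment:", groups)

pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)                   # project onto the first two PCs
print("retained variance:", pca.explained_variance_ratio_.sum())
print("PC1 loadings:", pca.components_[0])       # the discriminating component
```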

Keywords: chemometrics, clustering, energy drinks, principal component analysis, sports drinks

Procedia PDF Downloads 105
2962 Improving Diagnostic Accuracy of Ankle Syndesmosis Injuries: A Comparison of Traditional Radiographic Measurements and Computed Tomography-Based Measurements

Authors: Yasar Samet Gokceoglu, Ayse Nur Incesu, Furkan Okatar, Berk Nimetoglu, Serkan Bayram, Turgut Akgul

Abstract:

Ankle syndesmosis injuries pose a significant challenge in orthopedic practice due to their potential for prolonged recovery and chronic ankle dysfunction. Accurate diagnosis and management of these injuries are essential for achieving optimal patient outcomes. The use of radiological methods, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), plays a vital role in the accurate diagnosis of syndesmosis injuries in the context of ankle fractures. Treatment options for ankle syndesmosis injuries vary, with surgical interventions such as screw fixation and suture-button implantation being commonly employed. The choice of treatment is influenced by the severity of the injury and the presence of associated fractures. Additionally, the mechanism of injury, such as pure syndesmosis injury or specific fracture types, can impact the stability and management of syndesmosis injuries. Ankle fractures with syndesmosis injury present a complex clinical scenario, requiring accurate diagnosis, appropriate reduction, and tailored management strategies. The interplay between the mechanism of injury, associated fractures, and treatment modalities significantly influences the outcomes of these challenging injuries. The long-term outcomes and patient satisfaction following ankle fractures with syndesmosis injury are crucial considerations in the field of orthopedics. Patient-reported outcome measures, such as the Foot and Ankle Outcome Score (FAOS), provide essential information about functional recovery and quality of life after these injuries. When diagnosing syndesmosis injuries, standard measurements, such as the medial clear space, tibiofibular overlap, tibiofibular clear space, anterior tibiofibular ratio (ATFR), and the anterior-posterior tibiofibular ratio (APTF), are assessed through radiographs and computed tomography (CT) scans. These parameters are critical in evaluating the presence and severity of syndesmosis injuries, enabling clinicians to choose the most appropriate treatment approach. Despite advancements in diagnostic imaging, challenges remain in accurately diagnosing and treating ankle syndesmosis injuries. Traditional diagnostic parameters, while beneficial, may not capture the full extent of the injury or provide sufficient information to guide therapeutic decisions. This gap highlights the need for exploring additional diagnostic parameters that could enhance the accuracy of syndesmosis injury diagnoses and inform treatment strategies more effectively. The primary goal of this research is to evaluate the usefulness of traditional radiographic measurements in comparison to new CT-based measurements for diagnosing ankle syndesmosis injuries. Specifically, this study aims to assess the accuracy of conventional parameters, including medial clear space, tibiofibular overlap, tibiofibular clear space, ATFR, and APTF, in contrast with the recently proposed CT-based measurements such as the delta and gamma angles. Moreover, the study intends to explore the relationship between these diagnostic parameters and functional outcomes, as measured by the Foot and Ankle Outcome Score (FAOS). Establishing a correlation between specific diagnostic measurements and FAOS scores will enable us to identify the most reliable predictors of functional recovery following syndesmosis injuries. 
This comparative analysis will provide valuable insights into the accuracy and dependability of CT-based measurements in diagnosing ankle syndesmosis injuries and their potential impact on predicting patient outcomes. The results of this study could greatly influence clinical practices by refining diagnostic criteria and optimizing treatment planning for patients with ankle syndesmosis injuries.

Keywords: ankle syndesmosis injury, diagnostic accuracy, computed tomography, radiographic measurements, tibiofibular syndesmosis distance

Procedia PDF Downloads 72
2961 Automatic Furrow Detection for Precision Agriculture

Authors: Manpreet Kaur, Cheol-Hong Min

Abstract:

Increasing advances in robotics equipped with machine vision sensors applied to precision agriculture offer a promising solution to various problems on agricultural farms. An important issue for such machine vision systems is crop row and weed detection. This paper proposes an automatic furrow detection system based on real-time processing for identifying crop rows in maize fields in the presence of weeds. The vision system is designed to be installed on farming vehicles and is therefore subject to tilting, vibration, and other undesired movements. The images are captured in perspective and are affected by these undesired effects. The goal is to identify crop rows for vehicle navigation, which includes weed removal, where weeds are identified as plants outside the crop rows. Image quality is affected by varying lighting conditions and by gaps along the crop rows due to failed germination and planting errors. The proposed image processing method consists of four processes. First, images are segmented based on an HSV (hue, saturation, value) decision tree; the algorithm uses the HSV color space to discriminate crops, weeds, and soil, with the region of interest defined by filtering each of the HSV channels between maximum and minimum threshold values. Second, noise in the images is eliminated by means of a hybrid median filter. Third, mathematical morphological operations are applied, namely erosion to remove small objects followed by dilation to gradually enlarge the boundaries of foreground regions, which enhances image contrast. Fourth, to accurately detect the position of crop rows, a binary mask defines the region of interest, and edge detection and the Hough transform are applied to detect lines represented in polar coordinates, with furrow directions appearing as accumulations along the angle axis in Hough space. The experimental results show that the method is effective.
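
The four stages map naturally onto OpenCV primitives; the sketch below is a hedged approximation in which the threshold values and kernel sizes are placeholders, and a plain median filter stands in for the hybrid median filter.

```python
# Sketch: HSV segmentation -> median filter -> morphology -> Hough lines.
import cv2
import numpy as np

img = cv2.imread("maize_field.jpg")                    # hypothetical frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 1. HSV segmentation: keep pixels between per-channel thresholds (crop green).
mask = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))

# 2. Noise removal (a plain median filter stands in for the hybrid version).
mask = cv2.medianBlur(mask, 5)

# 3. Morphology: erosion removes small objects, dilation restores boundaries.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel, iterations=2), kernel, iterations=2)

# 4. Edges + Hough transform: lines in polar form; the row direction shows up
# as accumulations along the angle axis.
edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
if lines is not None:
    angles = lines[:, 0, 1]                            # theta values (radians)
    print("dominant furrow angle:", np.degrees(np.median(angles)))
```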

Keywords: furrow detection, morphological, HSV, Hough transform

Procedia PDF Downloads 229
2960 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes in the Presence of Different External Factors

Authors: Sudhir Kumar Singh, Debashish Chakravarty

Abstract:

Slope stability analysis is an important aspect of geotechnical engineering. It is also important from safety and economic points of view, as any slope failure leads to loss of life and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely the degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach is utilized for making accurate and reliable predictions regarding slope stability based on the value of the factor of safety. Numerous cases were studied by analyzing slope stability with the popular finite element method, and the data thus obtained were used as training data for the supervised machine learning models. The input data were trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost, and distinct test data not present in the training data were used for measuring the performance and accuracy of the models. Although all models performed well on the test dataset, Random Forest stands out due to its accuracy of greater than 95%, providing a valuable tool that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.
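
A minimal sketch of the model-comparison protocol with scikit-learn; the FEM-derived dataset is faked with a synthetic rule here, and XGBoost is omitted to keep dependencies minimal.

```python
# Sketch: comparing classifiers on held-out FEM-style samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder dataset: saturation, rainfall intensity, seismic coefficient,
# joint-set dip; label 1 = stable (factor of safety >= 1), 0 = unstable.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X @ np.array([-0.8, -0.6, -1.0, 0.5]) + 1.2 > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {"random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
          "decision_tree": DecisionTreeClassifier(random_state=0),
          "svm": SVC()}
for name, m in models.items():
    print(name, m.fit(X_tr, y_tr).score(X_te, y_te))  # held-out accuracy
```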

Keywords: finite element method, geotechnical engineering, machine learning, slope stability

Procedia PDF Downloads 99
2959 Classifying Students for E-Learning in Information Technology Course Using ANN

Authors: Sirilak Areerachakul, Nat Ployong, Supayothin Na Songkla

Abstract:

The objective of this research is to select the most accurate model, using a neural network technique, for screening prospective students who enroll in the e-learning Information Technology course at Suan Sunandha Rajabhat University. The system is designed to help students select appropriate courses by themselves. The results showed that the most accurate model used 100-fold cross-validation, achieving 73.58% accuracy.
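
In form, the evaluation reduces to k-fold cross-validation of a small neural network; a sketch with synthetic data (the 100-fold setting mirrors the abstract, everything else is a placeholder):

```python
# Sketch: scoring an ANN with 100-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
scores = cross_val_score(ann, X, y, cv=100)   # 100-fold cross-validation
print(f"mean accuracy: {scores.mean():.4f}")
```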

Keywords: artificial neural network, classification, students, e-learning

Procedia PDF Downloads 424
2958 Device-Integrated Micro-Thermocouples for Reliable Temperature Measurement of GaN HEMTs

Authors: Hassan Irshad Bhatti, Saravanan Yuvaraja, Xiaohang Li

Abstract:

GaN-based devices, such as high electron mobility transistors (HEMTs), offer superior characteristics for high-power, high-frequency, and high-temperature applications [1]. However, this exceptional electrical performance is compromised by undesirable self-heating effects in high-power applications [2, 3]. Some of the issues caused by self-heating are current collapse, thermal runaway, and performance degradation [4, 5]. Therefore, accurate and reliable methods for measuring the temperature of individual devices on a chip are needed to monitor and control the thermal behavior of GaN-based devices [6]. Temperature measurement at the micro/nanoscale is a challenging task that requires specialized techniques such as infrared microscopy, Raman thermometry, and thermoreflectance. Recently, micro-thermocouples (MTCs) have attracted considerable attention due to their simplicity, low cost, high sensitivity, and compatibility with standard fabrication processes [7, 8]. A micro-thermocouple is a junction of two different metal thin films that generates a Seebeck voltage related to the temperature difference between a hot and a cold zone. Integrating an MTC in a device allows the local temperature to be measured with high sensitivity and accuracy [9]. This work involves the fabrication and integration of micro-thermocouples to measure the channel temperature of a GaN HEMT. Our fabricated MTC (a platinum-chromium junction) has shown a sensitivity of 16.98 µV/K and can measure the device channel temperature with high precision and accuracy. The temperature information obtained using this sensor can help improve GaN-based devices and provide thermal engineers with useful insights for optimizing their designs.
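
The reported sensitivity makes the voltage-to-temperature conversion a one-line calculation, T_hot = T_ref + V/S; a small sketch with placeholder measured values:

```python
# Sketch: Seebeck voltage to channel temperature via the reported sensitivity.
SENSITIVITY_V_PER_K = 16.98e-6      # Pt-Cr junction sensitivity from this work

def channel_temperature(v_measured, t_reference):
    """T_hot = T_ref + V / S for a thermocouple of sensitivity S."""
    return t_reference + v_measured / SENSITIVITY_V_PER_K

# Placeholder reading: 1.2 mV above a 25 C reference gives roughly 95.7 C.
print(channel_temperature(v_measured=1.2e-3, t_reference=25.0))
```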

Keywords: electrical engineering, thermal engineering, power devices, semiconductors

Procedia PDF Downloads 17
2957 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System

Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin

Abstract:

The most robust and economical method for the laboratory diagnosis of TB is to identify acid-fast bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor intensity. Though digital pathology has become popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. The objective of this study was thus to evaluate such an automated system for the identification of AFB. A total of 5,930 smears were enrolled for this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection, and 272 AFB smears were used for transfer learning to increase accuracy. Referee medical technologists served as the gold standard in cases of result discrepancy. On a total of 1,726 AFB smears, the automated system's accuracy, sensitivity, and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity of human technologists was only 33.8% (38/142), whereas the automated system achieved 74.6% (106/142), which is significantly higher; this is the first such automated microscope system for TB smear testing in a controlled trial. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is capable of remote access via the internet and can be deployed in areas with limited medical resources.
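
The headline figures can be recomputed directly from the raw counts given above:

```python
# Sketch: the reported performance figures from their raw counts.
correct, total = 1650, 1726
tp, positives = 57, 65
tn, negatives = 1593, 1661

print(f"accuracy    = {correct / total:.1%}")    # 95.6%
print(f"sensitivity = {tp / positives:.1%}")     # 87.7%
print(f"specificity = {tn / negatives:.1%}")     # 95.9%
```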

Keywords: TB smears, automated microscope, artificial intelligence, medical imaging

Procedia PDF Downloads 227
2956 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in practice, the constant-velocity assumption is often violated, which may lead to a large deviation between the predicted initial pose and the true value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, with the rotation matrix represented by a rotation vector. We assume that, over a short period of time, the changes in the rotational angular velocity and in the translation vector remain the same, and we estimate the initial pose of the current frame on this basis. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied our proposed approach to the ORB-SLAM3 system and evaluated two sequences from the TUM dataset. The results show that our proposed method gives a more accurate initial pose estimate, improving the accuracy of the ORB-SLAM3 system by 6.61% and 6.46% on the two test sequences, respectively.
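
Under the stated assumption (constant change of the rotation-vector and translation deltas over a short window), the pose prediction can be sketched with SciPy's rotation utilities; all pose values below are placeholders.

```python
# Sketch: constant-acceleration extrapolation of the next camera pose.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Relative motion between the last three frames, decoupled into rotation
# vectors (rad) and translation vectors (m).
rot_prev, rot_last = np.array([0.00, 0.01, 0.00]), np.array([0.00, 0.02, 0.00])
t_prev, t_last = np.array([0.10, 0.0, 0.0]), np.array([0.12, 0.0, 0.0])

# Constant acceleration: the change between successive deltas stays the same.
rot_next = rot_last + (rot_last - rot_prev)
t_next = t_last + (t_last - t_prev)

# Compose the predicted relative transform onto the current pose.
pose_R = R.from_rotvec([0.0, 0.3, 0.0])       # current orientation (placeholder)
pose_t = np.array([1.0, 0.0, 0.0])            # current position (placeholder)
pred_R = pose_R * R.from_rotvec(rot_next)
pred_t = pose_t + pose_R.apply(t_next)
print(pred_R.as_rotvec(), pred_t)             # initial guess for feature matching
```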

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 91
2955 Explainable Deep Learning for Neuroimaging: A Generalizable Approach for Differential Diagnosis of Brain Diseases

Authors: Nighat Bibi

Abstract:

The differential diagnosis of brain diseases by magnetic resonance imaging (MRI) is a crucial step in the diagnostic process, and deep learning (DL) has the potential to significantly improve the accuracy and efficiency of these diagnoses. This study focuses on creating an ensemble learning (EL) model that utilizes the ResNet50, DenseNet121, and EfficientNetB1 architectures to concurrently and accurately classify various brain conditions from MRI images. The proposed ensemble learning model identifies a range of brain disorders that encompass different types of brain tumors, as well as multiple sclerosis. The proposed model was trained on two open-source datasets, consisting of MRI images of glioma, meningioma, pituitary tumors, and multiple sclerosis. Central to this research is the integration of gradient-weighted class activation mapping (Grad-CAM) for model interpretability, aligning with the growing emphasis on explainable AI (XAI) in medical imaging. The application of Grad-CAM improves the transparency of the model's decision-making process, which is vital for clinical acceptance and trust in AI-assisted diagnostic tools. The EL model achieved an impressive 99.84% accuracy in classifying these various brain conditions, demonstrating its potential as a versatile and effective tool for differential diagnosis in neuroimaging. The model’s ability to distinguish between multiple brain diseases underscores its significant potential in the field of medical imaging. Additionally, Grad-CAM visualizations provide deeper insights into the neural network’s reasoning, contributing to a more transparent and interpretable AI-driven diagnostic process in neuroimaging.
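
A hedged sketch of the Grad-CAM computation on one backbone, assuming PyTorch and torchvision: gradients of the chosen class score are pooled into channel weights for the last convolutional stage's feature maps. The backbone, layer choice, and input are placeholders, not the paper's trained ensemble.

```python
# Sketch: Grad-CAM via forward/backward hooks on a ResNet50 backbone.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)   # placeholder backbone (untrained here)
model.eval()

feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)         # placeholder MRI slice (3-channel)
logits = model(x)
cls = int(logits.argmax(dim=1))         # explain the predicted class
model.zero_grad()
logits[0, cls].backward()

w = grads["v"].mean(dim=(2, 3), keepdim=True)            # pooled gradients
cam = F.relu((w * feats["v"]).sum(dim=1, keepdim=True))  # weighted feature maps
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8) # heatmap in [0, 1]
```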

Keywords: brain tumour, differential diagnosis, ensemble learning, explainability, grad-cam, multiple sclerosis

Procedia PDF Downloads 6
2954 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes

Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo

Abstract:

Doctors' notes reflect their impressions, attitudes, clinical sense, and opinions about patients' conditions and progress, as well as other information that is essential for doctors' daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors' decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes such as survival or mortality can be useful in informing the judgement of clinical sentiment in ICU clinical notes. This paper makes two contributions. First, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by ClinicalBERT or Bio_Discharge_Summary_BERT. The model based on ClinicalBERT achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the prediction accuracy slightly (Acc 96.67%, AUC 0.98, precision 0.92).
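
A minimal fine-tuning sketch with the Hugging Face transformers Trainer; the checkpoint name is an assumed public ClinicalBERT-family model, and the notes and labels are invented placeholders.

```python
# Sketch: fine-tuning a clinical BERT checkpoint for binary clinical sentiment.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "emilyalsentzer/Bio_ClinicalBERT"        # assumed public checkpoint
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

notes = ["Patient improving, weaned off vasopressors.",   # placeholder ICU notes
         "Worsening hypoxia despite maximal support."]
labels = [1, 0]                                            # survived vs. deceased

enc = tok(notes, truncation=True, padding=True, return_tensors="pt")

class NotesDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="covid_icu_bert",
                                         num_train_epochs=1),
                  train_dataset=NotesDataset())
trainer.train()
```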

Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation

Procedia PDF Downloads 204
2953 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil

Authors: M. Seguini, D. Nedjar

Abstract:

An accurate nonlinear analysis of a deep beam resting on elastic, perfectly plastic soil is carried out in this study. A nonlinear finite element model for large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, and the Newton-Raphson incremental iteration method is implemented in Matlab code to solve the nonlinear equation of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are performed to verify the accuracy and efficiency of the proposed model, and the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and the correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on linear and on nonlinear soil models is presented for different beam lengths and external loads. Numerical results were obtained for the combination of geometric nonlinearity of the beam and material nonlinearity of the random soil. This comparison highlights the need to include the material nonlinearity and the spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
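
The Newton-Raphson incremental iteration at the core of such a solver can be sketched generically; the one-degree-of-freedom "beam on a hardening spring" below is a toy stand-in for the assembled soil-beam system, not the paper's formulation.

```python
# Sketch: Newton-Raphson iteration on a generic residual R(u) = 0.
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    u = u0.copy()
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        u -= np.linalg.solve(jacobian(u), r)   # tangent solve for the increment
    return u

# Toy 1-DOF "beam on nonlinear soil": cubic hardening spring under load F.
F = 2.0
residual = lambda u: np.array([u[0] + 0.5 * u[0] ** 3 - F])
jacobian = lambda u: np.array([[1.0 + 1.5 * u[0] ** 2]])
print(newton_raphson(residual, jacobian, np.array([0.0])))
```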

Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability

Procedia PDF Downloads 412
2952 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques

Authors: Gizem Eser Erdek

Abstract:

This study investigates the prediction of the remaining life of industrial cutting tools used in production processes with deep learning methods. As the life of a cutting tool decreases, it damages the raw material it is processing, so this study aims to predict the remaining life of the cutting tool based on the damage the tool causes to the raw material. To this end, hole photos were collected from a hole-drilling machine for 8 months and labeled into 5 classes according to hole quality, thereby transforming the problem into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set, and a hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models are compared, the model using convolutional neural networks gives successful results with a 74% accuracy rate. In preliminary studies, where the data set was reduced to only the best and worst classes and a binary classification model was applied, the study achieved ~93% accuracy. The results of this study show that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. The experiments prove that deep learning methods can be used as an alternative for cutting tool life estimation.
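
The hybrid variant mentioned above, a CNN used as a feature extractor feeding a support vector machine, can be sketched as follows, assuming PyTorch and scikit-learn; the network, image sizes, and data are placeholders.

```python
# Sketch: CNN feature extractor + SVM classifier for hole-quality classes.
import torch
import torch.nn as nn
from sklearn.svm import SVC

cnn = nn.Sequential(                      # small convolutional feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),                         # -> 32-dimensional feature vector
)

images = torch.rand(40, 3, 64, 64)        # placeholder hole photos
labels = torch.randint(0, 5, (40,))       # 5 hole-quality classes

with torch.no_grad():
    feats = cnn(images).numpy()           # in practice the extractor is trained first
svm = SVC().fit(feats, labels.numpy())    # SVM makes the final class decision
print(svm.score(feats, labels.numpy()))
```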

Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet

Procedia PDF Downloads 75
2951 Advantages of Computer Navigation in Knee Arthroplasty

Authors: Mohammad Ali Al Qatawneh, Bespalchuk Pavel Ivanovich

Abstract:

Computer navigation has been introduced in total knee arthroplasty to improve the accuracy of the procedure. It improves the accuracy of bone resection in the coronal and sagittal planes, normalizes the rotational alignment of the femoral component, and allows soft tissue deformity in the coronal plane to be fully assessed and balanced. This work is devoted to the advantages of using computer navigation technology in total knee arthroplasty in 62 patients (11 men and 51 women) suffering from gonarthrosis, aged 51 to 83 years, operated on using a computer navigation system and followed for up to 3 years after surgery. During examination, the deformity variant was determined, radiometric parameters of the knee joints were measured, and outcomes were scored with the Knee Society Score (KSS), the Functional Knee Society Score (FKSS), and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). Functional stress tests were also performed to assess the stability of the knee joint in the frontal plane and the functional range of motion. After surgery, improvement was observed on all scales: WOMAC values decreased 5.90-fold, with the median value falling to 11 points (p < 0.001); KSS increased 3.91-fold, reaching 86 points (p < 0.001); and FKSS increased 2.08-fold, reaching 94 points (p < 0.001). After TKA, axis deviation of the lower limbs of more than 3 degrees was observed in 4 patients (6.5%), and frontal instability of the knee joint in only 2 cases (3.2%). The incidence of sagittal instability of the knee joint after the operation was 9.6%. The range of motion increased 1.25-fold, averaging 125 degrees (p < 0.001). Computer navigation increases the accuracy of the spatial orientation of the endoprosthesis components in all planes, keeps the variability of the lower-limb axis within ±3°, and helps achieve the best results of surgical intervention; in this series, excellent and good outcomes were achieved in 100% of cases according to the WOMAC scale. With diaphyseal deformities of the femur and/or tibia, as well as with obstruction of their medullary canal, the use of computer navigation is the method of choice. The use of computer navigation prevents flexion contracture and hyperextension of the knee joint during the distal femoral cut. The navigation system achieves high-precision implantation of the endoprosthesis and an adequate balance of the ligaments, which contributes to joint stability, reduces pain, and leads to a good functional result of treatment.

Keywords: knee joint, arthroplasty, computer navigation, advantages

Procedia PDF Downloads 89
2950 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor

Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh

Abstract:

Brain tumors do not have a high incidence rate, but their high mortality rate and poor prognosis still make them a major concern. On clinical examination, the grading of brain tumors depends on pathological features; however, histopathological analysis has weak points that can cause misgrading. For example, interpretations can vary in the absence of a well-established definition, and the heterogeneity of malignant tumors makes it challenging to extract representative tissue by surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features, were extracted from the tumors for classification. As a result, two of the five morphological features and three of the four image moment features achieved p values below 0.001, and the remaining moment feature had a p value below 0.05. Using the combination of all features, the CAD system achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM, with a sensitivity of 70.59% and a specificity of 89.04%. The proposed system can serve as a second reader for radiologists during clinical examinations.
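
For concreteness, the morphological features named above can be computed from a binary tumor mask, as in the sketch below using scikit-image. The compactness convention and the normalized-radial-length computation are common definitions, assumed here rather than taken from the paper.

```python
# Sketch of the morphological features named in the abstract, computed
# from a binary tumor mask. The compactness definition (1.0 for a
# circle) is a common convention, assumed rather than quoted.
import numpy as np
from skimage import measure

def morphological_features(mask):
    """mask: 2-D boolean array, True inside the tumor ROI."""
    props = measure.regionprops(mask.astype(int))[0]
    area, perimeter = props.area, props.perimeter
    compactness = perimeter ** 2 / (4 * np.pi * area)
    # Normalized radial length: boundary-to-centroid distances
    # scaled by their maximum.
    contour = measure.find_contours(mask.astype(float), 0.5)[0]
    radii = np.linalg.norm(contour - props.centroid, axis=1)
    nrl = radii / radii.max()
    return {
        "area": area,
        "perimeter": perimeter,
        "compactness": compactness,
        "nrl_mean": nrl.mean(),
        "nrl_std": nrl.std(),
    }
```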

Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging

Procedia PDF Downloads 258
2949 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study

Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre

Abstract:

Precise landslide monitoring with differential global navigation satellite systems (GNSS) is well established, but technical or economic reasons limit its adoption by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers, plus one reference station for differential GNSS. A dedicated radio link connects all the modules to a base station, where an embedded computer automatically computes all the relative positions (L1 phase, open-source RTKLib software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and provides a true overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, deploying up to 7 rovers per site for an 18-month survey. The aim was to assess the robustness and accuracy of the system under different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes); the other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. Performance (percentage of aborted solutions, outliers) degraded with sessions shorter than 30 minutes. The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multipath reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-to-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system's robustness was limited: snow, avalanches, and frost covered some rovers, including the antennas and the vertically oriented solar panels, interrupting the data, and strong wind damaged a reference station. The possibility of changing the session parameters remotely was very useful. In conclusion, the tested rover network provided the expected sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term operation saved time. Performance depends strongly on the surrounding conditions, but short preliminary measurements should make it possible to move a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic adjustment of the duration and number of sessions based on battery level and rover displacement velocity.
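
As one possible form of the post-processing for alerts suggested in the conclusion, the sketch below flags sessions whose solution departs from a rolling median of neighboring sessions. The window size and the 1 cm threshold are assumptions for illustration, not parameters of the system.

```python
# Minimal sketch of an alert-oriented post-processing step for a
# rover's session time series: flag epochs whose position deviates
# from a rolling median by more than a threshold. Window size and
# the 1 cm threshold are illustrative assumptions.
import numpy as np

def flag_outliers(east, north, up, window=12, threshold_m=0.01):
    """east/north/up: 1-D arrays of session solutions in metres."""
    pos = np.column_stack([east, north, up])
    flags = np.zeros(len(pos), dtype=bool)
    for i in range(len(pos)):
        lo, hi = max(0, i - window), min(len(pos), i + window + 1)
        median = np.median(pos[lo:hi], axis=0)
        flags[i] = np.linalg.norm(pos[i] - median) > threshold_m
    return flags
```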

Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter

Procedia PDF Downloads 110
2948 Low-Cost Parking Lot Mapping and Localization for Home Zone Parking Pilot

Authors: Hongbo Zhang, Xinlu Tang, Jiangwei Li, Chi Yan

Abstract:

Home zone parking pilot (HPP) is a fast-growing segment of low-speed autonomous driving applications. It requires the car to cruise around a parking lot automatically and park itself within a range of up to 100 meters inside a recurrent home/office parking lot, which demands a precise parking lot mapping and localization solution. Although Lidar is ideal for SLAM, car OEMs favor a low-cost, fish-eye-camera-based visual SLAM approach. Recent approaches have employed segmentation models to extract semantic features and improve mapping accuracy, but these AI models are memory-hungry and computationally expensive, making them difficult to deploy on embedded ADAS systems. To address this issue, we propose a new method that uses object detection models to extract robust and accurate parking lot features. The proposed method reduces computational cost while maintaining high accuracy. Once combined with the vehicle's wheel-pulse information, the system can construct maps and locate the vehicle in real time. This article discusses in detail (1) fish-eye-based Around View Monitoring (AVM) with transparent chassis images as inputs, (2) an object detection (OD) based feature point extraction algorithm that generates a point cloud, (3) a low-computation parking lot mapping algorithm, and (4) the real-time localization algorithm. Finally, we demonstrate experimental results with an embedded ADAS system installed on a real car in an underground parking lot.
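
The wheel-pulse integration mentioned above can be sketched as simple differential odometry, as below. The pulse-to-distance scale and the track width are hypothetical calibration values, not figures from the article.

```python
# Sketch of wheel-pulse dead reckoning: encoder pulses from the two
# rear wheels are integrated into a 2-D pose. PULSE_TO_M and
# TRACK_WIDTH are hypothetical calibration constants.
import math

PULSE_TO_M = 0.0021   # metres per wheel-encoder pulse (assumed)
TRACK_WIDTH = 1.6     # metres between rear wheels (assumed)

def update_pose(x, y, heading, pulses_left, pulses_right):
    """Advance the vehicle pose from one pair of wheel-pulse counts."""
    d_left = pulses_left * PULSE_TO_M
    d_right = pulses_right * PULSE_TO_M
    d_center = (d_left + d_right) / 2.0
    d_heading = (d_right - d_left) / TRACK_WIDTH
    # Integrate along the arc midpoint heading.
    x += d_center * math.cos(heading + d_heading / 2.0)
    y += d_center * math.sin(heading + d_heading / 2.0)
    return x, y, heading + d_heading
```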

Keywords: ADAS, home zone parking pilot, object detection, visual SLAM

Procedia PDF Downloads 66
2947 Forecasting Residential Water Consumption in Hamilton, New Zealand

Authors: Farnaz Farhangi

Abstract:

Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand’s fastest-growing cities, it is crucial for policy makers to understand future water consumption and to implement rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea how much water they use. Hence, one objective of this research is forecasting water consumption using different methods. The residential water consumption time series exhibits seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to identify other variations (such as the effects of educational campaigns, regulation, etc.). Besides seasonality, a stochastic trend is combined with the seasonal pattern and affects the forecasting results in different ways. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better forecasting results, while other researchers argue that seasonally non-adjusted data should be used. Hence, this study also answers the question: is preprocessing essential? A wide range of forecasting methods exists, each with different pros and cons. In this research, I apply double seasonal ARIMA and an artificial neural network (ANN), considering elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. The hypothesis is that comparing the results of the combined method (hybrid model) with those of the individual methods will reveal differences in accuracy and robustness. To use ARIMA, the data must be stationary; ANN, in turn, has a record of successful applications in forecasting seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the methods: because water demand is dominated by several seasonalities, combining different methods helps identify their sensitivity to weather conditions, calendar effects, and other seasonal patterns. The advantage of this combination is error reduction through averaging of the individual models; it is also useful when the accuracy of each forecasting model is uncertain, and it eases the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how predictions vary across methods. The ANN produces more accurate forecasts than the other methods, and preprocessing is essential for seasonal time series. The hybrid model reduces average forecasting errors and improves performance.
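
A minimal sketch of the hybrid combination follows, assuming a weekly seasonal ARIMA on daily data and an ANN over lagged windows, averaged with equal weights. The model orders, lag length, and weights are illustrative assumptions; the paper's double seasonal ARIMA would additionally model a second (e.g., annual) cycle.

```python
# Sketch of the hybrid combination: forecasts from a seasonal ARIMA
# and an ANN are averaged. Orders, lags, and the equal-weight average
# are illustrative assumptions, not the study's fitted models.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y, horizon=7, lags=14):
    """y: 1-D array of daily consumption; returns averaged forecasts."""
    # Seasonal ARIMA with a weekly cycle (period 7 for daily data).
    arima = SARIMAX(y, order=(1, 1, 1),
                    seasonal_order=(1, 0, 1, 7)).fit(disp=False)
    f_arima = arima.forecast(steps=horizon)

    # ANN trained on lagged windows of the series.
    X = np.array([y[i:i + lags] for i in range(len(y) - lags)])
    t = y[lags:]
    ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, t)
    f_ann, window = [], list(y[-lags:])
    for _ in range(horizon):
        pred = ann.predict([window])[0]   # recursive multi-step forecast
        f_ann.append(pred)
        window = window[1:] + [pred]

    # Equal-weight average reduces the error of either model alone.
    return (np.asarray(f_arima) + np.asarray(f_ann)) / 2.0
```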

Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model

Procedia PDF Downloads 336