Search results for: diagnostic accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4596

3306 Diffraction-Based Immunosensor for Dengue NS1 Virus

Authors: Harriet Jane R. Caleja, Joel I. Ballesteros, Florian R. Del Mundo

Abstract:

Dengue fever is among the world’s major causes of death, especially in tropical areas. In the Philippines, the number of dengue cases during the first half of 2015 amounted to more than 50,000. In 2012, the total number of cases of dengue infection reached 132,046, of which 701 patients died. Dengue nonstructural protein 1 (NS1) is a recently discovered biomarker for the early detection of dengue virus. It is present in the serum of dengue virus-infected patients even during the earliest stages, prior to the formation of dengue virus antibodies. A biosensor for dengue detection using NS1 was developed as a faster and more accurate diagnostic tool. Biotinylated anti-dengue virus NS1 was used as the receptor for dengue virus NS1. Using the Diffractive Optics Technology (dot™) technique, real-time binding of NS1 to the biotinylated anti-NS1 antibody was observed. The dot®-Avidin sensor recognizes the biotinylated anti-NS1, which served as the capture molecule for the analyte, NS1. An increase in the diffractive intensity signal signifies binding of the capture molecule and the analyte. The LOD was found to be 3.87 ng/mL, while the LOQ was 12.9 ng/mL. The developed biosensor was also found to be specific for NS1.
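
The reported LOD and LOQ stand in the 10:3 ratio of the common 3σ/slope and 10σ/slope convention. A minimal sketch of that calculation from a hypothetical calibration curve (all numbers below are illustrative, not from the study):

```python
import numpy as np

# Hypothetical calibration data: NS1 concentration (ng/mL) vs. diffractive
# intensity signal; values are illustrative placeholders.
conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
signal = np.array([0.02, 0.31, 0.60, 1.48, 2.95, 5.90])

# Least-squares calibration line: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)

# Standard deviation of the residuals approximates the blank noise (sigma)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)  # ddof=2: two fitted parameters

lod = 3 * sigma / slope    # limit of detection
loq = 10 * sigma / slope   # limit of quantitation (LOQ = 10/3 * LOD)

print(f"slope = {slope:.4f} signal units per ng/mL")
print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```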

Keywords: avidin-biotin, diffractive optics technology, immunosensor, NS1

Procedia PDF Downloads 322
3305 Optimization of Surface Coating on Magnetic Nanoparticles for Biomedical Applications

Authors: Xiao-Li Liu, Ling-Yun Zhao, Xing-Jie Liang, Hai-Ming Fan

Abstract:

Owing to their unique properties, magnetic nanoparticles have been used as diagnostic and therapeutic agents in biomedical applications. Highly monodispersed magnetic nanoparticles with controlled particle size and surface coating were successfully synthesized as a model system to investigate the effect of surface coating on T2 relaxivity and on the specific absorption rate (SAR) under an alternating magnetic field. In particular, by using mPEG-g-PEI to solubilize oleic-acid-capped 6 nm magnetic nanoparticles, the T2 relaxivity could be increased by up to 4-fold compared to PEG-coated nanoparticles. Moreover, it largely enhances cell uptake, with a T2 relaxivity of 92.6 mM⁻¹s⁻¹ for in vitro cell MRI. As for hyperthermia agents, the SAR value increases as the thickness of the PEG surface coating decreases. By careful optimization of surface coating and particle size, a significant increase in SAR (up to 74%) could be achieved with minimal variation in saturation magnetization (<5%). The 19 nm magnetic nanoparticles with 2000 Da PEG exhibited the highest SAR among the samples, 930 W·g⁻¹, which can be maintained in various simulated physiological conditions. This systematic work provides a general strategy for optimizing the surface coating of magnetic cores for high-performance MRI contrast and hyperthermia agents.
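
SAR is commonly estimated calorimetrically from the initial heating slope of the suspension, SAR = c_p·(dT/dt)/c_Fe. A minimal sketch of that estimate, with illustrative values (the abstract does not state the measurement protocol used):

```python
import numpy as np

# Hypothetical heating curve of a nanoparticle suspension under an
# alternating magnetic field; all values are illustrative only.
time_s = np.array([0, 10, 20, 30, 40, 50, 60])                   # s
temp_c = np.array([25.0, 25.9, 26.8, 27.6, 28.5, 29.3, 30.1])    # degC

c_p = 4185.0      # specific heat of a dilute aqueous suspension, J/(kg*K)
conc_fe = 1.0e-3  # iron mass fraction, kg Fe per kg sample (1 mg/g, assumed)

# Initial-slope method: fit dT/dt over the first minute of heating
slope = np.polyfit(time_s, temp_c, 1)[0]  # K/s

sar = c_p * slope / conc_fe  # W per kg of Fe
print(f"dT/dt = {slope:.4f} K/s  ->  SAR = {sar:.0f} W/kg Fe"
      f" = {sar / 1000:.0f} W/g Fe")
```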

Keywords: magnetic nanoparticles, magnetic hyperthermia, magnetic resonance imaging, surface modification

Procedia PDF Downloads 506
3304 Improvement of Brain Tumors Detection Using Markers and Boundaries Transform

Authors: Yousif Mohamed Y. Abdallah, Mommen A. Alkhir, Amel S. Algaddal

Abstract:

This was an experimental study of brain segmentation in MRI images using edge detection and morphological filters. Each brain MRI film was scanned with a digitizer scanner and then processed in an image processing program (MATLAB), in which the segmentation was studied. The scanned image was saved in TIFF file format to preserve image quality. Brain tissue can be easily detected in an MRI image if the object has sufficient contrast from the background. We used edge detection and basic morphology tools to detect the brain. The segmentation steps using detection and morphology filters were: reading the image, detecting the entire brain, dilating the image, filling interior gaps inside the image, removing objects connected to the border, and smoothing the object (brain). The results showed that an alternative method for displaying the segmented object is to place an outline around the segmented brain. These filter-based approaches can help remove unwanted background information and increase the diagnostic information of brain MRI.
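
A Python analogue of this MATLAB pipeline, using scikit-image; the file name and parameter values are hypothetical and would need tuning for real films:

```python
from scipy import ndimage
from skimage import io, feature, morphology
from skimage.segmentation import clear_border

# Hypothetical input file; parameters (sigma, disk radii, min_size) are
# illustrative and not taken from the study.
img = io.imread("brain_slice.tif", as_gray=True)

edges = feature.canny(img, sigma=2.0)                            # detect edges
dilated = morphology.binary_dilation(edges, morphology.disk(3))  # close contour
filled = ndimage.binary_fill_holes(dilated)                      # fill gaps
no_border = clear_border(filled)                                 # drop border objects
cleaned = morphology.remove_small_objects(no_border, min_size=500)
brain_mask = morphology.binary_opening(cleaned, morphology.disk(5))  # smooth

# Alternate display: an outline around the segmented brain.
outline = brain_mask ^ morphology.binary_erosion(brain_mask)
```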

Keywords: improvement, brain, matlab, markers, boundaries

Procedia PDF Downloads 511
3303 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are performed on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software that enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. In addition, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with the tests obtained at commissioning, otherwise the machine requires service; the result of the image noise test must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and patient table stability under longitudinal transfer of a loaded table must not show more than 2 mm of vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator’s functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user’s decision whether additional testing recommended by international organizations will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for treatment planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
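
The stated tolerances lend themselves to a simple automated constancy check. A minimal sketch, with baseline and measured HU values chosen purely for illustration:

```python
# Tolerances are the ones quoted in the abstract; baseline and measured HU
# values below are illustrative, not clinical data.
BASELINES_HU = {"water": 0.0, "air": -1000.0, "bone": 930.0}  # at commissioning
TOLERANCE_HU = 5.0    # CT number accuracy: +/- 5 HU of commissioning value
UNIFORMITY_HU = 10.0  # field uniformity: +/- 10 HU between ROIs

def check_ct_numbers(measured):
    """Return a pass/fail report for each material insert."""
    report = {}
    for material, baseline in BASELINES_HU.items():
        deviation = measured[material] - baseline
        status = "PASS" if abs(deviation) <= TOLERANCE_HU else "FAIL"
        report[material] = (status, deviation)
    return report

def check_uniformity(roi_means_hu):
    """Field uniformity: max spread between ROI means within +/- 10 HU."""
    spread = max(roi_means_hu) - min(roi_means_hu)
    return ("PASS" if spread <= UNIFORMITY_HU else "FAIL", spread)

today = {"water": 1.8, "air": -998.2, "bone": 936.4}
print(check_ct_numbers(today))            # bone deviates by +6.4 HU -> FAIL
print(check_uniformity([0.5, -1.2, 2.1, 1.0, -0.4]))  # spread 3.3 HU -> PASS
```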

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 527
3302 Based on MR Spectroscopy, Metabolite Ratio Analysis of MRI Images for Metastatic Lesion

Authors: Hossain A, Hossain S.

Abstract:

Introduction: In a small cohort, we sought to assess the ability of magnetic resonance spectroscopy (MRS) to predict the presence of metastatic lesions. Method: Patients with neuroepithelial tumors were enrolled at Popular Diagnostic Centre Limited. 1H CSI MRS of the brain allows us to detect changes in the concentration of specific metabolites caused by metastatic lesions. Among these metabolites are N-acetyl-aspartate (NAA), creatine (Cr), and choline (Cho). For Cho, NAA, Cr, and Cr₂, the metabolite ratios were calculated using the division method. Results: The NAA values were 0.63 and 5.65 for tumor cells; for normal cells 1 they were 1.84 and 10.6, and for normal cells 2 they were 1.86 and 5.66. Cho levels were as low as 0.8 and 10.53 in the tumor cells, compared to 1.12 and 2.7 in normal cells 1 and 1.24 and 6.36 in normal cells 2. Cho/Cr₂ barely distinguished itself from the other ratios in terms of significance. For tumor cells, the ratios of Cho/NAA, Cho/Cr₂, NAA/Cho, and NAA/Cr₂ were significant. For normal cells 1, the Cho/NAA, Cho/Cr, NAA/Cho, and NAA/Cr ratios were significant. Conclusion: The clinical result can be improved by using 1H-MRSI to guide the extent of resection for metastatic lesions. Given that it is non-invasive and does not present any difficulties during the procedure, MRS has been shown to predict the detection of metastatic lesions.
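
The division method amounts to element-wise ratios of the peak integrals. A minimal sketch using the NAA and Cho values quoted above; the Cr and Cr₂ integrals are assumed placeholders, not the study’s data:

```python
# Element-wise "division method" for metabolite ratios.
def metabolite_ratios(naa, cho, cr, cr2):
    return {
        "Cho/NAA": cho / naa, "Cho/Cr": cho / cr, "Cho/Cr2": cho / cr2,
        "NAA/Cho": naa / cho, "NAA/Cr": naa / cr, "NAA/Cr2": naa / cr2,
    }

# NAA and Cho from the abstract; Cr and Cr2 values are assumed placeholders.
tumor = metabolite_ratios(naa=0.63, cho=0.80, cr=0.95, cr2=0.40)
normal_1 = metabolite_ratios(naa=1.84, cho=1.12, cr=1.10, cr2=0.55)

for name in tumor:
    print(f"{name}: tumor = {tumor[name]:.2f}, normal 1 = {normal_1[name]:.2f}")
```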

Keywords: metabolite ratio, MRI images, metastatic lesion, MR spectroscopy, N-acetyl-aspartate

Procedia PDF Downloads 91
3301 Longitudinal Analysis of Internet Speed Data in the Gulf Cooperation Council Region

Authors: Musab Isah

Abstract:

This paper presents a longitudinal analysis of Internet speed data in the Gulf Cooperation Council (GCC) region, focusing on the most populous cities of each of the six countries – Riyadh, Saudi Arabia; Dubai, UAE; Kuwait City, Kuwait; Doha, Qatar; Manama, Bahrain; and Muscat, Oman. The study utilizes data collected from the Measurement Lab (M-Lab) infrastructure over a five-year period from January 1, 2019, to December 31, 2023. The analysis includes downstream and upstream throughput data for the cities, covering significant events such as the launch of 5G networks in 2019, COVID-19-induced lockdowns in 2020 and 2021, and the subsequent recovery period and return to normalcy. The results showcase substantial increases in Internet speeds across the cities, highlighting improvements in both download and upload throughput over the years. All the GCC countries have achieved above-average Internet speeds that can conveniently support various online activities and applications with excellent user experience.
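
A minimal sketch of the monthly longitudinal aggregation, assuming the M-Lab NDT test results have been exported to a CSV with hypothetical columns city, test_date, download_mbps, and upload_mbps; medians are used because speed-test distributions are heavy-tailed:

```python
import pandas as pd

# Hypothetical export of M-Lab NDT tests for the six GCC cities.
df = pd.read_csv("gcc_ndt_tests.csv", parse_dates=["test_date"])

cities = ["Riyadh", "Dubai", "Kuwait City", "Doha", "Manama", "Muscat"]
df = df[df["city"].isin(cities)]
df = df[(df["test_date"] >= "2019-01-01") & (df["test_date"] <= "2023-12-31")]

# Median throughput per city per month over the five-year study window.
monthly = (df.set_index("test_date")
             .groupby("city")[["download_mbps", "upload_mbps"]]
             .resample("M").median())

print(monthly.loc["Riyadh"].head())
```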

Keywords: internet data science, internet performance measurement, throughput analysis, internet speed, measurement lab, network diagnostic tool

Procedia PDF Downloads 55
3300 Microarrays: Wide Clinical Utilities and Advances in Healthcare

Authors: Salma M. Wakil

Abstract:

Advances in the field of genetics have made it possible to detect a large number of inherited disorders at the molecular level and have driven the development of innovative technologies. These innovations have led to gene sequencing, prenatal mutation detection, pre-implantation genetic diagnosis, population-based carrier screening, and genome-wide analyses using microarrays. Microarrays are widely used in clinical and diagnostic workups of genetic anomalies on a massive scale, with the advent of CytoScan molecular karyotyping as a clinical utility for detecting chromosomal aberrations with high coverage across the entire human genome. Unlike a regular karyotype, which relies on the microscopic inspection of chromosomes, molecular karyotyping with CytoScan constructs virtual chromosomes based on copy number analysis of DNA, which improves resolution by 100-fold. We have been investigating a large number of patients with developmental delay and intellectual disability on this platform to establish microdeletion syndromes and have detected a number of novel CNVs of clinical relevance in the Arabian population.

Keywords: microarrays, molecular karyotyping, developmental delay, genetics

Procedia PDF Downloads 451
3299 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to perform localization and environment map construction simultaneously; it has a wide range of applications in autonomous driving, virtual reality, and related fields. Current VSLAM research can maintain high accuracy in static environments. In dynamic environments, however, the movement of objects in the scene reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal with this problem effectively. We propose a dynamic region removal scheme based on a semantic segmentation neural network and geometric constraints. First, the semantic segmentation network is used to extract the prior active motion region, prior static region, and prior passive motion region in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow divides the environment into a static region and a dynamic region, so the dynamic object region is eliminated. Finally, only the static region is used by the tracking thread. Our work is based on ORB-SLAM3, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70%–98.5% in highly dynamic environments.
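
The final filtering step reduces to masking out features that fall in the detected dynamic region before they reach the tracking thread. A schematic sketch, with illustrative data structures rather than the paper’s actual implementation:

```python
import numpy as np

# Assumes a segmentation network has produced a per-pixel mask of motion
# regions (1 = prior active/passive motion, 0 = prior static). Only features
# in the static region are passed to the tracking thread.
def filter_static_features(keypoints_xy, dynamic_mask):
    """keypoints_xy: (N, 2) array of (x, y) pixel coords of ORB features.
    dynamic_mask: (H, W) uint8 array from the segmentation network."""
    xs = keypoints_xy[:, 0].astype(int)
    ys = keypoints_xy[:, 1].astype(int)
    is_static = dynamic_mask[ys, xs] == 0
    return keypoints_xy[is_static]

# Toy example: a 480x640 frame whose right half is a prior motion region.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[:, 320:] = 1
kps = np.array([[100.0, 50.0], [400.0, 200.0], [250.0, 300.0]])
print(filter_static_features(kps, mask))  # keeps the two left-half features
```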

Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM

Procedia PDF Downloads 113
3298 Collaboration During Planning and Reviewing in Writing: Effects on L2 Writing

Authors: Amal Sellami, Ahlem Ammar

Abstract:

Writing is acknowledged to be a cognitively demanding and complex task. Indeed, the writing process is composed of three iterative sub-processes, namely planning, translating (writing), and reviewing. Not only do second or foreign language learners need to write according to this process, but they also need to respect the norms and rules of language and writing in the text to be produced. Accordingly, researchers have suggested approaching writing as a collaborative task in order to alleviate its complexity. Consequently, collaboration has been implemented during the whole writing process or only during planning or reviewing. Researchers report that implementing collaboration during the whole process can be demanding in terms of time compared to individual writing tasks, so teachers may avoid it because of time constraints. For this reason, it might be pedagogically more realistic to limit collaboration to one of the writing sub-processes (i.e., planning or reviewing). However, previous research implementing collaboration in planning or reviewing is limited and fails to explore the effects of these conditions on the written text. Consequently, the present study examines the effects of collaboration in planning and collaboration in reviewing on the written text. To reach this objective, quantitative as well as qualitative methods were deployed to examine the written texts holistically and in terms of fluency, complexity, and accuracy. Participants of the study include 4 pairs in each group (n=8). They participated in two experimental conditions: (1) collaborative planning followed by individual writing and individual reviewing, and (2) individual planning followed by individual writing and collaborative reviewing. The comparative findings indicate that while collaborative planning resulted in better overall text quality (specifically better content and organization ratings), better fluency, better complexity, and fewer lexical errors, collaborative reviewing produced better accuracy and fewer syntactical and mechanical errors. The discussion of the findings suggests the need for more comparative research to further explore the effects of collaboration in planning or in reviewing. Pedagogical implications include advising teachers to choose between implementing collaboration in planning or in reviewing depending on their students’ needs and what they need to improve.

Keywords: collaboration, writing, collaborative planning, collaborative reviewing

Procedia PDF Downloads 94
3297 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on truncated singular value decomposition as the regularization method. This method was adapted to retrieve the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit of +/- 0.03 is achieved. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
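
A minimal NumPy sketch of TSVD regularization on a synthetic ill-posed problem; the kernel and data are illustrative, and the paper’s hybrid method uses a triple of regularization parameters rather than the single truncation index shown here:

```python
import numpy as np

# Synthetic ill-posed linear model g = A f + noise, where f stands in for a
# discretized particle size distribution; A is a smoothing kernel.
rng = np.random.default_rng(0)
n = 40
A = np.array([[np.exp(-0.5 * ((i - j) / 4.0) ** 2) for j in range(n)]
              for i in range(n)])
f_true = np.exp(-0.5 * ((np.arange(n) - 20) / 5.0) ** 2)  # mono-modal "PSD"
g = A @ f_true + 0.01 * rng.standard_normal(n)            # noisy "optical data"

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Keep only the k largest singular values; k is the regularization
    parameter that suppresses noise amplification by small singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])

for k in (5, 10, 40):
    err = np.linalg.norm(tsvd_solve(k) - f_true) / np.linalg.norm(f_true)
    print(f"k = {k:2d}: relative error = {err:.3f}")  # k = n: no regularization
```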

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 339
3296 Evaluation of the Diagnostic Potential of IL-2 as Biomarker for the Discrimination of Active and Latent Tuberculosis

Authors: Shima Mahmoudi, Setareh Mamishi, Babak Pourakbari, Majid Marjani

Abstract:

In recent years, the potential role of distinct T-cell subsets as biomarkers of active tuberculosis (TB) and/or latent tuberculosis infection (LTBI) has been studied. The aim of this study was to investigate the potential role of interleukin-2 (IL-2) in whole blood stimulated with M. tuberculosis-specific antigens in the QuantiFERON-TB Gold In-Tube (QFT-G-IT) assay for the discrimination of active and latent tuberculosis. After 72 h of stimulation with antigens from the QFT-G-IT assay, IL-2 secretion was quantitated in supernatants by ELISA (Mabtech AB, Sweden). Observing the level of IL-2 released after 72 h of incubation, we found that IL-2 levels were significantly higher in the LTBI group than in patients with active TB infection or the control group (P value = 0.019, Kruskal–Wallis test). The discrimination performance (assessed by the area under the ROC curve) between LTBI and patients with active TB was 0.816 (95% CI: 0.72–0.97). Maximum discrimination was reached at a cut-off of 13.9 pg/mL for IL-2 following stimulation, with 82% sensitivity and 86% specificity. In conclusion, although cytokine analysis has greatly contributed to the understanding of TB pathogenesis, data on cytokine profiles that might distinguish progression from latency of TB infection are scarce and even controversial. Our data indicate that the concomitant evaluation of IFN-γ and IL-2 could be instrumental in discriminating between active and latent TB infection.
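
One common way to derive such a cut-off is Youden’s J statistic on the ROC curve; the abstract does not state which criterion was used, so the sketch below, on synthetic IL-2 data, is only an illustration:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic IL-2 concentrations (pg/mL); labels: 1 = LTBI, 0 = active TB.
rng = np.random.default_rng(1)
il2_ltbi = rng.lognormal(mean=3.2, sigma=0.6, size=30)    # higher IL-2
il2_active = rng.lognormal(mean=2.2, sigma=0.6, size=30)  # lower IL-2
y = np.r_[np.ones(30), np.zeros(30)]
x = np.r_[il2_ltbi, il2_active]

print("AUC =", round(roc_auc_score(y, x), 3))

# Youden's J picks the cut-off maximizing sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(y, x)
best = np.argmax(tpr - fpr)
print(f"cut-off = {thresholds[best]:.1f} pg/mL, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```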

Keywords: interleukin-2, discrimination, active TB, latent TB

Procedia PDF Downloads 405
3295 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure, and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process included the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands to be used in the extraction of forest cover classes using ENVI 4.8 and GIS software. The diameter at breast height (DBH), above-ground biomass (AGB), and carbon stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. In the extracted canopy heights, 80% of the trees range from 12 m to 17 m. The CS of the three forest covers based on the AGB was 20,819.59 kg/20x20 m for closed broadleaf, 8,609.82 kg/20x20 m for broadleaf plantation, and 15,545.57 kg/20x20 m for open broadleaf. The average tree count for the three forest covers was 413 trees/ha. As such, the forest of San Manuel has high percent forest cover and high CS.
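
Two of these derivatives reduce to simple raster arithmetic: a CHM is the difference between a surface model and the DTM, and canopy cover is the share of cells whose canopy height exceeds a threshold. A minimal sketch on synthetic grids (the 2 m cover threshold is an assumption, not the study’s):

```python
import numpy as np

# Synthetic DSM/DTM grids standing in for the LiDAR-derived rasters.
rng = np.random.default_rng(2)
dtm = 100.0 + rng.normal(0, 0.5, size=(500, 500))       # bare-earth elevation
canopy = np.clip(rng.normal(14, 3, size=(500, 500)), 0, None)
dsm = dtm + np.where(rng.random((500, 500)) < 0.73, canopy, 0.0)

chm = dsm - dtm                   # canopy height model
cover = (chm > 2.0).mean() * 100  # % of cells with canopy above 2 m (assumed)

print(f"Forest cover: {cover:.1f} %")
print(f"Share of canopy between 12 m and 17 m: "
      f"{((chm >= 12) & (chm <= 17))[chm > 2].mean() * 100:.1f} %")
```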

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 381
3294 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user’s question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web; the value of K can be calibrated as a trade-off between time and accuracy. This is followed by a passage-ranking process, using the MS MARCO dataset trained on 500K queries, to extract the most relevant text passage and shorten the lengthy documents. A QA system then extracts answers from the shortened documents based on the query and returns the top 3 answers. In the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect according to automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Any such dataset proves to be inadequate for questions that have time-varying answers. For illustration, suppose the query is “Where will the next Olympics be held?” The gold answer for this query as given in the GNQ dataset is “Tokyo”. Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, that answer was correct at the time. But if the same question is asked in 2022, the answer is “Paris, 2024”. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test set comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
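
One way to realize such a timestamp-aware metric is to store gold answers with validity intervals and score predictions against the answer valid at evaluation time. A schematic sketch; the data structure and matching rule are illustrative assumptions, not the paper’s exact metric:

```python
from datetime import date

# Gold answers stored per validity interval (dates are illustrative).
GOLD = {
    "where will the next olympics be held?": [
        ((date(2016, 8, 22), date(2021, 8, 8)), "tokyo"),
        ((date(2021, 8, 9), date(2024, 8, 11)), "paris"),
    ],
}

def evaluate(question, top_n_answers, when):
    """Score 1 if any of the top-n predictions matches the gold answer
    that is valid at time `when`, else 0."""
    for (start, end), answer in GOLD[question.lower()]:
        if start <= when <= end:
            return int(any(answer in p.lower() for p in top_n_answers))
    return 0

preds = ["Paris, 2024", "Los Angeles", "Tokyo"]
print(evaluate("Where will the next Olympics be held?",
               preds, date(2022, 6, 1)))  # -> 1 ("paris" is valid in 2022)
```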

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 99
3293 Choosing an Optimal Epsilon for Differentially Private Arrhythmia Analysis

Authors: Arin Ghazarian, Cyril Rakovski

Abstract:

Differential privacy has become the leading technique for protecting the privacy of individuals in a database while allowing useful analyses to be performed and their results to be shared. It puts a guarantee on the amount of privacy loss in the worst-case scenario. Differential privacy is not a toggle between full privacy and zero privacy: it controls the trade-off between the accuracy of the results and the privacy loss using a single key parameter called epsilon (ε).
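
A minimal illustration of that trade-off, using the Laplace mechanism for a counting query on a synthetic database (counting queries have sensitivity 1; smaller ε means stronger privacy and noisier results):

```python
import numpy as np

rng = np.random.default_rng(3)

def dp_count(true_count, epsilon, sensitivity=1.0):
    """One record changes a count by at most 1, so sensitivity is 1.
    The Laplace noise scale is sensitivity / epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count = 128  # e.g., patients with a given arrhythmia (synthetic)
for eps in (0.01, 0.1, 1.0, 10.0):
    noisy = [dp_count(true_count, eps) for _ in range(1000)]
    mae = np.mean([abs(v - true_count) for v in noisy])
    print(f"epsilon = {eps:5.2f}: mean absolute error ~ {mae:6.1f}")
```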

Keywords: arrhythmia, cardiology, differential privacy, ECG, epsilon, medi-cal data, privacy preserving analytics, statistical databases

Procedia PDF Downloads 144
3292 A Hybrid Model of Structural Equation Modelling-Artificial Neural Networks: Prediction of Influential Factors on Eating Behaviors

Authors: Maryam Kheirollahpour, Mahmoud Danaee, Amir Faisal Merican, Asma Ahmad Shariff

Abstract:

Background: The presence of nonlinearity among the risk factors of eating behavior causes bias in prediction models. The importance of accurately estimating eating-behavior risk factors for the primary prevention of obesity has been established. Objective: The aim of this study was to explore the potential of a hybrid model of structural equation modeling (SEM) and artificial neural networks (ANN) to predict eating behaviors. Methods: Partial Least Squares SEM (PLS-SEM) and a hybrid model (SEM-Artificial Neural Networks, SEM-ANN) were applied to evaluate the factors affecting eating behavior patterns among university students; 340 university students participated in this study. The PLS-SEM analysis was used to check the effect of the emotional eating scale (EES), body shape concern (BSC), and body appreciation scale (BAS) on different categories of eating behavior patterns (EBP). The hybrid model was then built using a multilayer perceptron (MLP) with a feedforward network topology, and the Levenberg-Marquardt algorithm, a supervised learning method, was applied for MLP training. The tangent sigmoid function was used for the input layer, while the linear function was applied for the output layer. The coefficient of determination (R²) and mean square error (MSE) were calculated. Results: The hybrid model proved superior to the PLS-SEM method. Using the hybrid model, the optimal network was an MLP with a 3-17-8 topology; the R² of the model increased by 27%, while the MSE decreased by 9.6%. Moreover, it was found which of these factors significantly affected healthy and unhealthy eating behavior patterns; the p-value was less than 0.01 for most of the paths. Conclusion/Importance: Thus, a hybrid approach can be suggested as a significant methodological contribution from a statistical standpoint, and it can be implemented as software able to predict such models with high accuracy.
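
A rough sketch of the ANN stage on synthetic data; scikit-learn’s MLP trains with L-BFGS or Adam rather than Levenberg-Marquardt and supports a single output, so the reported 3-17-8 topology is only approximated here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Three synthetic inputs standing in for the SEM latent scores
# (EES, BSC, BAS) predicting an eating-behavior score.
rng = np.random.default_rng(7)
n = 340  # study sample size
X = rng.normal(size=(n, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 - 0.4 * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One hidden layer of 17 tanh units mirrors the reported hidden width.
mlp = MLPRegressor(hidden_layer_sizes=(17,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.3f}, "
      f"MSE = {mean_squared_error(y_te, pred):.4f}")
```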

Keywords: hybrid model, structural equation modeling, artificial neural networks, eating behavior patterns

Procedia PDF Downloads 147
3291 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
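
A schematic sketch of rule-based sentence flagging in the spirit of the SQL/NLP pipeline described above; the patterns and example sentences are illustrative assumptions, not the study’s rules:

```python
import re

# Illustrative patterns for nodule mentions and concerning features.
NODULE = re.compile(r"\b(pulmonary |lung )?nodules?\b", re.IGNORECASE)
CONCERN = re.compile(r"\b(enlarg(ed|ing)|increased in size|spiculated|"
                     r"suspicious|new)\b", re.IGNORECASE)

def classify_sentence(sentence):
    if not NODULE.search(sentence):
        return "no nodule mention"
    return "concerning nodule" if CONCERN.search(sentence) else "nodule, stable"

report = [
    "The heart size is normal.",
    "A 6 mm pulmonary nodule is again seen in the right upper lobe, unchanged.",
    "The left lower lobe nodule has increased in size since the prior CT.",
]
for s in report:
    print(f"{classify_sentence(s):>18} | {s}")
```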

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 85
3290 The Importance of Development in Laboratory Diagnosis at the Intersection

Authors: Agus Sahri, Cahya Putra Dinata, Faishal Andhi Rokhman

Abstract:

An intersection is a critical area of a highway where two or more roads meet, creating conflict points and congestion. Conflicts that occur at an intersection include diverging, merging, weaving, and crossing. To deal with these conflicts, an intersection control system is needed; an intersection may be either signalized or unsignalized, and the control system can affect intersection performance. In Indonesia, there are still many intersections with poor performance. Analysis of the parameters used to measure the performance of an intersection in Indonesia is guided by the 1997 Indonesian Road Capacity Manual. This study therefore aims to develop laboratory diagnostics at intersections to analyze the parameters that can affect intersection performance. The research method used is research and development. The laboratory diagnosis includes anamnesis, differential diagnosis, inspection, diagnosis, prognosis, specimens, analysis, and sample data analysis. It is expected that this research will encourage the development and application of laboratory diagnostics at intersections in Indonesia so that intersections can function optimally.

Keywords: intersection, the laboratory diagnostic, control systems, Indonesia

Procedia PDF Downloads 180
3289 Classification Using Worldview-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China

Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding

Abstract:

The giant panda (Ailuropoda melanoleuca) is an endangered species that lives mainly in central China, where bamboos act as the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboos therefore becomes important for identifying giant panda habitat. There have been ongoing studies on mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high-resolution commercial satellite with eight multi-spectral (MS) bands, and recent studies have demonstrated that WV-2 imagery has high potential for the classification of tree species. Advanced classification techniques are important for utilizing high spatial resolution imagery. It is generally agreed that object-based image analysis is a more desirable method than pixel-based analysis for processing high spatial resolution remotely sensed data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers, and it has been suggested that contextual classifiers can achieve greater accuracy than non-contextual ones; thus, spatial correlation can be incorporated into classifiers to improve classification results. The study area is located at Wuyipeng in Wolong, Sichuan Province. The complex environment makes information extraction difficult, since bamboos are sparsely distributed, mixed with brush, and covered by other trees. Extensive fieldwork at Wuyipeng was carried out twice. The first campaign, on 11th June, 2014, aimed at sampling feature locations for geometric correction and collecting training samples for classification; the second, on 11th September, 2014, served to test the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification; the reflectance analysis also provided information for expanding sample points when only a few were known. Then, a spatially weighted object-based k-nearest neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM) method, and the Classification and Regression Tree (CART). Through field validation, it was shown that the classification result obtained using the spatially weighted k-NN method has the highest overall classification accuracy (77.61%) and Kappa coefficient (0.729); the producer’s accuracy and user’s accuracy reach 81.25% and 95.12% for the bamboo class, respectively, also higher than for the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera so that canopy density could be estimated. It was found that it is difficult to identify bamboo in areas with a large canopy density (over 0.70); it is possible to extract bamboo in areas with a medium canopy density (from 0.2 to 0.7) and in sparse forest (canopy density less than 0.2). In summary, this study explores the ability of WV-2 imagery for bamboo extraction in a mountainous region of Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing giant panda habitat.
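
A baseline sketch of k-NN classification of per-object spectral features; the paper’s spatially weighted variant additionally weights neighbours with a geostatistical model of spatial correlation, which is not reproduced here. Features and labels below are synthetic:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic per-object features over 8 bands (mimicking WV-2 MS bands)
# for 7 land-cover classes (bamboo, conifer, broadleaf, ...).
rng = np.random.default_rng(4)
n_classes, n_per = 7, 60
X = np.vstack([rng.normal(c, 1.0, size=(n_per, 8)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
knn.fit(X_tr, y_tr)
pred = knn.predict(X_te)

print("overall accuracy:", round(accuracy_score(y_te, pred), 3))
print("Kappa coefficient:", round(cohen_kappa_score(y_te, pred), 3))
```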

Keywords: bamboo mapping, classification, geostatistics, k-NN, worldview-2

Procedia PDF Downloads 309
3288 Application of Envelope Spectrum Analysis and Spectral Kurtosis to Diagnose Debris Fault in Bearing Using Acoustic Signals

Authors: Henry Ogbemudia Omoregbee, Mabel Usunobun Olanipekun

Abstract:

Acoustic signals from rolling element bearings running at low speed and high radial load have low amplitudes, particularly in the case of debris faults, whose signals necessitate high-sensitivity analysis. As the rollers in the bearing roll over debris trapped in the grease used to lubricate the bearings, the envelope signal created by amplitude demodulation carries additional diagnostic information that is not available through ordinary spectrum analysis of the raw signal. The kurtosis values obtained for three different scenarios (debris-induced fault, outer race crack, and a normal good bearing) could not by themselves be used to easily identify whether the used bearings were defective or not. It was established in this work that envelope spectrum analysis detected the fault signature and its harmonics induced in the debris bearings when the raw signal was bandpass filtered in the frequency band specified by the kurtogram and spectral kurtosis.
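
A minimal sketch of the envelope spectrum workflow: bandpass filter the raw signal around the band indicated by the kurtogram, demodulate with the Hilbert transform, and inspect the envelope spectrum for the fault frequency and its harmonics. The signal, the band edges, and the 37 Hz fault rate are synthetic illustrations:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 20_000                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 4000 * t)        # structural resonance
impacts = (np.sin(2 * np.pi * 37 * t) > 0.99).astype(float)  # debris impacts
signal = impacts * carrier \
    + 0.3 * np.random.default_rng(5).standard_normal(t.size)

# Bandpass around the band a kurtogram would flag (3-5 kHz, assumed here).
b, a = butter(4, [3000, 5000], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)

envelope = np.abs(hilbert(filtered))          # amplitude demodulation
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

peak = freqs[np.argmax(env_spec[freqs < 200])]
print(f"dominant envelope frequency ~ {peak:.1f} Hz (fault signature)")
```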

Keywords: rolling bearings, rolling element bearing noise, bandpass filtering, harmonics, envelope spectrum analysis, spectral kurtosis

Procedia PDF Downloads 79
3287 Applicability of Linearized Model of Synchronous Generator for Power System Stability Analysis

Authors: J. Ritonja, B. Grcar

Abstract:

For synchronous generator simulation and analysis, and for power system stabilizer design and synthesis, a mathematical model of the synchronous generator is needed. The model has to describe the dynamics of the oscillations accurately while at the same time being transparent enough for analysis and sufficiently simple for control system design. To study the oscillations of the synchronous generator against the rest of the power system, a model of the synchronous machine connected to an infinite bus through a transmission line having resistance and inductance is needed. In this paper, the linearized reduced-order dynamic model of the synchronous generator connected to an infinite bus is presented and analysed in detail. This model accurately describes the dynamics of the synchronous generator only in a small vicinity of an equilibrium state; with digression from the selected equilibrium point, the accuracy of the model decreases considerably. The equations and the parameter determination of the linearized reduced-order mathematical model are explained and summarized, and they represent a useful starting point for work in the areas of synchronous generator dynamic behaviour analysis and control system design and synthesis. The main contribution of this paper is a detailed analysis of the accuracy of the linearized reduced-order dynamic model over the entire operating range of the synchronous generator. The borders of the areas where the linearized reduced-order mathematical model represents an accurate description of the synchronous generator’s dynamics were determined by systematic numerical analysis, and a thorough eigenvalue analysis of the linearized models over the entire operating range was performed. The parameters of the linearized reduced-order dynamic model of a laboratory salient-pole synchronous generator were determined and used for the analysis. The theoretical conclusions were confirmed by the agreement of experimental and simulation results.
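
A minimal sketch of such an eigenvalue analysis for the classical small-signal swing model of a generator against an infinite bus; the parameter values are illustrative, not the laboratory machine’s:

```python
import numpy as np

# Linearized swing model, state x = [delta deviation, speed deviation]:
#   d(ddelta)/dt = omega_s * domega
#   d(domega)/dt = (-Ks * ddelta - Kd * domega) / (2 H)
H = 3.5                    # inertia constant, s (assumed)
Ks = 1.2                   # synchronizing torque coefficient, pu/rad (assumed)
Kd = 1.5                   # damping torque coefficient, pu (assumed)
omega_s = 2 * np.pi * 50   # synchronous speed, rad/s

A = np.array([[0.0,            omega_s],
              [-Ks / (2 * H), -Kd / (2 * H)]])

eigvals = np.linalg.eigvals(A)
for lam in eigvals:
    print(f"lambda = {lam:.3f}")

# Oscillation frequency and damping ratio of the electromechanical mode;
# eigenvalues crossing into the right half-plane would indicate instability.
lam = eigvals[np.argmax(eigvals.imag)]
print(f"frequency = {lam.imag / (2 * np.pi):.2f} Hz, "
      f"damping ratio = {-lam.real / abs(lam):.3f}")
```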

Keywords: eigenvalue analysis, mathematical model, power system stability, synchronous generator

Procedia PDF Downloads 240
3286 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes, and an accurate estimation model contributes to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still studying it in order to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, with 80% for training and 20% for testing. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (a grid is used to tune and select optimum parameters) and 5-fold cross-validation are used to obtain the final trained model, which is then evaluated on the testing set. The experimental work was applied to an agile story-point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two, LASSO regression achieved the better predictive performance, with PRED(8%) and PRED(25%) values of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and gives higher estimation performance than existing results in the literature.
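
A minimal sketch of the two-phase pipeline on synthetic data standing in for the 21-project story-point dataset:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the story-point dataset (21 projects, 6 features).
X, y = make_regression(n_samples=21, n_features=6, noise=5.0, random_state=0)
X = MinMaxScaler().fit_transform(X)  # normalization step

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model, grid in [
    ("LASSO", Lasso(max_iter=10_000), {"alpha": [0.01, 0.1, 1.0, 10.0]}),
    ("ElasticNet", ElasticNet(max_iter=10_000),
     {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
]:
    # Phase 1: default parameters, evaluated on the test set.
    default_mse = np.mean((model.fit(X_tr, y_tr).predict(X_te) - y_te) ** 2)
    # Phase 2: grid search with 5-fold cross-validation.
    search = GridSearchCV(model, grid, cv=5,
                          scoring="neg_mean_squared_error").fit(X_tr, y_tr)
    tuned_mse = np.mean((search.predict(X_te) - y_te) ** 2)
    print(f"{name}: default MSE = {default_mse:.1f}, "
          f"tuned MSE = {tuned_mse:.1f}, best = {search.best_params_}")
```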

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 61
3285 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is not new, yet proper classification of this textual information in a given context remains difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media text of a given context, between hate speech and inverted compliments, with a high level of accuracy, by assessing different artificial intelligence techniques. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore, and whittled the number of studies down to 31. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of performance accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed single deep learning techniques as well as machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and its AI library ecosystem. Based on some of the important findings from this study, we make recommendations for future research.

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 113
3284 Feasibility Study on Developing and Enhancing of Flood Forecasting and Warning Systems in Thailand

Authors: Sitarrine Thongpussawal, Dasarath Jayasuriya, Thanaroj Woraratprasert, Sakawtree Prajamwong

Abstract:

Thailand grapples with recurrent floods causing substantial repercussions on its economy, society, and environment. In 2021, the economic toll of these floods amounted to an estimated 53,282 million baht, primarily impacting the agricultural sector. The existing flood monitoring system in Thailand suffers from inaccuracies and insufficient information, resulting in delayed warnings and ineffective communication to the public. The Office of the National Water Resources (OWNR) is tasked with developing and integrating data and information systems for efficient water resources management, yet faces challenges in monitoring accuracy, forecasting, and timely warnings. This study endeavors to evaluate the viability of enhancing Thailand's Flood Forecasting and Warning (FFW) systems. Additionally, it aims to formulate a comprehensive work package grounded in international best practices to enhance the country's FFW systems. Employing qualitative research methodologies, the study conducted in-depth interviews and focus groups with pertinent agencies. Data analysis involved techniques like note-taking and document analysis. The study substantiates the feasibility of developing and enhancing FFW systems in Thailand. Implementation of international best practices can augment the precision of flood forecasting and warning systems, empowering local agencies and residents in high-risk areas to prepare proactively, thereby minimizing the adverse impact of floods on lives and property. This research underscores that Thailand can feasibly advance its FFW systems by adopting international best practices, enhancing accuracy, and improving preparedness. Consequently, the study enriches the theoretical understanding of flood forecasting and warning systems and furnishes valuable recommendations for their enhancement in Thailand.

Keywords: flooding, forecasting, warning, monitoring, communication, Thailand

Procedia PDF Downloads 57
3283 ADHD: Assessment of Pragmatic Skills in Adults

Authors: Elena Even-Simkin

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is one of the most frequently diagnosed disorders in children, but in many cases, the diagnosis is not made until adulthood. Diagnosing adults with ADHD faces various obstacles due to numerous factors, such as educational or under-resourced familial environments, high intelligence compensating for stress-inducing difficulties, and additional comorbidities. Undiagnosed children and adolescents with ADHD may become undiagnosed adults with ADHD, who miss out on early treatment and may experience significant social and pragmatic difficulties, leading to functional problems that subsequently affect their lifestyle, education, and occupational functioning. The proposed study presents a cost-effective and unique consideration of the pragmatic aspect in adults with ADHD. It provides a systematic and standardized evaluation of the pragmatic abilities of adults with ADHD, based on a comprehensive approach introduced by Arcara & Bambini (2016) for the assessment of pragmatic abilities in neurotypical individuals. This assessment tool can promote the inclusion of pragmatic skills in the cognitive profile used in the diagnostic practice of ADHD; thus, the proposed instrument can help not only identify the pragmatic difficulties of the ADHD population but also advance effective intervention programs that specifically focus on pragmatic skills in the target population.

Keywords: ADHD, adults, assessment, pragmatics

Procedia PDF Downloads 71
3282 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and have an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is clearly key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process; as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy on the testing portion using the KNN classification technique.
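
A minimal NumPy implementation of DTW between one monitored variable of a new batch and a reference batch of different duration; the series are synthetic, and the KMT method extends this idea to multivariate trajectories with iteratively reweighted variables:

```python
import numpy as np

def dtw(x, y):
    """Classic DTW with squared-difference cost; returns the distance and
    the optimal warping path as (i, j) index pairs."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path from (n, m) to (0, 0).
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return D[n, m], path[::-1]

# Synthetic conche motor current: a 495 min reference batch vs. a 620 min
# new batch with the same shape, warped in time.
t_ref = np.linspace(0, 1, 495)
t_new = np.linspace(0, 1, 620)
ref = np.sin(2 * np.pi * t_ref)
new = np.sin(2 * np.pi * t_new ** 1.2)
dist, path = dtw(new, ref)
print(f"DTW distance = {dist:.3f}, aligned length = {len(path)}")
```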

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 163
3281 Mapping Potential Soil Salinization Using Rule Based Object Oriented Image Analysis

Authors: Zermina Q., Wasif Y., Naeem S., Urooj S., Sajid R. A.

Abstract:

Land degradation, a decrease in the quality of land caused by human activities, has become a major global environmental issue; more than half of the world’s drylands are affected by it. The worldwide extent of primary saline soils is approximately 955 M ha, whereas secondary salinization affects approximately 77 M ha, 58% of which is found in irrigated areas. As most vegetation types require fertile soil for their growth and quality production, salinity causes serious problems for the production of these vegetation types and for agricultural demand. This research aims to identify the salt-affected areas in the selected part of the Indus Delta, Sindh province, Pakistan. This mangrove-dominated coastal belt is important to the local community for crop growth. An object-based image analysis approach was applied to Landsat TM imagery of the year 2011, incorporating different mathematical band ratios, thermal radiance, and a salinity index. Accuracy assessment of the developed salinity land cover map was performed using the Erdas Imagine Accuracy Assessment Utility. Rainfall was also considered before acquiring satellite imagery and conducting the field survey, as wet soil can greatly affect the condition of the saline soil of the area; the dry season is considered best for remote sensing-based observation and monitoring of saline soil. The areas were trained with ground truth data with respect to the pH and electrical conductivity of the soil samples. The results obtained from the object-based image analysis of Keti Bunder and Kharo Chann show most of the region under low-salinity soil. Total salt-affected soil was measured to be 46,581.7 ha in Keti Bunder, which represents 57.81% of the total area of 80,566.49 ha: the high-salinity area was about 7,944.68 ha (9.86%), the medium-salinity area about 17,937.26 ha (22.26%), and the low-salinity area about 20,699.77 ha (25.69%). In Kharo Chann, total salt-affected soil was measured to be 52,821.87 ha, which represents 55.87% of the total area of 94,543.54 ha: the high-salinity area was about 5,486.55 ha (5.80%), the medium-salinity area about 13,354.72 ha (14.13%), and the low-salinity area about 33,980.61 ha (35.94%). These results show that the area is low to medium saline in nature. The accuracy of the soil salinity map was found to be 83%, with a Kappa coefficient of 0.77. From this research, it was evident that the area as a whole falls in the low-to-medium-saline category and, being a coastal area, can support mangrove forest. As mangroves are salt-tolerant plants, the area is considered a haven for mangrove plantation, which would ultimately benefit both the local community and the environment. An increase in mangrove forest would control the problem of soil salinity and prevent seawater from intruding further into the coastal area, so mangrove deforestation should be regularly monitored.

Keywords: Indus Delta, object-based image analysis, soil salinity, Thematic Mapper

Procedia PDF Downloads 614
3280 Comparison and Improvement of the Existing Cone Penetration Test Results: Shear Wave Velocity Correlations for Hungarian Soils

Authors: Ákos Wolf, Richard P. Ray

Abstract:

Due to the introduction of Eurocode 8, structural design for seismic and dynamic effects has become more significant in Hungary. This has emphasized the need for more effort to describe the behavior of structures under these conditions. Soil conditions have a significant effect on the response of structures by modifying the stiffness and damping of the soil-structure system and by modifying the seismic action as it reaches the ground surface. Shear modulus (G) and shear wave velocity (vs), which are often measured in the field, are the fundamental dynamic soil properties for foundation vibration problems, liquefaction potential, and earthquake site response analysis. There are several laboratory and in-situ measurement techniques for evaluating dynamic soil properties, but unfortunately, they are often too expensive for general design practice. However, a significant number of correlations have been proposed to determine shear wave velocity or shear modulus from Cone Penetration Tests (CPT), which are used more and more in geotechnical design practice in Hungary. This allows the designer to analyze and compare CPT and seismic test results in order to select the best correlation equations for Hungarian soils and to improve the recommendations for Hungarian geologic conditions. Based on a literature review, as well as research experience in Hungary, the influence of various parameters on the accuracy of results will be shown. This study can serve as a basis for selecting and modifying correlation equations for Hungarian soils. Test data are taken from seven locations in Hungary with similar geologic conditions. The shear wave velocity values were measured by seismic CPT. Several factors are analyzed, including soil type, behavior index, measurement depth, and geologic age, for their effect on the accuracy of predictions. The final results show an improved prediction method for Hungarian soils.
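
As an illustration of how such correlation equations can be calibrated to local data (the specific functional forms evaluated in the study are not reproduced here), the Python sketch below fits a generic power-law correlation vs = a · qc^b · z^c to paired seismic CPT measurements by least squares in log space. The data and the resulting coefficients are synthetic placeholders.

import numpy as np

# Synthetic paired observations: cone resistance qc (MPa), depth z (m),
# and measured shear wave velocity vs (m/s) from seismic CPT soundings.
rng = np.random.default_rng(2)
qc = rng.uniform(1.0, 30.0, size=150)
z = rng.uniform(2.0, 25.0, size=150)
vs = 80.0 * qc**0.25 * z**0.2 * rng.lognormal(0.0, 0.05, size=150)

# Fit log(vs) = log(a) + b*log(qc) + c*log(z) by linear least squares.
A = np.column_stack([np.ones_like(qc), np.log(qc), np.log(z)])
coef, *_ = np.linalg.lstsq(A, np.log(vs), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"vs = {a:.1f} * qc^{b:.3f} * z^{c:.3f}")

# Goodness of fit on the calibration data.
vs_pred = a * qc**b * z**c
print("RMS log error:", np.sqrt(np.mean((np.log(vs_pred) - np.log(vs))**2)))

The same fit can be repeated per soil type, behavior index range, or geologic age group to test which grouping yields the most accurate site-specific correlation.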

Keywords: CPT correlation, dynamic soil properties, seismic CPT, shear wave velocity

Procedia PDF Downloads 244
3279 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change

Authors: Ermias A. Tegegn, Million Meshesha

Abstract:

Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries, and Ethiopia is one of the most seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence; with the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study focused on a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when patients develop toxicity to their current ART drug combination. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values, and data transformation were used for preprocessing the data. The WEKA 3.7.9 data mining tool, classification algorithms, and domain expertise were utilized as means to address the research problem. Using four different classification algorithms (i.e., the J48 classifier, PART rule induction, Naïve Bayes, and a neural network) and adjusting their parameters, thirty-two models were built on the preprocessed University of Gondar ART program dataset. The performance of the models was evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients with drug regimen substitution is a pruned J48 decision tree with a classification accuracy of 98.01%. This study extracts interesting attributes such as Ever taking Cotrim, Ever taking TbRx, CD4 count, Age, Weight, and Gender to predict the status of drug regimen substitution. The outcome of this study can be used as an assistive tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are suggested to arrive at an applicable system in the area of the study.
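
For orientation, the sketch below trains a pruned decision tree on hypothetical records with the attributes named above. It uses scikit-learn's CART implementation as a stand-in for WEKA's J48 (a C4.5 implementation), so the splitting and pruning behavior are approximations, and the data and toy label are synthetic.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Synthetic patient records: ever_cotrim, ever_tbrx, cd4, age, weight, gender.
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),       # Ever taking Cotrim (0/1)
    rng.integers(0, 2, n),       # Ever taking TbRx (0/1)
    rng.uniform(50, 1200, n),    # CD4 count
    rng.uniform(18, 70, n),      # Age
    rng.uniform(35, 95, n),      # Weight (kg)
    rng.integers(0, 2, n),       # Gender (0/1)
])
y = (X[:, 2] > 200).astype(int)  # toy proxy for living status at regimen change

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# max_depth and min_samples_leaf play the role of J48's pruning parameters here.
tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=10).fit(X_tr, y_tr)
print(classification_report(y_te, tree.predict(X_te)))  # accuracy, precision, recall, F-measure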

Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model

Procedia PDF Downloads 140
3278 Lake Water Surface Variations and Their Influencing Factors in the Tibetan Plateau in Recent 10 Years

Authors: Shanlong Lu, Jiming Jin, Xiaochun Wang

Abstract:

The Tibetan Plateau has the largest number of inland lakes at the highest elevation on the planet. These massive lakes are mostly in a natural state and are little affected by human activities; their shrinking or expansion can truly reflect regional climate and environmental changes, making them sensitive indicators of global climate change. However, due to the sparsely populated nature of the plateau and its harsh natural conditions, it is difficult to effectively obtain lake change data, which has limited understanding of the temporal and spatial processes of lake water changes and their influencing factors. Using MODIS (Moderate Resolution Imaging Spectroradiometer) MOD09Q1 surface reflectance images as basic data, this study produced an 8-day lake water surface dataset of the Tibetan Plateau from 2000 to 2012 at 250 m spatial resolution, with a lake water surface extraction method that combines buffer analysis of lake boundaries with lake-by-lake determination of segmentation thresholds. Based on this dataset, lake water surface variations and their influencing factors were analyzed, using four typical natural geographical zones (Eastern Qinghai and Qilian, Southern Qinghai, Qiangtang, and Southern Tibet) and the watersheds of the ten largest lakes (Qinghai, Siling Co, Namco, Zhari NamCo, Tangra Yumco, Ngoring, UlanUla, Yamdrok Tso, Har, and Gyaring) as the analysis units. The accuracy analysis indicates that, compared with water surface data for 134 sample lakes extracted from 30 m Landsat TM (Thematic Mapper) images, the average overall accuracy of the lake water surface dataset is 91.81%, with average commission and omission errors of 3.26% and 5.38%; the results also show a strong linear correlation (R2 = 0.9991) with the global MODIS water mask dataset, with an overall accuracy of 86.30%; and the lake area difference between the Second National Lake Survey and this study is only 4.74%. This study therefore provides a reliable dataset for lake change research on the plateau over the recent decade. The trend and influencing factor analyses indicate that the total water surface area of lakes on the plateau increased overall, but only lakes with areas larger than 10 km2 had statistically significant increases; furthermore, lakes with areas larger than 100 km2 experienced an abrupt change in 2005. In addition, the annual average precipitation of Southern Tibet and Southern Qinghai experienced significant increasing and decreasing trends, with corresponding abrupt changes in 2004 and 2006, respectively. The annual average temperature of Southern Tibet and Qiangtang showed a significant increasing trend with an abrupt change in 2004. The major driver of lake water surface variation in Eastern Qinghai and Qilian, Southern Qinghai, and Southern Tibet is the change in precipitation, while for Qiangtang it is temperature variation.
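
A minimal sketch of the per-lake extraction step, under the assumption that open water is separated from land by thresholding the MOD09Q1 near-infrared band within a buffered lake boundary (with an Otsu threshold standing in for the paper's lake-by-lake threshold determination). The arrays below are synthetic.

import numpy as np
from skimage.filters import threshold_otsu

def extract_lake_surface(nir, buffer_mask):
    """Classify water inside a buffered lake boundary.
    Water is dark in the near infrared, so pixels below a per-lake
    threshold (Otsu, as an assumption) are labeled as water."""
    t = threshold_otsu(nir[buffer_mask])
    return (nir < t) & buffer_mask

# Synthetic 250 m NIR reflectance tile with a dark "lake" in the center.
rng = np.random.default_rng(4)
nir = rng.uniform(0.2, 0.4, size=(200, 200))
yy, xx = np.mgrid[:200, :200]
lake = (yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2
nir[lake] = rng.uniform(0.01, 0.05, size=lake.sum())
buffer_mask = (yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2

water = extract_lake_surface(nir, buffer_mask)
print("lake area (km2):", water.sum() * 0.0625)  # one 250 m pixel = 0.0625 km2

Computing a per-lake threshold only inside the buffered boundary is what allows each lake to receive its own segmentation threshold, rather than a single global cutoff for the whole plateau.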

Keywords: lake water surface variation, MODIS MOD09Q1, remote sensing, Tibetan Plateau

Procedia PDF Downloads 229
3277 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application

Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior

Abstract:

Through speech, which privileges the functional and interactive nature of text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, and convincing. These conditions allow interaction between humans to be brought closer to human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of using neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and the comparison of models, such as recurrent neural networks and deep neural networks, are proposed in order to classify emotions from speech signals and verify the quality of recognition. The aim is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-Frequency Cepstral Coefficients (MFCC), as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. To carry out the training, validation, and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers from 14 different nationalities speaking English. The data in the chosen database are videos that were converted into audio for use in the neural networks. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%. The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for classification with the deep neural network; in only one case is greater accuracy observed with the recurrent neural network, which occurs when various features are used with a batch size of 73 and 100 training epochs.
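
As a rough sketch of the MFCC-plus-DNN setup described above (the abstract does not give the architectures, so the layer sizes, pooling strategy, and data here are illustrative assumptions), the example below extracts MFCC features with librosa and trains a small feed-forward classifier with Keras.

import numpy as np
import librosa
from tensorflow import keras

def mfcc_features(y, sr, n_mfcc=13):
    """Mean-pooled MFCCs as a fixed-length utterance descriptor."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Synthetic "utterances" standing in for the eNTERFACE'05 audio tracks.
rng = np.random.default_rng(5)
sr = 16000
X = np.stack([mfcc_features(rng.normal(size=sr * 2).astype(np.float32), sr)
              for _ in range(60)])
y = rng.integers(0, 6, size=60)      # six emotion classes (assumed label set)

# Small feed-forward DNN; layer sizes are illustrative assumptions.
model = keras.Sequential([
    keras.layers.Input(shape=(13,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, batch_size=16, epochs=5, verbose=0)

Delta-MFCC, spectral contrast, and Mel spectrogram features can be concatenated to the descriptor in the same way to reproduce the multi-feature experiments, and a recurrent layer can replace the dense stack for the LSTM comparison.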

Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks

Procedia PDF Downloads 160