Search results for: diagnostic accuracy
3997 Effects of Listening to Pleasant Thai Classical Music on Increasing Working Memory in Elderly: An Electroencephalogram Study
Authors: Anchana Julsiri, Seree Chadcham
Abstract:
The present study determined the effects of listening to pleasant Thai classical music on increasing working memory in the elderly. Thai classical music without lyrics that made participants feel fun and aroused was used in the experiment for 3.19-5.40 minutes. The accuracy scores of the Counting Span Task (CST), upper alpha ERD%, and theta ERS% were used to assess participants' working memory both before and after listening to the music. The results showed that the accuracy scores of the CST and upper alpha ERD% in the frontal area were significantly higher after listening to the Thai classical music than before (p < .05). Theta ERS% in the fronto-parietal network was significantly lower after listening than before (p < .05).
Keywords: brain wave, elderly, pleasant Thai classical music, working memory
Procedia PDF Downloads 457
3996 The Sequential Estimation of the Seismoacoustic Source Energy in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev, Dmitry V. Egorov
Abstract:
A practical and efficient approach is suggested for estimating the energy of seismoacoustic sources in C-OTDR monitoring systems. This approach represents a sequential plan for confidence estimation of both the seismoacoustic source energy and the absorption coefficient of the soil. The sequential plan delivers non-asymptotic guaranteed accuracy of the obtained estimates in the form of non-asymptotic confidence regions with prescribed sizes. These confidence regions are valid for a finite sample size when the distributions of the observations are unknown. Thus, the suggested estimates are non-asymptotic and nonparametric, and they guarantee the prescribed estimation accuracy in the form of a prior prescribed size of the confidence regions and a prescribed confidence coefficient value.
Keywords: nonparametric estimation, sequential confidence estimation, multichannel monitoring systems, C-OTDR system, non-linear regression
Procedia PDF Downloads 354
3995 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest
Authors: Bharatendra Rai
Abstract:
Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. Too many features in machine learning are known not only to slow algorithms down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (r-square) and root mean square error (rmse).
Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error
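As a hedged illustration of the wrapper workflow described above, the sketch below pairs the open-source BorutaPy implementation with scikit-learn's random forest; the file name, target column, and 70/30 partitioning ratio are assumptions for illustration, not details taken from the paper.

```python
import pandas as pd
from boruta import BorutaPy
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical housing dataset: 79 features plus a SalePrice target.
df = pd.read_csv("housing.csv")
X = pd.get_dummies(df.drop(columns=["SalePrice"]))  # encode qualitative features
y = df["SalePrice"].values

# Wrapper feature selection: Boruta built around a random forest.
rf = RandomForestRegressor(n_jobs=-1, max_depth=5)
boruta = BorutaPy(rf, n_estimators="auto", random_state=42)
boruta.fit(X.values, y)
X_sel = X.loc[:, boruta.support_]  # keep confirmed features only

# One of several possible partitioning ratios (here 70/30).
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=42)
model = RandomForestRegressor(n_jobs=-1).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("r-square:", r2_score(y_te, pred))
print("rmse:", mean_squared_error(y_te, pred) ** 0.5)
```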
Procedia PDF Downloads 321
3994 Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and-Excitation Neural Network
Authors: Li Kewen, Su Zhaoxin, Wang Xingmou, Zhu Jian Bing
Abstract:
Conventional reservoir prediction methods are not sufficient to explore the implicit relations between seismic attributes, and thus data utilization is low. In order to improve the predictive classification accuracy of reservoir lithology, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained using seismic attribute data and lithology data of the Shengli oilfield, and the nonlinear mapping relationship between seismic attributes and lithology markers is established. The experimental results show that this method can significantly improve the classification of reservoir lithology, with a classification accuracy close to 70%. This study can effectively predict the lithology of undrilled areas and provide support for exploration and development.
Keywords: convolutional neural network, lithology, prediction of reservoir, seismic attributes
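To make the SENet building block concrete, here is a minimal squeeze-and-excitation module sketched in PyTorch; the channel count and reduction ratio are illustrative assumptions, and the paper's full ResNet-SENet architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global average pooling
        self.fc = nn.Sequential(              # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # scale each feature map by its learned weight

# E.g., applied to a batch of seismic-attribute feature maps:
feats = torch.randn(8, 64, 16, 16)
print(SEBlock(64)(feats).shape)  # torch.Size([8, 64, 16, 16])
```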
Procedia PDF Downloads 176
3993 A Chinese Nested Named Entity Recognition Model Based on Lexical Features
Abstract:
In the field of named entity recognition, most research has been conducted on simple entities. However, nested named entities, which still contain entities within entities, have been difficult to identify accurately due to their boundary ambiguity. In this paper, a hierarchical recognition model is constructed based on the grammatical structure and semantic features of Chinese text, with boundary calculation based on lexical features. The analysis is carried out at different levels in terms of granularity, semantics, and lexicality, respectively, avoiding repetitive work to reduce computational effort and using the semantic features of words to calculate entity boundaries to improve recognition accuracy. The results of experiments carried out on web-based microblogging data show that the model achieves an accuracy of 86.33% and an F1 value of 89.27% in recognizing nested named entities, making up for the shortcomings of some previous recognition models and improving the efficiency of nested named entity recognition.
Keywords: coarse-grained, nested named entity, Chinese natural language processing, word embedding, t-SNE dimensionality reduction algorithm
Procedia PDF Downloads 127
3992 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space" where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
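A toy matching pursuit decomposition over a learned dictionary, as a hedged illustration of the transform described above; the dictionary here is random rather than autoencoder-learned, and the signal length and sparsity level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned dictionary: unit-norm atoms as columns.
n_samples, n_atoms = 256, 512
D = rng.standard_normal((n_samples, n_atoms))
D /= np.linalg.norm(D, axis=0)

def matching_pursuit(x, D, n_nonzero=20):
    """Greedy sparse decomposition: returns a weight vector w with x ~ D @ w."""
    residual = x.copy()
    w = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = D.T @ residual          # correlation with every atom
        k = np.argmax(np.abs(corr))    # best-matching atom
        w[k] += corr[k]
        residual -= corr[k] * D[:, k]  # remove its contribution
    return w

x = rng.standard_normal(n_samples)     # stand-in for a speech envelope frame
w = matching_pursuit(x, D)
print("nonzero weights:", np.count_nonzero(w))  # sparse "weight space" vector
```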
Procedia PDF Downloads 288
3991 Relationship between Hepatokines and Insulin Resistance in Childhood Obesity
Authors: Mustafa Metin Donma, Orkide Donma
Abstract:
Childhood obesity is an important clinical problem because it may lead to chronic diseases during the adulthood period of the individual. Obesity is a metabolic disease associated with low-grade inflammation. The liver lies at the center of metabolic pathways. Adropin, fibroblast growth factor-21 (FGF-21), and fetuin-A are hepatokines. Due to the immense participation of the liver in glucose metabolism, these liver-derived factors may be associated with insulin resistance (IR), a phenomenon discussed within the scope of obesity problems. The aim of this study is to determine the concentrations of adropin, FGF-21, and fetuin-A in childhood obesity, to point out possible differences between the obesity groups, and to investigate possible associations among these three hepatokines in obese and morbidly obese children. A total of one hundred and thirty-two children were included in the study. Two obese groups were constituted, matched in terms of the mean ± SD values of their ages. Body mass index values of the obese and morbidly obese groups were 25.0 ± 3.5 kg/m² and 29.8 ± 5.7 kg/m², respectively. Anthropometric measurements including waist circumference, hip circumference, head circumference, and neck circumference were recorded. Informed consent forms were obtained from the parents of the participants. The ethics committee of the institution approved the study protocol. Blood samples were obtained after overnight fasting. Routine biochemical tests, including glucose- and lipid-related parameters, were performed. Concentrations of the hepatokines (adropin, FGF-21, fetuin-A) were determined by enzyme-linked immunosorbent assay. Insulin resistance indices such as the homeostasis model assessment for IR (HOMA-IR), the alanine transaminase-to-aspartate transaminase ratio (ALT/AST), the diagnostic obesity notation model assessment laboratory index, and the diagnostic obesity notation model assessment metabolic syndrome index, as well as obesity indices such as the diagnostic obesity notation model assessment-II index and the fat mass index, were calculated using previously derived formulas. Statistical evaluation of the study data was performed with SPSS for Windows. Differences were accepted as significant when p < 0.05. Statistically significant differences were found for the insulin, triglyceride, and high-density lipoprotein cholesterol levels of the groups. A significant increase was observed for FGF-21 concentrations in the morbidly obese group. Higher adropin and fetuin-A concentrations were observed in the same group in comparison with the values detected in the obese group (p > 0.05). There was no statistically significant difference between the ALT/AST values of the groups. In all of the remaining IR and obesity indices, significantly increased values were calculated for morbidly obese children. Significant correlations were detected between HOMA-IR and each of the hepatokines; the strongest was the association with fetuin-A (r = 0.373, p = 0.001). In conclusion, the increased levels observed for adropin, FGF-21, and fetuin-A show that these hepatokines possess increasing potential going from the obese to the morbidly obese state. Of the correlations found with the IR index, the most affected hepatokine was fetuin-A, a parameter possibly usable as an indicator of the advanced obesity stage.
Keywords: adropin, fetuin A, fibroblast growth factor-21, insulin resistance, pediatric obesity
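HOMA-IR, one of the insulin resistance indices named above, has a standard published formula; a small helper is sketched below under the common convention (glucose in mg/dL, insulin in µU/mL), though the paper does not restate the formula it used.

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    Standard formulation: (glucose [mg/dL] x insulin [uU/mL]) / 405,
    equivalent to (glucose [mmol/L] x insulin [uU/mL]) / 22.5.
    """
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Illustrative values, not data from the study:
print(round(homa_ir(95.0, 12.0), 2))  # ~2.81
```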
Procedia PDF Downloads 175
3990 Neuroanatomical Specificity in Reporting & Diagnosing Neurolinguistic Disorders: A Functional & Ethical Primer
Authors: Ruairi J. McMillan
Abstract:
Introduction: This critical analysis aims to ascertain how well neuroanatomical aetiologies are communicated within 20 case reports of aphasia. Neuroanatomical visualisations based on dissected brain specimens were produced and combined with white matter tract and vascular taxonomies of function in order to address the most consistently underreported features found within the aphasic case study reports. Together, these approaches are intended to integrate aphasiological knowledge from the past 20 years with aphasiological diagnostics, and to act as prototypal resources for both researchers and clinical professionals. The medico-legal precedent for aphasia diagnostics under Canadian, US and UK case law and the neuroimaging/neurological diagnostics relative to the functional capacity of aphasic patients are discussed in relation to the major findings of the literature analysis, neuroimaging protocols in clinical use today, and the neuroanatomical aetiologies of different aphasias. Basic Methodology: Literature searches of relevant scientific databases (e.g., OVID Medline) were carried out using search terms such as aphasia case study (year) and stroke-induced aphasia case study. A series of 7 diagnostic reporting criteria were formulated, and the resulting case studies were scored out of 7 alongside clinical stroke criteria. In order to focus on the diagnostic assessment of the patient's condition, only the case report proper (not the discussion) was used to quantify results. Statistical testing established whether specific reporting criteria were associated with higher overall scores and potentially inferable increases in quality of reporting. Whether criteria scores were associated with an unclear/adjusted diagnosis was also tested, as was the probability of a given criterion deviating from an expected estimate. Major Findings: The quantitative analysis of neuroanatomically driven diagnostics in case studies of aphasia revealed particularly low scores in the connection of neuroanatomical functions to aphasiological assessment (10%) and in the inclusion of white matter tracts within neuroimaging or assessment diagnostics (30%). Case studies which included clinical mention of white matter tracts within the report itself were distributed among higher-scoring cases, as were case studies which (as clinically indicated) related the affected vascular region to the brain parenchyma of the language network. Concluding Statement: These findings indicate that certain neuroanatomical functions are integrated less often within the patient report than others, despite a precedent for well-integrated neuroanatomical aphasiology also being found among the case studies sampled, and despite these functions being clinically essential in diagnostic neuroimaging and aphasiological assessment. Therefore, ultimately, the integration and specificity of aetiological neuroanatomy may contribute positively to the capacity and autonomy of aphasic patients as well as their clinicians. The integration of a full aetiological neuroanatomy within the reporting of aphasias may improve patient outcomes and sustain autonomy in the event of medico-ethical investigation.
Keywords: aphasia, language network, functional neuroanatomy, aphasiological diagnostics, medico-legal ethics
Procedia PDF Downloads 65
3989 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel
Abstract:
Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization. It is projected to become a metropolitan city in the near future, resulting in various construction activities. Soil testing is necessary before construction can commence, requiring construction companies and contractors to conduct soil testing periodically. The focus of this study is on the process of creating a spatial database that is digitally formatted and integrated with geotechnical data and a Geographic Information System (GIS). Building a comprehensive geotechnical (geo-)database involves three steps: collecting borehole data from reputable sources, verifying the accuracy and redundancy of the data, and standardizing and organizing the geotechnical information for integration into the database. Once the database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic-to-Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (t/m²). Various interpolation techniques were cross-validated to ensure information accuracy. This GIS map enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths and various depths. This study highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to achieve through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers.
Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-value, bearing capacity
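The Topographic-to-Raster step described above is an ArcGIS tool; as a hedged stand-in, the sketch below interpolates scattered borehole SPT N-values onto a regular grid with SciPy, with entirely illustrative coordinates and values.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical borehole locations (x, y) and sampled SPT N-values.
points = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                   [100.0, 100.0], [50.0, 50.0]])
spt_n = np.array([12.0, 18.0, 9.0, 22.0, 15.0])

# Regular grid covering the study area.
gx, gy = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))

# Assign estimated values to all grid locations from the sampled data.
grid_n = griddata(points, spt_n, (gx, gy), method="cubic")
print(grid_n.shape, np.nanmin(grid_n), np.nanmax(grid_n))
```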
Procedia PDF Downloads 73
3988 Metrology-Inspired Methods to Assess the Biases of Artificial Intelligence Systems
Authors: Belkacem Laimouche
Abstract:
With the field of artificial intelligence (AI) experiencing exponential growth, fueled by technological advancements that pave the way for increasingly innovative and promising applications, there is an escalating need to develop rigorous methods for assessing the performance of AI systems in pursuit of transparency and equity. This article proposes a metrology-inspired statistical framework for evaluating bias and explainability in AI systems. Drawing from the principles of metrology, we propose a pioneering approach, illustrated with a concrete example, to evaluate the accuracy and precision of AI models, as well as to quantify the sources of measurement uncertainty that can lead to bias in their predictions. Furthermore, we explore a statistical approach for evaluating the explainability of AI systems based on their ability to provide interpretable and transparent explanations of their predictions.
Keywords: artificial intelligence, metrology, measurement uncertainty, prediction error, bias, machine learning algorithms, probabilistic models, interlaboratory comparison, data analysis, data reliability, measurement of bias impact on predictions, improvement of model accuracy and reliability
Procedia PDF Downloads 103
3987 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor
Authors: Pooja Seth, Shruti Aggarwal
Abstract:
Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications. In these applications, monitoring of the absorbed dose is essential to prevent patients from undue exposure and to evaluate the risks that may arise due to exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration and monitoring of dose due to their favourable dosimetric characteristics, such as tissue equivalence, high sensitivity, energy independence and dose linearity. As the TL efficiency of a phosphor strongly depends on the preparation route, it is interesting to investigate the TL properties of a LiF-based phosphor in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na) and silicon (Si) in nanocrystalline form has been prepared using a chemical co-precipitation method. Cube-shaped LiF nanostructures are formed. TL dosimetry properties have been investigated by exposing the phosphor to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, as compared to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure with maximum TL intensity at an annealing temperature of 573 K and a linear dose response from 0.1 to 1000 Gy is observed, which is advantageous for radiotherapy application. Good reusability, low fading (5% over a month) and negligible residual signal (0.0019%) are observed. As per photoluminescence measurements, a wide emission band at 360 nm - 550 nm is observed in undoped LiF. However, an intense peak at 488 nm is observed in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. The nanocrystalline LiF: Mg, Cu, Na, Si phosphor prepared by the co-precipitation method showed a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability and low fading. The LiF: Mg, Cu, Na, Si phosphor in nanocrystalline form has tremendous potential in diagnostic radiology, radiotherapy and high-energy radiation applications.
Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method
Procedia PDF Downloads 403
3986 Tree Species Classification Using Effective Features of Polarimetric SAR and Hyperspectral Images
Authors: Milad Vahidi, Mahmod R. Sahebi, Mehrnoosh Omati, Reza Mohammadi
Abstract:
Forest management organizations need information to perform their work effectively, and remote sensing is an effective method for acquiring information about the Earth. Two remote sensing image datasets were used to classify forested regions. First, all extractable features from the hyperspectral and PolSAR images were extracted. The optical features were spectral indices related to chemical and water content, structural indices, effective bands, and absorption features; the PolSAR features were the original data, target decomposition components, and SAR discriminator features. Second, particle swarm optimization (PSO) and the genetic algorithm (GA) were applied to select optimal features. A support vector machine (SVM) classifier was then used to classify the image. The results showed that the combination of PSO and SVM had higher overall accuracy than the other cases, providing an overall accuracy of about 90.56%. The effective features were the spectral indices, the bands in the shortwave infrared (SWIR) and visible ranges, and certain PolSAR features.
Keywords: hyperspectral, PolSAR, feature selection, SVM
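A hedged sketch of PSO-driven feature selection wrapped around an SVM, in the spirit of the combination reported above; the synthetic data, swarm settings, and fitness definition are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in feature matrix: in the paper this would hold hyperspectral
# indices and PolSAR decomposition features per sample.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

# Binary PSO: positions act as per-feature keep probabilities.
n_particles, n_iter, n_feat = 12, 15, X.shape[1]
pos = rng.random((n_particles, n_feat))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_feat))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fits = np.array([fitness(p > 0.5) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

selected = gbest > 0.5
print("selected features:", np.flatnonzero(selected))
print("cv accuracy:", fitness(selected))
```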
Procedia PDF Downloads 416
3985 Variable Refrigerant Flow (VRF) Zonal Load Prediction Using a Transfer Learning-Based Framework
Authors: Junyu Chen, Peng Xu
Abstract:
In the context of global efforts to enhance building energy efficiency, accurate thermal load forecasting is crucial for both device sizing and predictive control. Variable Refrigerant Flow (VRF) systems are widely used in buildings around the world, yet VRF zonal load prediction has received limited attention. Because building-level prediction methods overlook differences between VRF zones, zone-level load forecasting could significantly enhance accuracy. Given that modern VRF systems generate high-quality data, this paper introduces transfer learning to leverage these data and further improve prediction performance. The framework also addresses the challenge of predicting loads for building zones with no historical data, offering greater accuracy and usability compared to pure white-box models. The study first establishes an initial variable set for VRF zonal building loads and generates a foundational white-box database using EnergyPlus. Key variables for VRF zonal loads are identified using methods including SRRC, PRCC, and Random Forest. XGBoost and LSTM are employed to generate pre-trained black-box models based on the white-box database. Finally, real-world data is incorporated into the pre-trained model using transfer learning to enhance its performance in operational buildings. In this paper, zone-level load prediction was integrated with transfer learning, and a framework was proposed to improve the accuracy and applicability of VRF zonal load prediction.
Keywords: zonal load prediction, variable refrigerant flow (VRF) system, transfer learning, EnergyPlus
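A hedged sketch of the pre-train-then-transfer step described above: an LSTM is first fit to simulated (EnergyPlus-style) sequences, its encoder is frozen, and the head is fine-tuned on a small set of real operational data. Shapes, layer sizes, and training settings are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
lookback, n_feat = 24, 6  # e.g., 24 hourly steps of weather/schedule inputs

# Stand-ins for the white-box (simulated) and real operational datasets.
X_sim, y_sim = rng.random((2000, lookback, n_feat)), rng.random(2000)
X_real, y_real = rng.random((200, lookback, n_feat)), rng.random(200)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(lookback, n_feat)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),  # zonal thermal load
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_sim, y_sim, epochs=5, verbose=0)        # pre-train on white-box data

model.layers[0].trainable = False                   # freeze the LSTM encoder
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
model.fit(X_real, y_real, epochs=10, verbose=0)     # transfer to the real zone
print(model.evaluate(X_real, y_real, verbose=0))
```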
Procedia PDF Downloads 26
3984 Facial Recognition of University Entrance Exam Candidates Using FaceMatch Software in Iran
Authors: Mahshid Arabi
Abstract:
In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to error. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient way to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates in order to prevent fraud and ensure the authenticity of individuals' identities; the research also investigates the advantages and challenges of using this technology in Iran's educational systems. The research was conducted using an experimental method and random sampling: 1000 university entrance exam candidates in Iran were selected as samples, and their facial images were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The results indicated that FaceMatch software could identify candidates with a precision of 98.5%. The software's error rate was less than 1.5%, demonstrating its high efficiency in facial recognition, and the average processing time for each candidate's image was less than 2 seconds. Statistical evaluation of the results using precise statistical tests, including analysis of variance (ANOVA) and the t-test, showed that the observed differences were significant and that the software's accuracy in identity verification is high. The findings suggest that FaceMatch software can be effectively used as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. Given the promising results of this research, it is recommended that this technology be more widely implemented and utilized in the country's educational systems.
Keywords: facial recognition, FaceMatch software, Iran, university entrance exam
Procedia PDF Downloads 44
3983 The Effect of Particle Temperature on the Thickness of Thermally Sprayed Coatings
Authors: M. Jalali Azizpour, H. Mohammadi Majd
Abstract:
In this paper, the effect of WC-12Co particle temperature in the HVOF thermal spraying process on coating thickness has been studied. The statistical results show that the spray distance and the oxygen-to-fuel ratio are the most influential factors on particle characteristics and the thickness of HVOF thermal spray coatings. A SprayWatch diagnostic system, scanning electron microscopy (SEM), X-ray diffraction and a thickness measuring system were used for this purpose.
Keywords: HVOF, temperature, thickness, velocity, WC-12Co
Procedia PDF Downloads 400
3982 HPPDFIM-HD: Transaction Distortion and Connected Perturbation Approach for Hierarchical Privacy Preserving Distributed Frequent Itemset Mining over Horizontally-Partitioned Dataset
Authors: Fuad Ali Mohammed Al-Yarimi
Abstract:
Many algorithms have been proposed to provide privacy preservation in data mining. These protocols are based on two main approaches: the perturbation approach and the cryptographic approach. The first is based on perturbation of the valuable information, while the second uses cryptographic techniques. The perturbation approach is much more efficient but has reduced accuracy, while the cryptographic approach can provide solutions with perfect accuracy. However, the cryptographic approach is much slower and requires considerable computation and communication overhead. In this paper, a new scalable protocol is proposed which combines the advantages of the perturbation and distortion approaches with those of the cryptographic approach to perform privacy-preserving distributed frequent itemset mining on horizontally distributed data. Both the privacy and performance characteristics of the proposed protocol are studied empirically.
Keywords: anonymity data, data mining, distributed frequent itemset mining, Gaussian perturbation, perturbation approach, privacy preserving data mining
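As a hedged illustration of the perturbation side of the protocol (the paper's exact distortion scheme is not given in the abstract), the sketch below adds calibrated Gaussian noise to per-site itemset counts before they are shared, so that aggregate frequencies remain approximately correct.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-site support counts for 5 candidate itemsets
# over a horizontally partitioned dataset (one row per site).
true_counts = np.array([[120, 45, 80, 10, 60],
                        [ 95, 50, 70, 15, 55],
                        [110, 40, 90, 12, 65]], dtype=float)

sigma = 5.0  # noise scale: trades privacy for accuracy
noisy_counts = true_counts + rng.normal(0.0, sigma, true_counts.shape)

# The miner aggregates noisy counts; errors partially cancel across sites.
print("true global supports: ", true_counts.sum(axis=0))
print("noisy global supports:", noisy_counts.sum(axis=0).round(1))
```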
Procedia PDF Downloads 503
3981 Recent Developments in the Application of Deep Learning to Stock Market Prediction
Authors: Shraddha Jain Sharma, Ratnalata Gupta
Abstract:
Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult, since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy compared with traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area with its prominent features, and the significant problems or issues that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.
Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume
Procedia PDF Downloads 88
3980 Ultra-High Precision Diamond Turning of Infrared Lenses
Authors: Khaled Abou-El-Hossein
Abstract:
The presentation will address the features of two IR convex lenses that have been manufactured using an ultra-high precision machining centre based on single-point diamond turning. The lenses are made from silicon and germanium with a radius of curvature of 500 mm. Because of the brittle nature of silicon and germanium, machining parameters were selected in such a way that a ductile regime was achieved. The cutting speed was 800 rpm, while the feed rate and depth of cut were 20 mm/min and 20 µm, respectively. Although both materials comprise a mono-crystalline microstructure and are quite similar in terms of optical properties, machining of silicon was accompanied by more difficulties in terms of form accuracy compared to germanium machining. The P-V error of the silicon profile was 0.222 µm, while it was only 0.055 µm for the germanium lens. This could be attributed to the accelerated wear that takes place on the tool edge when turning mono-crystalline silicon. Currently, we are using other ranges of the machining parameters in order to determine their optimal range, which could yield satisfactory performance in terms of form accuracy when fabricating silicon lenses.
Keywords: diamond turning, optical surfaces, precision machining, surface roughness
Procedia PDF Downloads 315
3979 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand
Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo
Abstract:
An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project's likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and the FTS of construction projects, with a view to identifying the risk factors responsible for the observed variance. Data were sourced through interviews, and risk factors were identified using thematic analysis. Access was obtained to project files from the records of study participants (consultant quantity surveyors), and document analysis was employed to complement the responses from the interviews. The findings revealed discrepancies between ECPs and FTS in the region of -14% to +16%. It is opined in this study that the identified risk factors were responsible for the variability observed. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by quantity surveyors. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and the successful delivery of construction projects. The findings contribute significantly to the study area by providing quantitative confirmation of the theoretical conclusions generated in the literature from around the world, and therefore add to and consolidate existing knowledge.
Keywords: accuracy, design-stage, elemental cost plan, final tender sum
Procedia PDF Downloads 268
3978 The Development of the Website Learning the Local Wisdom in Phra Nakhon Si Ayutthaya Province
Authors: Bunthida Chunngam, Thanyanan Worasesthaphong
Abstract:
This research had the objectives of developing a website for learning the local wisdom in Phra Nakhon Si Ayutthaya province and studying the satisfaction of system users. The research sample was a multistage sample of 100 questionnaires; the data were analyzed to calculate the reliability value using Cronbach's alpha coefficient method (α = 0.82). The system had three functions: system use, system feature evaluation, and system accuracy evaluation. The statistics used for data analysis were descriptive statistics describing sample features: frequency, percentage, mean, and standard deviation. The analysis found that the system's usability had a good level of satisfaction (mean 4.44), the system feature function had a good level of satisfaction (mean 4.11), and system accuracy had a good level of satisfaction (mean 3.74).
Keywords: website, learning, local wisdom, Phra Nakhon Si Ayutthaya province
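Cronbach's alpha, used above to establish questionnaire reliability, has a standard formula; the small NumPy sketch below applies it to made-up Likert-scale responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire-items matrix of scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 5-point Likert responses (10 respondents, 4 items).
rng = np.random.default_rng(1)
base = rng.integers(2, 6, size=(10, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(10, 4)), 1, 5)
print(round(cronbach_alpha(scores.astype(float)), 2))
```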
Procedia PDF Downloads 119
3977 Performance Comparison of ADTree and Naive Bayes Algorithms for Spam Filtering
Authors: Thanh Nguyen, Andrei Doncescu, Pierre Siegel
Abstract:
Classification is an important data mining technique and can be used for data filtering in artificial intelligence. The broad applicability of classification to all kinds of data leads to its use in nearly every field of modern life. Classification helps us group different items according to the feature items deemed interesting and useful. In this paper, we compare two classification methods, Naïve Bayes and ADTree, used to detect spam e-mail. This choice is motivated by the fact that the Naive Bayes algorithm is based on probability calculus, while the ADTree algorithm is based on decision trees. The parameter settings of the above classifiers maximize the true positive rate and minimize the false positive rate. The experimental results present classification accuracy and cost analysis in view of optimal classifier choice for spam detection. The number of attributes is examined to obtain a tradeoff between the number of attributes and classification accuracy.
Keywords: classification, data mining, spam filtering, naive bayes, decision tree
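A hedged sketch of such a comparison with scikit-learn: since ADTree is a Weka algorithm without a scikit-learn equivalent, a standard decision tree stands in for it here, and the tiny corpus is purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Toy corpus; a real study would use a labeled e-mail dataset.
emails = ["win free money now", "meeting agenda attached",
          "cheap pills online", "project review tomorrow",
          "free lottery winner", "lunch at noon?"] * 20
labels = [1, 0, 1, 0, 1, 0] * 20  # 1 = spam

X = CountVectorizer().fit_transform(emails)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0, stratify=labels)

for clf in (MultinomialNB(), DecisionTreeClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(type(clf).__name__, "accuracy:", round(acc, 3))
```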
Procedia PDF Downloads 408
3976 A Method of Effective Planning and Control of Industrial Facility Energy Consumption
Authors: Aleksandra Aleksandrovna Filimonova, Lev Sergeevich Kazarinov, Tatyana Aleksandrovna Barbasova
Abstract:
A method of effective planning and control of industrial facility energy consumption is offered. The method allows optimal arrangement of the management and full control of complex production facilities in accordance with the criteria of minimal technical and economic losses under forecasting control. The method is based on the optimal construction of power efficiency characteristics with the prescribed accuracy. The problem of optimal design of the forecasting model is solved on the basis of three criteria: maximizing the weighted sum of the points of forecasting with the prescribed accuracy; solving the problem by standard principles with incomplete statistical data on the basis of minimization of a regularized function; and minimizing the technical and economic losses due to forecasting errors.
Keywords: energy consumption, energy efficiency, energy management system, forecasting model, power efficiency characteristics
Procedia PDF Downloads 390
3975 Rapid Detection of the Etiology of Infection as Bacterial or Viral Using Infrared Spectroscopy of White Blood Cells
Authors: Uraib Sharaha, Guy Beck, Joseph Kapelushnik, Adam H. Agbaria, Itshak Lapidot, Shaul Mordechai, Ahmad Salman, Mahmoud Huleihel
Abstract:
Infectious diseases have placed a significant burden on public health and the economic stability of societies all over the world for several centuries. Reliable detection of the causative agent of an infection is not possible based on clinical features alone, since many infections have similar symptoms, including fever, sneezing, inflammation, vomiting, diarrhea, and fatigue. Moreover, physicians usually encounter difficulties in distinguishing between viral and bacterial infections based on symptoms. Therefore, there is an ongoing need for sensitive, specific, and rapid methods for identifying the etiology of an infection. This intricate issue perplexes doctors and researchers, since it has serious repercussions. In this study, we evaluated the potential of mid-infrared spectroscopy for rapid and reliable identification of bacterial and viral infections based on simple peripheral blood samples. Fourier transform infrared (FTIR) spectroscopy is considered a successful diagnostic method in the biological and medical fields. Many studies have confirmed the great potential of the combination of FTIR spectroscopy and machine learning as a powerful diagnostic tool in medicine, since it is a very sensitive method that can detect and monitor molecular and biochemical changes in biological samples. We believe that this method can play a major role in improving the health situation, raising the level of health in the community, and reducing the economic burdens in the health sector resulting from the indiscriminate use of antibiotics. We collected peripheral blood samples from 364 young patients, of whom 93 were controls, 126 had bacterial infections, and 145 had viral infections; all were younger than 18 years old and were diagnosed with a fever-producing illness. Our preliminary results showed that it is possible to determine the infectious agent with high success rates of 82% for sensitivity and 80% for specificity, based on the WBC data.
Keywords: infectious diseases, FTIR spectroscopy, viral infections, bacterial infections
Procedia PDF Downloads 138
3974 A Tagging Algorithm in Augmented Reality for Mobile Device Screens
Authors: Doga Erisik, Ahmet Karaman, Gulfem Alptekin, Ozlem Durmaz Incel
Abstract:
Augmented reality (AR) is a type of virtual reality that aims to duplicate the real world's environment on a computer's video feed. The mobile application built for this project (called SARAS) enables annotating real-world points of interest (POIs) that are located near the mobile user. In this paper, we introduce a robust and simple algorithm for placing labels in an augmented reality system. The system places labels of POIs, whose GPS coordinates are given, on the mobile device screen. The proposed algorithm is compared to an existing one in terms of energy consumption and accuracy. The results show that the proposed algorithm gives better results in energy consumption and accuracy while standing still, and acceptably accurate results when driving. The technique provides benefits to AR browsers with its open-access algorithm. Going forward, the algorithm will be improved to react more rapidly to position changes while driving.
Keywords: accurate tagging algorithm, augmented reality, localization, location-based AR
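A hedged sketch of the core geometric step such a tagging algorithm needs: compute the bearing from the user's GPS fix to a POI and map its offset from the device heading onto a horizontal screen coordinate. The field of view, screen width, and coordinates are illustrative assumptions, not values from SARAS.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def label_x(user, poi, heading_deg, fov_deg=60.0, screen_w=1080):
    """Horizontal pixel position of a POI label; None if outside the camera view."""
    rel = (bearing_deg(*user, *poi) - heading_deg + 540.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2:
        return None
    return int((rel / fov_deg + 0.5) * screen_w)

user = (41.0082, 28.9784)   # hypothetical user GPS fix
poi = (41.0090, 28.9800)    # hypothetical POI
print(label_x(user, poi, heading_deg=50.0))
```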
Procedia PDF Downloads 372
3973 Performance Comparison of Deep Convolutional Neural Networks for Binary Classification of Fine-Grained Leaf Images
Authors: Kamal KC, Zhendong Yin, Dasen Li, Zhilu Wu
Abstract:
Intra-plant disease classification based on leaf images is a challenging computer vision task due to similarities in the texture, color, and shape of leaves with only slight variations in leaf spots, and due to external environmental changes such as lighting and background noise. Deep convolutional neural networks (DCNN) have proven to be an effective tool for binary classification. In this paper, two methods for binary classification of diseased plant leaves using DCNNs are presented: models created from scratch, and transfer learning. Our main contribution is a thorough evaluation of 4 networks created from scratch and transfer learning of 5 pre-trained models. Training and testing of these models were performed on a plant leaf image dataset belonging to 16 distinct classes, containing a total of 22,265 images from 8 different plants, consisting of pairs of healthy and diseased leaves. We introduce a deep CNN model, Optimized MobileNet. This model, with depthwise separable CNN as a building block, attained an average test accuracy of 99.77%. We also present a fine-tuning method by introducing the concept of a convolutional block, which is a collection of different deep neural layers. Fine-tuned models proved to be efficient in terms of accuracy and computational cost. Fine-tuned MobileNet achieved an average test accuracy of 99.89% on 8 pairs of [healthy, diseased] leaf image sets.
Keywords: deep convolution neural network, depthwise separable convolution, fine-grained classification, MobileNet, plant disease, transfer learning
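A hedged Keras sketch of the transfer-learning route described above: MobileNet pre-trained on ImageNet, with a new binary head for a [healthy, diseased] pair. The input size, head layers, and training settings are illustrative and do not reproduce the paper's Optimized MobileNet or its fine-tuning scheme.

```python
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.MobileNet(weights="imagenet", include_top=False,
                                    input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained depthwise-separable blocks

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # healthy (0) vs diseased (1)
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Typical usage with an image folder, assuming one directory per class:
# train_ds = keras.utils.image_dataset_from_directory(
#     "leaves/", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, epochs=5)
```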
Procedia PDF Downloads 185
3972 A Reliable Multi-Type Vehicle Classification System
Authors: Ghada S. Moussa
Abstract:
Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. Classification of vehicle images faces several problems, such as high intra-class vehicle variations, occlusion, shadow, and illumination. These problems and others must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. Our proposed system uses and compares four well-known classifiers, namely Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree, to classify vehicles into four categories: motorcycles, small, medium, and large. Experiments on a large dataset show that our approach is efficient and reliable in classifying vehicles, with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in terms of both accuracy and robustness, alongside a considerable reduction in execution time. The innovation of the developed system is that it can serve as a framework for many vehicle classification systems.
Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm
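A hedged OpenCV/scikit-learn sketch of the BoW pipeline named above: SIFT descriptors are clustered into a visual vocabulary, each image becomes a histogram over visual words, and an SVM classifies the histograms. The vocabulary size is an assumption, and synthetic noise images stand in for the real vehicle dataset so the snippet runs self-contained.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
sift = cv2.SIFT_create()
K = 50  # visual vocabulary size (assumed)

def descriptors(img):
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

# Synthetic stand-ins for vehicle images (real use: cv2.imread of dataset files).
images = [rng.integers(0, 256, (200, 200), np.uint8) for _ in range(20)]
labels = [i % 4 for i in range(20)]  # 0=motorcycle, 1=small, 2=medium, 3=large

all_desc = np.vstack([descriptors(im) for im in images])
vocab = KMeans(n_clusters=K, n_init=4, random_state=0).fit(all_desc)

def bow_hist(img):
    d = descriptors(img)
    words = vocab.predict(d) if len(d) else np.array([], int)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / max(hist.sum(), 1.0)  # normalized word histogram

X = np.array([bow_hist(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X[:4]))
```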
Procedia PDF Downloads 355
3971 Comparison of Solar Radiation Models
Authors: O. Behar, A. Khellaf, K. Mohammedi, S. Ait Kaci
Abstract:
Up to now, most validation studies have been based on the MBE and RMSE and have therefore focused only on long- and short-term performance to test and classify solar radiation models. This traditional analysis does not take into account the quality of modeling or linearity. In our analysis, we have tested 22 solar radiation models that are capable of providing instantaneous direct and global radiation at any given location worldwide. We introduce a new indicator, which we have named the Global Accuracy Indicator (GAI), to examine the linear relationship between the measured and predicted values and the quality of modeling, in addition to long- and short-term performance. Note that the quality of the model is represented by the t-statistic test, the model linearity by the correlation coefficient, and the long- and short-term performance by the MBE and RMSE, respectively. An important finding of this research is that the use of the GAI avoids the faulty validation that can occur with the traditional methodology, which might result in erroneous prediction of the performance of solar power conversion systems.
Keywords: solar radiation model, parametric model, performance analysis, Global Accuracy Indicator (GAI)
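A hedged sketch of the component statistics named above (MBE, RMSE, the correlation coefficient, and the t-statistic commonly used in solar radiation model validation). The abstract does not give the formula by which GAI combines them, so that step is left out, and the data below are illustrative.

```python
import numpy as np

def validation_stats(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    n = measured.size
    err = predicted - measured
    mbe = err.mean()                            # long-term bias
    rmse = np.sqrt((err ** 2).mean())           # short-term scatter
    r = np.corrcoef(measured, predicted)[0, 1]  # model linearity
    # t-statistic (Stone's test) widely used to judge model quality:
    t = np.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))
    return {"MBE": mbe, "RMSE": rmse, "r": r, "t": t}

# Illustrative hourly global radiation values (W/m^2), not real data.
measured = np.array([420.0, 610.0, 730.0, 655.0, 480.0, 300.0])
predicted = np.array([400.0, 630.0, 700.0, 670.0, 500.0, 290.0])
print(validation_stats(measured, predicted))
```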
Procedia PDF Downloads 348
3970 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques
Authors: Chandu Rathnayake, Isuri Anuradha
Abstract:
Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods rely heavily on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, it delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification: it analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees; this ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the ResNet50 pre-trained model. CNNs are well-suited for image analysis and feature extraction: by training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
Keywords: CNN, random forest, decision tree, machine learning, deep learning
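As a hedged illustration of combining the symptom-based classifiers with the image model, the sketch below averages the class probabilities from a random forest on symptoms and a stand-in for the CNN on X-ray features; the equal weighting and feature shapes are assumptions, since the abstract does not specify the fusion rule.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins: 200 patients, 12 binary symptom indicators, 64 image-embedding
# features (a real system would take the embedding from a ResNet50 X-ray model).
X_sym = rng.integers(0, 2, (200, 12)).astype(float)
X_img = rng.random((200, 64))
y = rng.integers(0, 2, 200)  # 1 = disease present

sym_model = RandomForestClassifier(random_state=0).fit(X_sym, y)
img_model = LogisticRegression(max_iter=1000).fit(X_img, y)  # CNN stand-in

# Late fusion: average the two probability estimates (equal weights assumed).
p = 0.5 * sym_model.predict_proba(X_sym)[:, 1] \
  + 0.5 * img_model.predict_proba(X_img)[:, 1]
prediction = (p >= 0.5).astype(int)
print("fused positive rate:", prediction.mean())
```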
Procedia PDF Downloads 72
3969 Analysis of Enhanced Built-up and Bare Land Index in the Urban Area of Yangon, Myanmar
Authors: Su Nandar Tin, Wutjanun Muttitanon
Abstract:
The availability of free global and historical satellite imagery provides a valuable opportunity for mapping and monitoring built-up areas year by year, constantly and effectively. Land distribution guidelines and identification of changes are important in preparing and reviewing changes in ground overview data. This study utilizes Landsat images spanning thirty years to acquire significant land spread data that are extremely valuable for urban planning. The paper mainly focuses on extracting the built-up area of the city development area from satellite images of Landsat 5, 7, and 8 and Sentinel-2A from USGS at five-year intervals. The purpose is to analyze the year-by-year changes of the urban built-up area and to assess the accuracy of mapping built-up and bare land areas in studying the trend of urban built-up changes over the period from 1990 to 2020. GIS tools such as the raster calculator and built-up area modelling are used in this study, and the indices calculated to obtain a high-accuracy urban built-up area include the enhanced built-up and bareness index (EBBI), the normalized difference built-up index (NDBI), the urban index (UI), the built-up index (BUI), and the normalized difference bareness index (NDBAI). The study thereby points out a variable approach to automatically mapping typical enhanced built-up and bare land changes (EBBI) with simple indices, based on the index outputs. The EBBI output from Sentinel-2A achieves 48.4% accuracy, compared with 15.6% for the Landsat-based index in 1990, while the urban expansion of the study area increased from 43.6% in 1990 to 92.5% in 2020 over the last thirty years.
Keywords: built-up area, EBBI, NDBI, NDBAI, urban index
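A hedged NumPy sketch of the two indices most prominent above, under their commonly published definitions (EBBI per As-syakur et al., 2012: (SWIR − NIR) / (10·sqrt(SWIR + TIR)); NDBI: (SWIR − NIR)/(SWIR + NIR)). The band values are illustrative, and the paper's exact band choices are not stated in the abstract.

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized difference built-up index."""
    return (swir - nir) / (swir + nir)

def ebbi(swir, nir, tir):
    """Enhanced built-up and bareness index (As-syakur et al., 2012 form)."""
    return (swir - nir) / (10.0 * np.sqrt(swir + tir))

# Illustrative pixel values for NIR/SWIR reflectance and a thermal band.
nir = np.array([[0.30, 0.42], [0.25, 0.38]])
swir = np.array([[0.45, 0.35], [0.50, 0.30]])
tir = np.array([[0.60, 0.55], [0.62, 0.58]])

print("NDBI:\n", np.round(ndbi(swir, nir), 3))
print("EBBI:\n", np.round(ebbi(swir, nir, tir), 3))
```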
Procedia PDF Downloads 169
3968 A Lagrangian Hamiltonian Computational Method for Hyper-Elastic Structural Dynamics
Authors: Hosein Falahaty, Hitoshi Gotoh, Abbas Khayyer
Abstract:
The performance of a Hamiltonian-based particle method in the simulation of nonlinear structural dynamics is investigated in terms of stability and accuracy. The governing equation of motion is derived based on Hamilton's principle of least action, while the deformation gradient is obtained according to the Weighted Least Squares method. The hyper-elasticity models of Saint Venant-Kirchhoff and a compressible version similar to Mooney-Rivlin are employed for the calculation of the second Piola-Kirchhoff stress tensor. The stability and accuracy of the numerical model are verified by reproducing critical stress fields in static and dynamic responses. As a result, although the performance of the Hamiltonian-based model is evaluated as acceptable in dealing with intense extensional stress fields, certain instabilities appear in the case of violent collision, which can most likely be attributed to zero-energy singular modes.
Keywords: Hamilton's principle of least action, particle-based method, hyper-elasticity, analysis of stability
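For reference, the Saint Venant-Kirchhoff model named above has a standard closed form for the second Piola-Kirchhoff stress; the textbook form is sketched below (F is the deformation gradient, E the Green-Lagrange strain, I the identity, and λ, μ the Lamé constants). The abstract does not restate the exact constitutive equations used, so this is not necessarily the authors' implementation.

```latex
E = \tfrac{1}{2}\left(F^{\mathsf{T}} F - I\right), \qquad
S = \lambda \, \operatorname{tr}(E) \, I + 2 \mu E
```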
Procedia PDF Downloads 340