Search results for: user classification accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7128


5268 Close-Range Remote Sensing Techniques for Analyzing Rock Discontinuity Properties

Authors: Sina Fatolahzadeh, Sergio A. Sepúlveda

Abstract:

This paper presents advanced developments in close-range, terrestrial remote sensing techniques to enhance the characterization of rock masses. The study integrates two state-of-the-art laser-scanning technologies, the HandySCAN and GeoSLAM laser scanners, to extract high-resolution geospatial data for rock mass analysis. These instruments offer high accuracy, precision, low acquisition time, and high efficiency in capturing intricate geological features in small to medium-sized outcrops and slope cuts. Using the HandySCAN and GeoSLAM laser scanners facilitates real-time, three-dimensional mapping of rock surfaces, enabling comprehensive assessments of rock mass characteristics. The collected data provide valuable insights into structural complexities, surface roughness, and discontinuity patterns, which are essential for geological and geotechnical analyses. The synergy of these advanced remote sensing technologies contributes to a more precise and straightforward understanding of rock mass behavior. In this case, the main parameters of RQD, joint spacing, persistence, aperture, roughness, infill, weathering, water condition, and joint orientation in a slope cut along the Sea-to-Sky Highway, BC, were remotely analyzed to calculate and evaluate the Rock Mass Rating (RMR) and Geological Strength Index (GSI) classification systems. Automatic and manual analyses of the acquired data are then compared with field measurements. The results show the usefulness of the proposed remote sensing methods and their good agreement with the actual field data.

Keywords: remote sensing, rock mechanics, rock engineering, slope stability, discontinuity properties

Procedia PDF Downloads 62
5267 Usability Evaluation of Rice Doctor as a Diagnostic Tool for Agricultural Extension Workers in Selected Areas in the Philippines

Authors: Jerome Cayton Barradas, Rowely Parico, Lauro Atienza, Poornima Shankar

Abstract:

Effective agricultural extension is essential in facilitating improvements in various agricultural areas. One way of doing this is through information and communication technologies (ICTs) like Rice Doctor (RD), an app-based diagnostic tool that provides accurate and timely diagnosis and management recommendations for more than 80 crop problems. This study aims to evaluate the usability of RD by determining its effectiveness, efficiency, and user satisfaction in making an accurate and timely diagnosis. It also aims to identify other factors that affect RD usability. This is done by comparing RD with two other diagnostic methods: visual identification-based diagnosis and reference-guided diagnosis. The study was implemented in three rice-producing areas and involved 96 extension workers. Respondents accomplished a self-administered survey and participated in group discussions. The data collected were then subjected to qualitative and quantitative analysis. Most of the respondents were satisfied with RD and believed that references are needed in assuring the accuracy of diagnosis. The majority found it efficient and easy to use. Some found it confusing and complicated, but this is because of their unfamiliarity with RD. Most users were also able to achieve an accurate diagnosis, demonstrating its effectiveness. Lastly, although users have reservations, they are satisfied and open to using RD. The study also found that visual identification skills are important in using RD and that capacity development and improved access to RD devices are needed. From these results, the following are recommended to improve RD usability: review and upgrade the diagnostic keys, expand RD content further, initiate capacity development for agricultural extension workers (AEWs), and prepare and implement an RD communication plan.

Keywords: agricultural extension, crop protection, information and communication technologies, rice doctor

Procedia PDF Downloads 249
5266 Recommender Systems Using Ensemble Techniques

Authors: Yeonjeong Lee, Kyoung-jae Kim, Youngtae Kim

Abstract:

This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences precisely. The proposed model consists of two steps. In the first step, this study uses logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group. Then, this study combines the results of each predictor using multi-model ensemble techniques such as bagging and bumping. In the second step, this study uses market basket analysis to extract association rules for co-purchased products. Finally, through the two steps above, the system selects customers who have a high likelihood of purchasing products in each product group and recommends suitable products to them from the same or different product groups. We test the usability of the proposed system by using a prototype with real-world transaction and profile data. In addition, we survey user satisfaction with the product lists recommended by the proposed system and with randomly selected product lists. The results also show that the proposed system may be useful in real-world online shopping stores.
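
Below is a minimal sketch (not the authors' code) of the two-step idea: an ensemble of logistic regression, a decision tree, and a small neural network scores each customer's purchase likelihood per product group, and a naive market-basket pass derives co-purchase rules. Soft voting stands in here for the bagging/bumping combination described above, and all column names and thresholds are illustrative assumptions.

```python
# Sketch of the two-step recommender idea; soft voting replaces bagging/bumping.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier

def likely_buyers(X_profiles, y_purchased, threshold=0.6):
    """Step 1: multi-model ensemble scoring purchase likelihood per product group."""
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("dt", DecisionTreeClassifier(max_depth=5)),
                    ("ann", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500))],
        voting="soft")
    ensemble.fit(X_profiles, y_purchased)
    return ensemble.predict_proba(X_profiles)[:, 1] >= threshold

def co_purchase_rules(basket, min_support=0.05, min_confidence=0.3):
    """Step 2: naive market-basket analysis on a 0/1 customer-by-product matrix."""
    n = len(basket)
    rules = []
    for a in basket.columns:
        for b in basket.columns:
            if a == b:
                continue
            both = ((basket[a] == 1) & (basket[b] == 1)).sum() / n
            support_a = (basket[a] == 1).mean()
            if support_a > 0 and both >= min_support and both / support_a >= min_confidence:
                rules.append((a, b, both, both / support_a))
    return pd.DataFrame(rules, columns=["antecedent", "consequent", "support", "confidence"])
```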

Keywords: product recommender system, ensemble technique, association rules, decision tree, artificial neural networks

Procedia PDF Downloads 293
5265 HPPDFIM-HD: Transaction Distortion and Connected Perturbation Approach for Hierarchical Privacy Preserving Distributed Frequent Itemset Mining over Horizontally-Partitioned Dataset

Authors: Fuad Ali Mohammed Al-Yarimi

Abstract:

Many algorithms have been proposed to provide privacy preservation in data mining. These protocols are based on two main approaches, namely the perturbation approach and the cryptographic approach. The first is based on perturbation of the valuable information, while the second uses cryptographic techniques. The perturbation approach is much more efficient but has reduced accuracy, while the cryptographic approach can provide solutions with perfect accuracy. However, the cryptographic approach is much slower and requires considerable computation and communication overhead. In this paper, a new scalable protocol is proposed which combines the advantages of perturbation and distortion with the cryptographic approach to perform privacy-preserving distributed frequent itemset mining on horizontally distributed data. Both the privacy and the performance characteristics of the proposed protocol are studied empirically.
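
As an illustration of the perturbation side only (this is not the HPPDFIM-HD protocol; its transaction distortion and cryptographic steps are not reproduced here), the sketch below shows each site adding Gaussian noise to its local itemset support counts before an aggregator sums them against a global support threshold. The noise scale is an assumed parameter.

```python
# Illustrative Gaussian-perturbation sketch for distributed support counting.
import numpy as np

def perturb_local_supports(support_counts, sigma=2.0, rng=None):
    """Each site perturbs its local itemset support counts before sharing."""
    rng = rng or np.random.default_rng()
    return {item: count + rng.normal(0.0, sigma) for item, count in support_counts.items()}

def aggregate_global_supports(noisy_supports_per_site, min_support):
    """Aggregator sums the noisy counts and keeps itemsets above the threshold."""
    totals = {}
    for site in noisy_supports_per_site:
        for item, count in site.items():
            totals[item] = totals.get(item, 0.0) + count
    return {item: c for item, c in totals.items() if c >= min_support}
```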

Keywords: anonymity data, data mining, distributed frequent itemset mining, gaussian perturbation, perturbation approach, privacy preserving data mining

Procedia PDF Downloads 502
5264 Recent Developments in the Application of Deep Learning to Stock Market Prediction

Authors: Shraddha Jain Sharma, Ratnalata Gupta

Abstract:

Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy than traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area with its prominent features, and the significant problems or issues that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.

Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume

Procedia PDF Downloads 87
5263 Ultra-High Precision Diamond Turning of Infrared Lenses

Authors: Khaled Abou-El-Hossein

Abstract:

The presentation will address the features of two IR convex lenses that have been manufactured using an ultra-high precision machining centre based on single-point diamond turning. The lenses are made from silicon and germanium with a radius of curvature of 500 mm. Because of the brittle nature of silicon and germanium, the machining parameters were selected in such a way that a ductile regime was achieved. The cutting speed was 800 rpm, while the feed rate and depth of cut were 20 mm/min and 20 µm, respectively. Although both materials have a mono-crystalline microstructure and are quite similar in terms of optical properties, machining of silicon was accompanied by more difficulties in terms of form accuracy compared to germanium machining. The P-V error of the silicon profile was 0.222 µm, while it was only 0.055 µm for the germanium lens. This could be attributed to the accelerated wear that takes place on the tool edge when turning mono-crystalline silicon. Currently, we are using other ranges of the machining parameters in order to determine their optimal range that could yield satisfactory performance in terms of form accuracy when fabricating silicon lenses.

Keywords: diamond turning, optical surfaces, precision machining, surface roughness

Procedia PDF Downloads 314
5262 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans directly depend on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision technology through image processing to determine the soil nutrient composition and the right amount, right time, and right place application of farm input resources like fertilizers, herbicides, and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the area of image processing and data processing, which has been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from the use of neural networks, support vector machines, and fuzzy logic approaches to, most recently, the most effective approach generating excellent results: the deep learning approach of convolutional neural networks for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded results with an average accuracy of 99.58%.
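
A minimal sketch of a deep convolutional classifier of the kind described, written in PyTorch; the input size, channel widths, and number of nutrient classes are assumptions for illustration rather than the authors' architecture.

```python
# Small convolutional classifier for soil-nutrient categories from field images.
import torch
import torch.nn as nn

class SoilNutrientCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):              # x: (batch, 3, H, W) RGB image tensor
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # raw class scores (logits)

model = SoilNutrientCNN()
logits = model(torch.randn(8, 3, 128, 128))   # dummy batch of eight 128x128 images
print(logits.shape)                           # torch.Size([8, 4])
```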

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 312
5261 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first nuclear power unit in South Korea to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: firstly, the plant inventory is investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Secondly, the radiological conditions of systems, structures and components (SSCs) are established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, are established. Fourth, the proper decontamination and dismantling (D&D) technologies are selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 is estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and it was found that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with data from various decommissioning experiences and international/national decommissioning studies. The comparison showed that the radioactive waste amounts from Kori Unit 1 decommissioning were much less than those from the plants decommissioned in the U.S. and were comparable to those from the plants in Europe. This result comes from the difference in disposal cost and clearance criteria (i.e., free release level) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will be useful as important information in establishing the decommissioning plan, including the decommissioning schedule and the waste management strategy for the transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste

Procedia PDF Downloads 206
5260 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand

Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo

Abstract:

An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project's likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and the FTS of construction projects, with a view to identifying the risk factors that are responsible for the observed variance. Data were sourced through interviews, and risk factors were identified by using thematic analysis. Access was obtained to project files from the records of study participants (consultant quantity surveyors), and document analysis was employed in complementing the responses from the interviews. Study findings revealed discrepancies between ECPs and FTS in the region of -14% to +16%. It is opined in this study that the identified risk factors were responsible for the variability observed. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by quantity surveyors. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and successful delivery of construction projects. The findings contribute significantly by providing quantitative confirmation to justify the theoretical conclusions generated in the literature from around the world. This therefore adds to and consolidates existing knowledge.

Keywords: accuracy, design-stage, elemental cost plan, final tender sum

Procedia PDF Downloads 265
5259 Design an Algorithm for Software Development in CBSE Environment Using Feed Forward Neural Network

Authors: Amit Verma, Pardeep Kaur

Abstract:

In software development organizations, component-based software engineering (CBSE) is an emerging paradigm for software development and has gained wide acceptance, as it often results in increased quality of the software product within development time and budget. In component reusability, the main challenge is identifying the right component from large repositories at the right time. The major objective of this work is to provide an efficient algorithm for storage and effective retrieval of components using a neural network and parameters based on user choice through clustering. This research paper aims to propose an algorithm that provides an error-free and automatic process for retrieval of components during component reuse. In this algorithm, keywords (or components) are extracted from the software document, after which the k-means clustering algorithm is applied. Weights are then assigned to those keywords based on their frequency; the ANN predicts whether the correct weight has been assigned to each keyword (or component), and if not, it back-propagates to the initial step and re-assigns the weights. Finally, all keywords are stored in repositories for effective retrieval. The proposed algorithm is very effective in error detection and correction with user-based choice, while supporting the choice of components for reusability and efficient retrieval.
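
A simplified sketch of the storage-and-retrieval idea using scikit-learn: keywords are weighted by frequency (TF-IDF here), grouped with k-means, and a query is answered from the nearest cluster. The ANN-based weight validation and back-propagation step described above is omitted, and all parameters are illustrative.

```python
# Keyword clustering for component storage and retrieval (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def build_repository(component_docs, n_clusters=5):
    vectorizer = TfidfVectorizer(stop_words="english")   # keyword weights ~ frequency
    weights = vectorizer.fit_transform(component_docs)
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit(weights)
    return vectorizer, weights, clusters

def retrieve(query, vectorizer, weights, clusters, component_docs, top_k=3):
    q = vectorizer.transform([query])
    label = clusters.predict(q)[0]                        # nearest cluster of components
    members = [i for i, l in enumerate(clusters.labels_) if l == label]
    scores = cosine_similarity(q, weights[members]).ravel()
    ranked = sorted(zip(members, scores), key=lambda p: -p[1])[:top_k]
    return [(component_docs[i], s) for i, s in ranked]
```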

Keywords: component based development, clustering, back propagation algorithm, keyword based retrieval

Procedia PDF Downloads 376
5258 Parameter Measurement Systems to Evaluate Performance of Archers

Authors: Muhammad Zikril Hakim Md. Azizi, Norhafizan Ahmad, Raja Ariffin Raja Ghazilla

Abstract:

Postural stability, the attention level of the archer and, particularly, the vibrations of the bow itself play a prominent role in determining the athlete's performance. Many techniques and systems have been developed to monitor the parameters of archers during training. In Malaysia, archery coaches tend to use non-scientific methods that they are familiar with to evaluate archer performance. An approach that provides more affordable yet accurate systems to the masses, with a relatively easy system deployment procedure, needs to be proposed. Hence, this project addresses these needs. Three areas of archer parameters were included for data monitoring. Attention level can be measured using an EEG sensor, the centre of mass linked to postural stability can be measured by a foot pressure sensor, and the bow vibrations in three axes are relayed by vibration sensors placed directly on the bow using wireless sensors. An Arduino-based microcontroller is used to relay all the data back to the interfacing systems. The interface systems use the Python language and a C++ framework for the user interface and hardware interfacing. All sensor data can be observed in real time using the in-house applications, and each session can be saved to common files so that the coach and the team can have further discussions and comparisons.
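
A hedged sketch of the data relay path: Python reading comma-separated sensor frames (EEG attention, foot pressure, three-axis bow vibration) from the Arduino-based microcontroller over a serial link and logging them per session. The port name, baud rate, and frame layout are assumptions, not the project's actual protocol.

```python
# Per-session logging of serial sensor frames relayed by the microcontroller.
import csv
import serial  # pyserial

def record_session(port="/dev/ttyUSB0", baud=115200, outfile="session.csv", n_frames=1000):
    with serial.Serial(port, baud, timeout=1) as ser, open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_ms", "eeg_attention", "foot_pressure", "vib_x", "vib_y", "vib_z"])
        for _ in range(n_frames):
            line = ser.readline().decode(errors="ignore").strip()
            if not line:
                continue
            fields = line.split(",")
            if len(fields) == 6:          # one frame: timestamp plus five sensor values
                writer.writerow(fields)
```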

Keywords: archery, graphical user interface, microcontroller, wireless sensor, monitoring system

Procedia PDF Downloads 298
5257 Accurate Position Electromagnetic Sensor Using Data Acquisition System

Authors: Z. Ezzouine, A. Nakheli

Abstract:

This paper presents a high-accuracy position electromagnetic sensor system (HPESS) that is applicable to moving object detection. The authors have developed a high-performance position sensor prototype dedicated to students' laboratories. The challenge was to obtain a highly accurate, real-time sensor that is able to calculate position, length or displacement. An electromagnetic solution based on a two-coil induction principle was adopted. The HPESS converts mechanical motion to electric energy with direct contact. The output signal can then be fed to an electronic circuit. The voltage output change from the sensor is captured by a data acquisition system using LabVIEW software. The displacement of the moving object is determined. The measured data are transmitted to a PC in real time via a DAQ (NI USB-6281). This paper also describes the data acquisition analysis and the conditioning card developed specially for sensor signal monitoring. The data are then recorded and viewed using a user interface written with National Instruments LabVIEW software. On-line displays of the time and voltage of the sensor signal provide a user-friendly data acquisition interface. The sensor provides an uncomplicated, accurate, reliable, inexpensive transducer for highly sophisticated control systems.
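
For comparison, a hedged Python sketch of the acquisition step (the authors use LabVIEW with the NI USB-6281): the sensor voltage is read through the nidaqmx driver and mapped to displacement with an assumed linear calibration; the channel name and calibration constants are illustrative.

```python
# Voltage acquisition and conversion to displacement (assumed linear calibration).
import nidaqmx
from nidaqmx.constants import AcquisitionType

def read_displacement(channel="Dev1/ai0", rate=1000, samples=100,
                      volts_per_mm=0.05, offset_v=0.0):
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan(channel)
        task.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=samples)
        voltages = task.read(number_of_samples_per_channel=samples)
    mean_v = sum(voltages) / len(voltages)
    return (mean_v - offset_v) / volts_per_mm   # displacement in mm (assumed calibration)
```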

Keywords: electromagnetic sensor, accuracy, data acquisition, position measurement

Procedia PDF Downloads 281
5256 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features

Authors: Kyi Pyar Zaw, Zin Mar Kyu

Abstract:

Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system. The character recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features per character are used in character recognition. Basically, there are three steps in character recognition: character segmentation, feature extraction and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method is used for character segmentation. In the feature extraction step, features are extracted in two ways. The first way is that 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. The second way is that the input character is divided into 16 blocks. For each block, although 8 feature values are obtained through the eight-direction chain code frequency extraction method, we define the sum of these 8 feature values as one feature for that block. Therefore, 16 features are extracted from those 16 blocks in the second way. We use the number-of-holes feature to cluster similar characters. We can recognize almost all common Myanmar characters in various font sizes by using these features. All these 25 features are used in both the training part and the testing part. In the classification step, the characters are classified by matching all the features of the input character with the already trained features of the characters.
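
A minimal sketch of the first feature set: a Freeman eight-direction chain code is computed along a traced character contour, and its normalized direction frequencies give the 8-value feature vector. The contour is assumed to be supplied as an ordered list of (row, col) boundary pixels.

```python
# Eight-direction chain-code frequency features for a traced character contour.
import numpy as np

# Freeman chain-code directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code_frequencies(contour):
    counts = np.zeros(8)
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        step = (int(np.sign(r1 - r0)), int(np.sign(c1 - c0)))
        if step in DIRECTIONS:
            counts[DIRECTIONS[step]] += 1
    total = counts.sum()
    return counts / total if total else counts   # 8 normalized frequency features

# The 16-block variant would apply the same routine to each 4x4 sub-region of the
# character image and keep the per-block sums, giving the additional 16 features.
```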

Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation

Procedia PDF Downloads 314
5255 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
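
A brief sketch of the p >> n setting with scikit-learn's PLSRegression: a handful of latent components summarizes hundreds of strongly correlated predictors. The data are synthetic and the component count is an illustrative choice, not the tuned value from this work.

```python
# PLS on high-dimensional, strongly correlated predictors (synthetic example).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 60, 500                              # far more predictors than observations
X = rng.normal(size=(n, p))
X[:, 1:] = 0.9 * X[:, :1] + 0.1 * X[:, 1:]  # induce strong inter-variable correlation
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=5)         # latent components replace raw predictors
print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())
```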

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 48
5254 Recent Developments in Artificial Intelligence and Information Communications Technology

Authors: Dolapo Adeyemo

Abstract:

Technology can be designed specifically for geriatrics and persons with disabilities, or as ICT accessibility solutions. Both solutions stand to benefit from advances in artificial intelligence, that is, computer systems that perform tasks that require human intelligence. Tasks such as decision making, visual perception, speech recognition, and even language translation are useful in both situations and will provide significant benefits to people with temporary or permanent disabilities. This research's goal is to review innovations focused on the use of artificial intelligence that bridge the accessibility gap in technology from a user-centered perspective. A mixed-method approach was used, combining a comprehensive review of academic literature on the subject with semi-structured interviews of users, developers, and technology product owners. The internet of things and artificial intelligence technology are creating new opportunities in the assistive technology space and improving the accessibility of existing technology. Devices are now more adaptable to the needs of the user by learning the behavior of users as they interact with the internet. Accessibility of devices has witnessed significant enhancements that continue to benefit people with disabilities. Examples of other advances identified are prosthetic limbs like robotic arms supported by artificial intelligence, route planning software for the visually impaired, and decision support tools for people with disabilities and even for clinicians who provide care.

Keywords: ICT, IOT, accessibility solutions, universal design

Procedia PDF Downloads 85
5253 Gender Estimation by Means of Quantitative Measurements of Foramen Magnum: An Analysis of CT Head Images

Authors: Thilini Hathurusinghe, Uthpalie Siriwardhana, W. M. Ediri Arachchi, Ranga Thudugala, Indeewari Herath, Gayani Senanayake

Abstract:

The foramen magnum is more likely to be protected than other skeletal remains during high-impact and severely disruptive injuries. Therefore, it is worthwhile to explore whether its measurements can be used to determine human gender, which is vital in forensic and anthropological studies. The idea was to find out whether quantitative measurements of the foramen magnum can be used as an anatomical indicator for human gender estimation and to evaluate the gender-dependent variations of the foramen magnum using quantitative measurements. Randomly selected 113 subjects who underwent CT head scans at Sri Jayawardhanapura General Hospital of Sri Lanka within a period of six months were included in the study. The sample contained 58 males (48.76 ± 14.7 years old) and 55 females (47.04 ± 15.9 years old). The maximum length of the foramen magnum (LFM), maximum width of the foramen magnum (WFM), minimum distance between occipital condyles (MnD) and maximum interior distance between occipital condyles (MxID) were measured. Further, AreaT and AreaR were also calculated. Gender was estimated using binomial logistic regression. The mean values of all explanatory variables (LFM, WFM, MnD, MxID, AreaT, and AreaR) were greater among males than females. All explanatory variables except MnD (p=0.669) were statistically significant (p < 0.05). Significant bivariate correlations were demonstrated by AreaT and AreaR with the explanatory variables. The results showed that WFM and MxID were the best measurements for predicting gender according to binomial logistic regression. The estimated model was: log(p/(1-p)) = 10.391 - 0.136×MxID - 0.231×WFM, where p is the probability of being female. The classification accuracy given by the above model was 65.5%. The quantitative measurements of the foramen magnum can be used as a reliable anatomical marker for human gender estimation in the Sri Lankan context.
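
A worked example of applying the reported model: with log(p/(1-p)) = 10.391 - 0.136×MxID - 0.231×WFM, a pair of measurements is converted to the probability of being female. The measurement values below are illustrative, not taken from the study data.

```python
# Apply the reported binomial logistic model to one (illustrative) measurement pair.
import math

def prob_female(mxid_mm, wfm_mm):
    logit = 10.391 - 0.136 * mxid_mm - 0.231 * wfm_mm
    return 1.0 / (1.0 + math.exp(-logit))   # p = probability of being female

p = prob_female(mxid_mm=44.0, wfm_mm=29.5)   # illustrative values in millimetres
print(f"P(female) = {p:.2f}")                # classify as female if p >= 0.5
```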

Keywords: foramen magnum, forensic and anthropological studies, gender estimation, logistic regression

Procedia PDF Downloads 148
5252 A Method of Effective Planning and Control of Industrial Facility Energy Consumption

Authors: Aleksandra Aleksandrovna Filimonova, Lev Sergeevich Kazarinov, Tatyana Aleksandrovna Barbasova

Abstract:

A method for effective planning and control of industrial facility energy consumption is offered. The method allows the management and full control of complex production facilities to be arranged optimally in accordance with the criteria of minimal technical and economic losses under forecasting control. The method is based on the optimal construction of power efficiency characteristics with the prescribed accuracy. The problem of optimal design of the forecasting model is solved on the basis of three criteria: maximizing the weighted sum of the forecasting points with the prescribed accuracy; solving the problem by standard principles with incomplete statistical data on the basis of minimization of a regularized function; and minimizing the technical and economic losses due to forecasting errors.

Keywords: energy consumption, energy efficiency, energy management system, forecasting model, power efficiency characteristics

Procedia PDF Downloads 388
5251 A Clustering-Based Approach for Weblog Data Cleaning

Authors: Amine Ganibardi, Cherif Arab Ali

Abstract:

This paper addresses the data cleaning issue as a part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers within log files reflect usage activity, i.e., end-users' clicks and underlying user-agents' hits. As Web Usage Mining is interested in end-users' behavior, user-agents' hits are referred to as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: a server records requests interlaced in sequential order regardless of their source or type, and website resources may be set up as requestable interchangeably by end-users and user-agents. The current methods are content-centric, based on filtering heuristics of relevant/irrelevant items in terms of some cleaning attributes, i.e., the website's resource filetype extensions, the website's resources pointed to by hyperlinks/URIs, HTTP methods, user-agents, etc. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: resources may be set up as clickable by end-users regardless of their type, a website's resources may be indexed by frame names without filetype extensions, and web contents are generated and cancelled differently from one end-user to another. In order to overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the requested- and referring-resources attribute levels. It is insensitive to logging content and does not need extra-weblog data. The statistical property used captures the structure of the logging generated by webpage requests in terms of clicks and hits. Since a webpage consists of a single URI and several components, this results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. Thus, the clustering-based method is meant to identify two clusters based on the application of an appropriate distance to the frequency matrix at the requested- and referring-resources levels. As the ratio of clicks to hits is single to multiple, the clicks cluster is the smaller one in number of requests. Hierarchical agglomerative clustering based on a pairwise distance (Gower) and average linkage has been applied to four logfiles of dynamic/responsive websites whose click-to-hits ratios range from 1/2 to 1/15. The optimal clustering, on the basis of average linkage and maximum inter-cluster inertia, always results in two clusters. The evaluation of the smallest cluster, referred to as the clicks cluster, in terms of confusion matrix indicators results in a 97% true positive rate. The content-centric cleaning methods, i.e., conventional and advanced cleaning, resulted in a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web designs without the need for any extra-weblog data. Such an improvement in cleaning quality is likely to refine dependent analyses.
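
A hedged sketch of the clustering step: each resource's frequency profile at the requested/referring levels is clustered hierarchically with average linkage, the tree is cut into two clusters, and the smaller cluster is taken as clicks. The paper uses a Gower distance; plain Euclidean distance is substituted here for simplicity.

```python
# Two-cluster split of weblog resources into clicks and hits (illustrative).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def split_clicks_from_hits(freq_matrix, resource_ids):
    """freq_matrix: rows = resources, cols = frequency features at the
    requested/referring levels (e.g., times requested, times referring)."""
    dist = pdist(freq_matrix, metric="euclidean")        # paper uses Gower distance
    tree = linkage(dist, method="average")               # average-linkage HAC
    labels = fcluster(tree, t=2, criterion="maxclust")   # cut into two clusters
    # Single click -> multiple hits, so the clicks cluster is the smaller one.
    click_label = min(set(labels), key=lambda l: np.sum(labels == l))
    clicks = [r for r, l in zip(resource_ids, labels) if l == click_label]
    hits = [r for r, l in zip(resource_ids, labels) if l != click_label]
    return clicks, hits
```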

Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data

Procedia PDF Downloads 168
5250 Local Pricing Strategy Should Be the Entry Point of Equitable Benefit Sharing and Poverty Reduction in Community Based Forest Management: Some Evidences from Lowland Community Forestry in Nepal

Authors: Dhruba Khatri

Abstract:

Despite the short history of community-based forest management, the community forestry program of Nepal has produced substantial positive effects by organizing local people into local-level institutions called Community Forest User Groups (CFUGs) and managing local forest resources in the line of poverty reduction since its inception in the 1970s. Moreover, each CFUG has collected a community fund from the sale of forest products as well as from non-forestry sources, and the fund has played a vital role in improving the livelihoods of user households living in and around the forests. The specific study sites were selected based on the criteria of i) community forests having a dominance of Sal forests, and ii) forests having 3-5 years of experience of community forest management. The price rates of forest products fixed by the CFUGs and the distribution records were collected from the respective community forests. Nonetheless, the relation between pricing strategy and community fund collection revealed that a small change in the price of forest products could greatly affect community fund collection and the carrying out of forest management, community development, and income generation activities in the line of poverty reduction at the local level.

Keywords: benefit sharing, community forest, equitable, Nepal

Procedia PDF Downloads 379
5249 Comparison of Solar Radiation Models

Authors: O. Behar, A. Khellaf, K. Mohammedi, S. Ait Kaci

Abstract:

Up to now, most validation studies have been based on the MBE and RMSE, and therefore focused only on long- and short-term performance to test and classify solar radiation models. This traditional analysis does not take into account the quality of modeling and linearity. In our analysis, we have tested 22 solar radiation models that are capable of providing instantaneous direct and global radiation at any given location worldwide. We introduce a new indicator, which we named the Global Accuracy Indicator (GAI), to examine the linear relationship between the measured and predicted values and the quality of modeling, in addition to long- and short-term performance. Note that the quality of the model is represented by the t-statistic test, the model linearity is given by the correlation coefficient, and the long- and short-term performance are known respectively from the MBE and RMSE. An important finding of this research is that the use of GAI allows faulty validation to be avoided when using the traditional methodology, which might result in erroneous prediction of solar power conversion system performance.
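
For reference, the four indicators that GAI builds on can be computed as below; the exact combination into GAI is defined in the paper and is not reproduced here. The t-statistic follows Stone's form commonly used in solar radiation model validation.

```python
# Traditional validation indicators for a candidate solar radiation model.
import numpy as np

def validation_indicators(measured, predicted):
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    n = measured.size
    diff = predicted - measured
    mbe = diff.mean()                              # long-term (bias) performance
    rmse = np.sqrt((diff ** 2).mean())             # short-term performance
    r = np.corrcoef(measured, predicted)[0, 1]     # linearity
    t_stat = np.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))  # quality of modelling
    return {"MBE": mbe, "RMSE": rmse, "r": r, "t": t_stat}
```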

Keywords: solar radiation model, parametric model, performance analysis, Global Accuracy Indicator (GAI)

Procedia PDF Downloads 346
5248 Analysis of Enhanced Built-up and Bare Land Index in the Urban Area of Yangon, Myanmar

Authors: Su Nandar Tin, Wutjanun Muttitanon

Abstract:

The availability of free global and historical satellite imagery provides a valuable opportunity for mapping and monitoring the built-up area year by year, constantly and effectively. Land distribution guidelines and identification of changes are important in preparing and reviewing changes in the ground overview data. This study utilizes Landsat images covering thirty years of information to acquire significant land spread data that are extremely valuable for urban planning. This paper mainly focuses on the basics of extracting the built-up area of the city development area from the satellite images of LANDSAT 5, 7 and 8 and Sentinel-2A from USGS at five-year intervals. The purpose is to analyse the change of the urban built-up area year by year and to assess the accuracy of mapping built-up and bare land areas, studying the trend of urban built-up changes over the period from 1990 to 2020. GIS tools such as the raster calculator and built-up area modelling are used in this study to calculate the indices, which include the enhanced built-up and bareness index (EBBI), the normalized difference built-up index (NDBI), the urban index (UI), the built-up index (BUI) and the normalized difference bareness index (NDBAI), in order to obtain a high-accuracy urban built-up area. Therefore, this study points out a viable approach to automatically mapping typical enhanced built-up and bare land changes (EBBI) with simple indices according to the outputs of the indexes. The output of the enhanced built-up and bareness index (EBBI) from Sentinel-2A achieved 48.4% accuracy, compared with 15.6% for the other indices from the Landsat images in 1990, while the urban expansion area in the study area increased from 43.6% in 1990 to 92.5% in 2020 over the last thirty years.
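
A hedged sketch of two of the indices, using the formulations common in the literature (the band choices per sensor are assumptions): NDBI = (SWIR - NIR)/(SWIR + NIR) and EBBI = (SWIR - NIR)/(10*sqrt(SWIR + TIR)).

```python
# Built-up indices computed on per-band arrays (bands loaded e.g. via rasterio).
import numpy as np

def ndbi(swir, nir):
    return (swir - nir) / (swir + nir)

def ebbi(swir, nir, tir):
    return (swir - nir) / (10.0 * np.sqrt(swir + tir))

# Example with small synthetic arrays standing in for Landsat / Sentinel-2 bands.
swir = np.array([[0.30, 0.22], [0.35, 0.18]])
nir = np.array([[0.25, 0.30], [0.20, 0.33]])
tir = np.array([[0.45, 0.40], [0.50, 0.38]])
print(ndbi(swir, nir))
print(ebbi(swir, nir, tir))
```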

Keywords: built-up area, EBBI, NDBI, NDBAI, urban index

Procedia PDF Downloads 165
5247 A Lagrangian Hamiltonian Computational Method for Hyper-Elastic Structural Dynamics

Authors: Hosein Falahaty, Hitoshi Gotoh, Abbas Khayyer

Abstract:

The performance of a Hamiltonian-based particle method in the simulation of nonlinear structural dynamics is investigated in terms of stability and accuracy. The governing equation of motion is derived based on Hamilton's principle of least action, while the deformation gradient is obtained according to the weighted least squares method. The hyper-elasticity models of Saint Venant-Kirchhoff and a compressible version similar to Mooney-Rivlin are employed for the calculation of the second Piola-Kirchhoff stress tensor. The stability and accuracy of the numerical model are verified by reproducing critical stress fields in static and dynamic responses. As a result, although the performance of the Hamiltonian-based model is evaluated as acceptable in dealing with intense extensional stress fields, instabilities appear in the case of violent collision, which can most likely be attributed to zero-energy singular modes.
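
For reference, the standard Saint Venant-Kirchhoff relation used for the second Piola-Kirchhoff stress, with F the deformation gradient and lambda, mu the Lamé constants, is:

```latex
\begin{aligned}
  \mathbf{E} &= \tfrac{1}{2}\left(\mathbf{F}^{\mathsf{T}}\mathbf{F} - \mathbf{I}\right), \\
  \mathbf{S} &= \lambda \,\operatorname{tr}(\mathbf{E})\,\mathbf{I} + 2\mu\,\mathbf{E}.
\end{aligned}
```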

Keywords: Hamilton's principle of least action, particle-based method, hyper-elasticity, analysis of stability

Procedia PDF Downloads 339
5246 The Impact of ChatGPT on the Healthcare Domain: Perspectives from Healthcare Majors

Authors: Su Yen Chen

Abstract:

ChatGPT has shown both strengths and limitations in clinical, educational, and research settings, raising important concerns about accuracy, transparency, and ethical use. Despite an improved understanding of user acceptance and satisfaction, there is still a gap in how general AI perceptions translate into practical applications within healthcare. This study focuses on examining the perceptions of ChatGPT's impact among 266 healthcare majors in Taiwan, exploring its implications for their career development, as well as its utility in clinical practice, medical education, and research. By employing a structured survey with precisely defined subscales, this research aims to probe the breadth of ChatGPT's applications within healthcare, assessing both the perceived benefits and the challenges it presents. Additionally, to further enhance the comprehensiveness of our methodology, we have incorporated qualitative data collection methods, which provide complementary insights to the quantitative findings. The findings from the survey reveal that perceptions and usage of ChatGPT among healthcare majors vary significantly, influenced by factors such as its perceived utility, risk, novelty, and trustworthiness. Graduate students and those who perceive ChatGPT as more beneficial and less risky are particularly inclined to use it more frequently. This increased usage is closely linked to significant impacts on personal career development. Furthermore, ChatGPT's perceived usefulness and novelty contribute to its broader impact within the healthcare domain, suggesting that both innovation and practical utility are key drivers of acceptance and perceived effectiveness in professional healthcare settings. Trust emerges as an important factor, especially in clinical settings where the stakes are high. The trust that healthcare professionals place in ChatGPT significantly affects its integration into clinical practice and influences outcomes in medical education and research. The reliability and practical value of ChatGPT are thus critical for its successful adoption in these areas. However, an interesting paradox arises with regard to the ease of use. While making ChatGPT more user-friendly is generally seen as beneficial, it also raises concerns among users who have lower levels of trust and perceive higher risks associated with its use. This complex interplay between ease of use and safety concerns necessitates a careful balance, highlighting the need for robust security measures and clear, transparent communication about how AI systems work and their limitations. The study suggests several strategic approaches to enhance the adoption and integration of AI in healthcare. These include targeted training programs for healthcare professionals to increase familiarity with AI technologies, reduce perceived risks, and build trust. Ensuring transparency and conducting rigorous testing are also vital to foster trust and reliability. Moreover, comprehensive policy frameworks are needed to guide the implementation of AI technologies, ensuring high standards of patient safety, privacy, and ethical use. These measures are crucial for fostering broader acceptance of AI in healthcare, as the study contributes to enriching the discourse on AI's role by detailing how various factors affect its adoption and impact.

Keywords: ChatGPT, healthcare, survey study, IT adoption, behaviour, application, concerns

Procedia PDF Downloads 24
5245 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image mis-registration. The effect of the Doppler centroid is analyzed in this paper using multiple sets of data collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which impacts the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the RDA (Range Doppler Algorithm) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also tries to bring out the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.

Keywords: ambiguous target, Doppler Centroid, image registration, Airborne SAR

Procedia PDF Downloads 215
5244 Real-Time Lane Marking Detection Using Weighted Filter

Authors: Ayhan Kucukmanisa, Orhan Akbulut, Oguzhan Urhan

Abstract:

Nowadays, advanced driver assistance systems (ADAS) have become popular, since they enable safe driving. Lane detection is a vital step for ADAS. The performance of the lane detection process is critical for obtaining a high-accuracy lane departure warning system (LDWS). Challenging factors such as road cracks, erosion of lane markings, and weather conditions might affect the performance of a lane detection system. In this paper, a 1-D weighted filter based on row filtering is proposed to detect lane markings. The 2-D input image is filtered by the 1-D weighted filter, considering four pixel values located symmetrically around the center of the candidate pixel. Performance evaluation is carried out by two metrics, the true positive rate (TPR) and the false positive rate (FPR). Experimental results demonstrate that the proposed approach provides better lane marking detection accuracy compared to previous methods while providing real-time processing performance.
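
A hedged sketch of such a row-wise weighted filter: each pixel is compared against two pixels on each side at symmetric offsets, so a bright marking flanked by darker road surface gives a high response. The offsets, weights, and threshold are illustrative assumptions, not the authors' exact values.

```python
# Row-wise weighted filter producing a binary lane-marking candidate mask.
import numpy as np

def row_weighted_filter(gray, offset1=5, offset2=10, threshold=40):
    """gray: 2-D grayscale image. Returns a binary mask of lane-marking candidates."""
    img = gray.astype(np.int32)
    h, w = img.shape
    o1, o2 = offset1, offset2
    response = np.zeros_like(img)
    center = img[:, o2:w - o2]
    left_near = img[:, o2 - o1:w - o2 - o1]
    right_near = img[:, o2 + o1:w - o2 + o1]
    left_far = img[:, 0:w - 2 * o2]
    right_far = img[:, 2 * o2:w]
    # High response where the center pixel is brighter than its symmetric neighbors.
    response[:, o2:w - o2] = 4 * center - left_near - right_near - left_far - right_far
    return (response > threshold).astype(np.uint8)
```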

Keywords: lane marking filter, lane detection, ADAS, LDWS

Procedia PDF Downloads 191
5243 The Impact of the Fitness Center Ownership Structure on the Service Quality Perception in the Fitness in Serbia

Authors: Dragan Zivotic, Mirjana Ilic, Aleksandra Perovic, Predrag Gavrilovic

Abstract:

As with the provision of other services, the service quality perception is one of the key factors that the modern manager must pay attention to. Countries in which the state regulation is in transition also have specific features in providing fitness services. Identification of the dimensions in which the most significant different service quality perception between different types of fitness centers, enables managers to profile the offer according to the wishes and expectations of users. The aim of the paper was the comparison of the quality of services perception in the field of fitness in Serbia between three categories of fitness centers: the privately owned centers, the publicly owned centers, and the Public-private partnership centers. For this research 350 respondents of both genders (174 men and 176 women) were interviewed, aged between 18 and 68 years, being beneficiaries of fitness services for at least 1 year. Administered questionnaire with 100 items provided information about the 15 basic areas in which they expressed the service quality perception in the gym. The core sample was composed of 212 service users in private fitness centers, 69 service users in public fitness centers and 69 service users in the public-private partnership. Sub-samples were equal in representation of women and men, as well as by age and length of use of fitness services. The obtained results were subject of univariate analysis with the Kruskal-Wallis non-parametric analysis of variance. Significant differences between the analyzed sub-samples were not found solely in the areas of rapid response and quality outcomes. In the multivariate model, the results were processed by backward stepwise discriminant analysis that extracted 3 areas that maximize the differences between sub-samples: material and technical basis, secondary facilities and coaches. By applying the classification function 93.87% of private centers services users, 62.32% of public centers services users and 85.51% of the public-private partnership centers users of services were correctly classified (total 86.00%). These results allow optimizing the allocation of the necessary resources in profiling offers of a fitness center in order to optimally adjust it to the user’s needs and expectations.

Keywords: fitness, quality perception, management, public ownership, private ownership, public-private partnership, discriminative analysis

Procedia PDF Downloads 291
5242 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding scheme (e.g. Fisher Vector, Vector of Locally Aggregated Descriptor) with low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layer of convolutional neural networks (CNN) have richer information but lack of geometric invariance. For scene classification, there are scattered objects with different size, category, layout, number and so on. It is crucial to find the distinctive objects in scene as well as their co-occurrence relationship. In this paper, we propose a method to take advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs that trained on ImageNet and Places dataset separately are used as the pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of different CNNs at multiple scales, it is found that each CNN works better in different scale ranges. A scale-wise CNN adaption is reasonable since objects in scene are at its own specific scale. Second, a fisher kernel is applied to aggregate a global representation at each scale and then to merge into a single vector by using a post-processing method called scale-wise normalization. The essence of Fisher Vector lies on the accumulation of the first and second order differences. Hence, the scale-wise normalization followed by average pooling would balance the influence of each scale since different amount of features are extracted. Third, the Fisher vector representation based on the deep convolutional features is followed by a linear Supported Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that the scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets can boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at single scale). The result is comparable to state-of-art performance which proves that the representation can be applied to other visual recognition tasks.

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 327
5241 Accessible Mobile Augmented Reality App for Art Social Learning Based on Technology Acceptance Model

Authors: Covadonga Rodrigo, Felipe Alvarez Arrieta, Ana Garcia Serrano

Abstract:

Mobile augmented reality technologies have become very popular in recent years in the educational field. Researchers have studied how these technologies improve student engagement and understanding of the learning process. However, few studies have been made regarding the accessibility of these new technologies applied to the digital humanities. The goal of our research is to develop an accessible mobile application with embedded augmented reality main characters of the artwork and gamification events accompanied by multi-sensorial activities. The mobile app conducts a learning itinerary around the artistic work, driving the user experience in and out of the museum. The learning design follows the inquiry-based methodology, and social learning is conducted through interaction with social networks. The software application is user-centered designed, following the universal design for learning (UDL) principles to assure the best level of accessibility for all. The mobile augmented reality application starts by recognizing a marker on a masterpiece of a museum using the camera of the mobile device. The augmented reality information (history, author, 3D images, audio, quizzes) is shown through virtual main characters that come out from the artwork. To comply with the UDL principles, we use a version of the technology acceptance model (TAM) to study ease of use and perception of usefulness, extended by the authors with specific indicators for measuring accessibility issues. Following a rapid prototyping method for development, the first app has recently been produced, fulfilling the EN 301549 standard and the W3C accessibility guidelines for mobile development. A TAM-based web questionnaire with 214 participants with different kinds of disabilities was previously conducted to gather information and feedback on user preferences regarding the artistic works of the Museo del Prado, the level of acceptance of technology innovations, and the ease of use of mobile elements. Preliminary results show that people with disabilities felt very comfortable while using mobile apps and internet connections. The augmented reality elements seem to offer an added value that is highly engaging and motivating for students.

Keywords: H.5.1 (multimedia information systems), artificial, augmented and virtual realities, evaluation/methodology

Procedia PDF Downloads 132
5240 Interactivity as a Predictor of Intent to Revisit Sports Apps

Authors: Young Ik Suh, Tywan G. Martin

Abstract:

Sports apps in a smartphone provide up-to-date information and fast and convenient access to live games. The market of sports apps has emerged as the second fastest growing app category worldwide. Further, many sports fans use their smartphones to know the schedule of sporting events, players’ position and bios, videos and highlights. In recent years, a growing number of scholars and practitioners alike have emphasized the importance of interactivity with sports apps, hypothesizing that interactivity plays a significant role in enticing sports apps users and that it is a key component in measuring the success of sports apps. Interactivity in sports apps focuses primarily on two functions: (1) two-way communication and (2) active user control, neither of which have been applicable through traditional mass media and communication technologies. Therefore, the purpose of this study is to examine whether the interactivity function on sports apps leads to positive outcomes such as intent to revisit. More specifically, this study investigates how three major functions of interactivity (i.e., two-way communication, active user control, and real-time information) influence the attitude of sports apps users and their intent to revisit the sports apps. The following hypothesis is proposed; interactivity functions will be positively associated with both attitudes toward sports apps and intent to revisit sports apps. The survey questionnaire includes four parts: (1) an interactivity scale, (2) an attitude scale, (3) a behavioral intention scale, and (4) demographic questions. Data are to be collected from ESPN apps users. To examine the relationships among the observed and latent variables and determine the reliability and validity of constructs, confirmatory factor analysis (CFA) is conducted. Structural equation modeling (SEM) is utilized to test hypothesized relationships among constructs. Additionally, this study compares the proposed interactivity model with a rival model to identify the role of attitude as a mediating factor. The findings of the current sports apps study provide several theoretical and practical contributions and implications by extending the research and literature associated with the important role of interactivity functions in sports apps and sports media consumption behavior. Specifically, this study may improve the theoretical understandings of whether the interactivity functions influence user attitudes and intent to revisit sports apps. Additionally, this study identifies which dimensions of interactivity are most important to sports apps users. From practitioners’ perspectives, this findings of this study provide significant implications. More entrepreneurs and investors in the sport industry need to recognize that high-resolution photos, live streams, and up-to-date stats are in the sports app, right at sports fans fingertips. The result will imply that sport practitioners may need to develop sports mobile apps that offer greater interactivity functions to attract sport fans.

Keywords: interactivity, two-way communication, active user control, real time information, sports apps, attitude, intent to revisit

Procedia PDF Downloads 145
5239 An Auxiliary Technique for Coronary Heart Disease Prediction by Analyzing Electrocardiogram Based on ResNet and Bi-Long Short-Term Memory

Authors: Yang Zhang, Jian He

Abstract:

Heart disease is one of the leading causes of death in the world, and coronary heart disease (CHD) is one of the major heart diseases. The electrocardiogram (ECG) is widely used in the detection of heart diseases, but the traditional manual method for CHD prediction by analyzing the ECG requires a great deal of professional knowledge from doctors. This paper introduces a sliding window and the continuous wavelet transform (CWT) to transform ECG signals into images, and then ResNet and Bi-LSTM are introduced to build the ECG feature extraction network (namely ECGNet). Finally, an auxiliary system for coronary heart disease prediction was developed based on a modified ResNet18 and Bi-LSTM, and the public ECG dataset of CHD from MIMIC-III was used to train and test the system. The experimental results show that the accuracy of the method is 83%, and the F1-score is 83%. Compared with the available methods for CHD prediction based on ECG, such as kNN, decision trees, VGGNet, etc., this method not only improves the prediction accuracy but also avoids the degradation phenomenon of deep learning networks.
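
A hedged architectural sketch of this kind of network in PyTorch (not the authors' exact ECGNet): each sliding-window scalogram (e.g., produced with a CWT such as pywt.cwt) is encoded by a ResNet-18 backbone, a bidirectional LSTM aggregates the window sequence, and a linear head outputs CHD / non-CHD logits. All sizes are illustrative.

```python
# ResNet-18 per-window encoder followed by a Bi-LSTM over the window sequence.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ECGNetSketch(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, num_classes=2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.backbone = backbone
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, scalograms):            # (batch, windows, 3, H, W) scalogram images
        b, w = scalograms.shape[:2]
        feats = self.backbone(scalograms.flatten(0, 1)).view(b, w, -1)
        seq, _ = self.bilstm(feats)
        return self.head(seq[:, -1])          # logits from the last time step

model = ECGNetSketch()
logits = model(torch.randn(2, 6, 3, 64, 64))  # 2 records, 6 windows each
print(logits.shape)                           # torch.Size([2, 2])
```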

Keywords: Bi-LSTM, CHD, ECG, ResNet, sliding window

Procedia PDF Downloads 85