Search results for: maximal data sets

23210 Interactive IoT-Blockchain System for Big Data Processing

Authors: Abdallah Al-ZoubI, Mamoun Dmour

Abstract:

The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics, and education, to name a few. IoT active endpoint sensors and devices exceeded the 12 billion mark in 2021 and are expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions due to logistic issues such as supply chain interruptions, shortages of shipping containers and port congestion. An IoT-blockchain system is proposed to handle big data generated by a distributed network of sensors and controllers in an interactive manner. The system is designed on the Ethereum platform, which utilizes smart contracts, programmed in Solidity, to execute and manage data generated by IoT sensors and devices such as the Raspberry Pi 4 (running Raspbian) with add-on hardware security modules. The proposed system will run a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed, controlled IoT ecosystem run by blockchain in which a number of distributed IoT devices can communicate and interact. A prototype has been deployed with three IoT handling units distributed over a wide geographical space in order to examine its feasibility, performance and costs. Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed and throughput are met.
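
As an illustration of the interaction pattern described (not taken from the paper), the following Python sketch uses the web3 library to push an IoT sensor reading's IPFS content hash to a Solidity contract; the contract address, ABI, and the function name storeReading are hypothetical placeholders.

```python
# Minimal sketch, assuming a deployed Solidity contract that exposes
# storeReading(string deviceId, string ipfsHash); the address, account
# handling, and function name are hypothetical placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local Ethereum node

contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=[{
        "name": "storeReading",
        "type": "function",
        "stateMutability": "nonpayable",
        "inputs": [{"name": "deviceId", "type": "string"},
                   {"name": "ipfsHash", "type": "string"}],
        "outputs": [],
    }],
)

def record_sensor_reading(device_id: str, ipfs_hash: str, account: str) -> str:
    """Submit the IPFS hash of a sensor payload and return the tx hash."""
    tx = contract.functions.storeReading(device_id, ipfs_hash).transact(
        {"from": account})
    return tx.hex()
```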

Keywords: IoT devices, blockchain, Ethereum, big data

Procedia PDF Downloads 133
23209 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System

Authors: Abdul-Rahman Al-Ali

Abstract:

As the number of mobile devices grows exponentially, about 50 billion devices are estimated to be connected to the Internet by the year 2020; by the end of this decade, an average of eight connected devices per person worldwide is expected. The 50 billion devices are not mobile phones and data-browsing gadgets only, but machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) concept has recently emerged as one of the key technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IED) and Distributed Energy Resources (DER) are major IoT objects that can be made addressable using IPv6. These objects are called the smart grid internet of things (SG-IoT). The SG-IoT generates big data that requires high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility company's control and data centers cannot handle such a large number of devices, high-speed processing, and massive data storage. Building large data center infrastructure takes a long time and requires widespread communication networks and huge capital investment, while maintaining and upgrading control and data centers' infrastructure and communication networks, as well as updating and renewing software licenses, collectively incurs additional cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can serve as a smart grid enabler to replace the legacy utility data centers. The talk will highlight the role of IoT, cloud computing services and their development models within smart grid technologies.

Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances

Procedia PDF Downloads 308
23208 Statistical Analysis of Interferon-γ for the Effectiveness of an Anti-Tuberculous Treatment

Authors: Shishen Xie, Yingda L. Xie

Abstract:

Tuberculosis (TB) is a potentially serious infectious disease that remains a health concern. The Interferon Gamma Release Assay (IGRA) is a blood test used to determine whether an individual is infected with tuberculosis. This study applies statistical analysis to the clinical data on interferon-gamma levels of seventy-three subjects who were diagnosed with pulmonary TB and underwent anti-tuberculous treatment. Data analysis is performed to determine whether there is a significant decline in interferon-gamma levels for the subjects over a period of six months, and to infer whether the anti-tuberculous treatment is effective.
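
The abstract does not name its exact test; one plausible reading is a paired comparison of baseline and six-month interferon-gamma levels, sketched below in Python with synthetic placeholder values (not the clinical data).

```python
# A minimal sketch, assuming paired baseline and six-month IFN-gamma
# measurements for 73 subjects; the arrays are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(loc=10.0, scale=2.0, size=73)            # IU/mL, hypothetical
six_months = baseline - rng.normal(loc=1.5, scale=1.0, size=73)

t_stat, p_value = stats.ttest_rel(baseline, six_months)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate a significant decline in
# interferon-gamma levels, consistent with an effective treatment.
```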

Keywords: data analysis, interferon gamma release assay, statistical methods, tuberculosis infection

Procedia PDF Downloads 293
23207 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process: the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key point for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using the part of speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other containing the non-assessment-related data. The second task is to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part of speech feature with the performance of the same algorithms using the n-grams feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature performed very well on the first task. On the second task, however, models that used the part of speech feature underperformed in comparison with models that used unigram and bigram features.
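
To make the PoS-feature idea concrete, the sketch below (an assumption-laden illustration, not the paper's code) replaces each token with its part-of-speech tag and feeds tag n-grams to a linear SVM; the example sentences and labels are invented.

```python
# Minimal sketch of PoS-tag features for short-text classification.
# May require: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def pos_tags(text: str) -> str:
    """Replace each token with its part-of-speech tag."""
    tokens = nltk.word_tokenize(text)
    return " ".join(tag for _, tag in nltk.pos_tag(tokens))

feedback = ["The exam was far too long", "Great lab facilities on campus"]
labels = ["assessment", "non-assessment"]        # invented examples

model = make_pipeline(
    CountVectorizer(preprocessor=pos_tags, ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(feedback, labels)
print(model.predict(["The quiz questions were confusing"]))
```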

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 127
23206 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media

Authors: Jinghui Peng, Shanyu Tang, Jia Li

Abstract:

Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is an imminent demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data is then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain between original VoIP streams and stego VoIP streams are compared in turn using the t-test, achieving a p-value of 7.5686E-176, which is below the threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
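
The core FFT step can be sketched as follows in Python; the frames below are synthetic stand-ins for VoIP streams, and the small perturbation is an invented placeholder for embedded data, not the paper's embedding method.

```python
# Sketch: move windowed streams to the frequency domain and compare
# mean power per frame between cover and stego signals with a t-test.
import numpy as np
from scipy import stats

def power_spectrum(frames: np.ndarray) -> np.ndarray:
    """Power of the one-sided FFT of each time-domain frame."""
    return np.abs(np.fft.rfft(frames, axis=-1)) ** 2

rng = np.random.default_rng(1)
cover = rng.normal(size=(200, 512))                       # clean frames
stego = cover + rng.normal(scale=0.05, size=cover.shape)  # perturbed frames

cover_power = power_spectrum(cover).mean(axis=1)
stego_power = power_spectrum(stego).mean(axis=1)

t_stat, p_value = stats.ttest_ind(cover_power, stego_power)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # a low p suggests hidden data
```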

Keywords: steganalysis, security, Fast Fourier Transform, streaming media

Procedia PDF Downloads 130
23205 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density field and hence cause mixing in the region. Such waves are also important in submarine acoustics, underwater navigation, offshore structures, ocean mixing and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on its energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons respectively, during which high temporal resolution in-situ data sets are available. The model is initially validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides associated with the semi-diurnal frequency are more dominant in both observations and model simulations for November-December and March-April. In August, however, the estimate is found to be maximum at the near-inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitude are found to be well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both observations and model simulations. The first three modes are sufficient to describe most of the variability of semi-diurnal internal tides, as they represent 90-95% of the total variance in all seasons. The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared to the other two seasons. The model simulations suggest that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The model-simulated energy dissipation rate shows that its maximum occurs at the generation sites, and hence local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.
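
For readers unfamiliar with the EOF step, the sketch below shows one standard way to compute vertical modes and their explained variance from a velocity matrix via singular value decomposition; the matrix is a synthetic placeholder, not the mooring or model data.

```python
# EOF decomposition of baroclinic velocity profiles (time x depth).
import numpy as np

rng = np.random.default_rng(2)
u = rng.normal(size=(1000, 40))        # hypothetical time steps x depth levels

anomaly = u - u.mean(axis=0)           # remove the time mean at each depth
_, s, modes = np.linalg.svd(anomaly, full_matrices=False)

variance = s**2 / np.sum(s**2)         # fraction of variance per EOF mode
print("variance explained by the first three modes:", np.round(variance[:3], 3))
```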

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 155
23204 Privacy-Preserving Model for Social Network Sites to Prevent Unwanted Information Diffusion

Authors: Sanaz Kavianpour, Zuraini Ismail, Bharanidharan Shanmugam

Abstract:

Social Network Sites (SNSs) can serve as an invaluable platform to transfer information across a large number of individuals. A substantial component of communicating and managing information is identifying which individuals will influence others in propagating information, and whether dissemination of information will occur in the absence of social signals about that information. Classifying the final audience of social data is difficult, as fully controlling the social contexts in which information is transferred among individuals is not possible. Hence, undesirable diffusion of information to an unauthorized individual on SNSs can threaten individuals' privacy. This paper highlights information diffusion in SNSs and emphasizes the most significant privacy issues for individuals on SNSs. The goal of this paper is to propose a privacy-preserving model that safeguards individuals' data in order to control the availability of data and improve privacy by providing access to the data for appropriate third parties without compromising the advantages of information sharing through SNSs.
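
The keywords name an anonymization algorithm without detail; as a hedged illustration of that family of techniques (not the paper's method or data), the sketch below checks k-anonymity after generalizing two quasi-identifiers on invented profile records.

```python
# Sketch: generalize quasi-identifiers, then measure k-anonymity.
from collections import Counter

records = [
    {"age": 23, "zip": "10115"},
    {"age": 27, "zip": "10117"},
    {"age": 24, "zip": "10119"},
]

def generalize(record: dict) -> tuple:
    """Coarsen quasi-identifiers: 10-year age bands, 3-digit ZIP prefix."""
    return (record["age"] // 10 * 10, record["zip"][:3])

groups = Counter(generalize(r) for r in records)
k = min(groups.values())
print(f"the generalized records satisfy {k}-anonymity")
```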

Keywords: anonymization algorithm, classification algorithm, information diffusion, privacy, social network sites

Procedia PDF Downloads 309
23203 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data regarding the time leading up to an event where censored cases exist, whereas the logistic regression model is mostly applicable in cases where the independent variables consist of numerical as well as nominal values while the resultant variable is binary (dichotomous). Arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data from the SPSS exercise data on breast cancer, with a sample size of 1121 women, where the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some of the analysis (e.g., on lymph node status) was done manually, and SPSS software was used to analyse the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used to analyse data that include follow-up time, whereas the logistic regression model analyses data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analysing data and can be applied in many other studies, but the Cox regression model is the more widely recommended.
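
The contrast the abstract draws can be shown side by side; in the hedged Python sketch below, the lifelines and scikit-learn libraries stand in for SPSS, and the six-row data frame is an invented placeholder, not the breast-cancer data.

```python
# Cox regression uses follow-up time; logistic regression does not.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "nodes": [0, 3, 1, 8, 2, 5],          # lymph node status, hypothetical
    "time":  [60, 12, 48, 6, 36, 18],     # months of follow-up
    "died":  [0, 1, 0, 1, 0, 1],
})

# Small penalizer keeps the fit stable on this tiny toy sample.
cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="time", event_col="died")
print("hazard ratio:\n", cph.hazard_ratios_)     # association measure for Cox

logit = LogisticRegression().fit(df[["nodes"]], df["died"])
print("odds ratio:", np.exp(logit.coef_)[0, 0])  # association for logistic
```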

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 438
23202 Text Mining of Twitter Data Using a Latent Dirichlet Allocation Topic Model and Sentiment Analysis

Authors: Sidi Yang, Haiyi Zhang

Abstract:

Twitter is a microblogging platform where millions of users share their attitudes, views, and opinions daily. Using a probabilistic Latent Dirichlet Allocation (LDA) topic model to discern the most popular topics in Twitter data is an effective way to analyze a large set of tweets in a computationally efficient manner. Sentiment analysis provides an effective method to show the emotions and sentiments found in each tweet and an efficient way to summarize the results in a manner that is clearly understood. The primary goal of this paper is to explore text mining, extracting and analyzing useful information from unstructured text using two approaches: LDA topic modelling and sentiment analysis, applied to Twitter plain-text data in English. These two methods allow people to mine data more effectively and efficiently. The LDA topic model and sentiment analysis can also be applied to provide insightful views in business and scientific fields.
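
As a minimal illustration of the first approach (not the authors' code), the sketch below fits a two-topic LDA with scikit-learn on invented tweets; a lexicon-based scorer such as NLTK's VADER could supply the sentiment step in the same pipeline.

```python
# Sketch: LDA topic modelling on a toy corpus of invented tweets.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "love the new phone camera so much",
    "traffic on the highway was terrible this morning",
    "great coffee and friendly staff at the new cafe",
    "my commute took two hours because of roadworks",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}:", top_terms)
```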

Keywords: text mining, Twitter, topic model, sentiment analysis

Procedia PDF Downloads 162
23201 Navigating Government Finance Statistics: Effortless Retrieval and Comparative Analysis through Data Science and Machine Learning

Authors: Kwaku Damoah

Abstract:

This paper presents a methodology and software application (App) designed to empower users in accessing, retrieving, and comparatively exploring data within the hierarchical network framework of the Government Finance Statistics (GFS) system. It explores the ease of navigating the GFS system and identifies the gaps filled by the new methodology and App. The GFS embodies a complex Hierarchical Network Classification (HNC) structure, encapsulating institutional units, revenues, expenses, assets, liabilities, and economic activities. Navigating this structure demands specialized knowledge, experience, and skill, posing a significant challenge for effective analytics and fiscal policy decision-making. Many professionals encounter difficulties deciphering these classifications, hindering confident utilization of the system. This accessibility barrier prevents a vast number of professionals, students, policymakers, and the public from leveraging the abundant data and information within the GFS. Leveraging the R programming language, data science analytics and machine learning, the authors developed an efficient methodology enabling users to access, navigate, and conduct exploratory comparisons. The machine learning Fiscal Analytics App (FLOWZZ) democratizes access to advanced analytics through its user-friendly interface, breaking down expertise barriers.
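
As a hedged sketch of what drilldown through an HNC-style hierarchy involves (the codes and labels below are illustrative placeholders, not the official GFS taxonomy or FLOWZZ's data model):

```python
# Drill down a nested classification by following a path of labels.
gfs = {
    "1 Revenue": {
        "11 Taxes": {"111 Taxes on income": {}, "113 Taxes on property": {}},
        "13 Grants": {},
    },
    "2 Expense": {"21 Compensation of employees": {}},
}

def drilldown(tree: dict, path: list) -> dict:
    """Follow a list of node labels down the hierarchy."""
    node = tree
    for label in path:
        node = node[label]
    return node

print(sorted(drilldown(gfs, ["1 Revenue", "11 Taxes"])))
```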

Keywords: data science, data wrangling, drilldown analytics, government finance statistics, hierarchical network classification, machine learning, web application

Procedia PDF Downloads 50
23200 The Impact of Motivation, Trust, and National Cultural Differences on Knowledge Sharing within the Context of Electronic Mail

Authors: Said Abdullah Al Saifi

Abstract:

The goal of this research is to examine the impact of trust, motivation, and national culture on knowledge sharing within the context of electronic mail. This is a quantitative, survey-based study. To conduct the research, 200 students from a leading university in New Zealand were chosen randomly to participate in a questionnaire survey. Motivation and trust were found to be significantly and positively related to knowledge sharing. The findings illustrate that face-saving, face-gaining, and individualist cultures positively moderate the relationship between motivation and knowledge sharing, while a collectivist culture negatively moderates this relationship. Moreover, face-saving, individualist, and collectivist cultures positively moderate the relationship between trust and knowledge sharing, while a face-gaining culture negatively moderates it. This study sets out several implications for researchers and practitioners: it produces an integrative model that shows how attributes of national culture impact knowledge sharing through the use of email. A better understanding of the relationship between knowledge sharing and trust, motivation, and national cultural differences will increase individuals' ability to make wise choices when sharing knowledge with those from different cultures.

Keywords: knowledge sharing, motivation, national culture, trust

Procedia PDF Downloads 337
23199 Value Chain Based New Business Opportunity

Authors: Seonjae Lee, Sungjoo Lee

Abstract:

Discovering new businesses is necessary to remain competitive in the current business environment; companies survive rapidly changing industry conditions by adopting new business strategies and overcoming technology challenges. Traditionally, two methods are used to discover new businesses. The first is qualitative analysis of expert opinion, through which opportunities are gathered; the second discovers new technologies through quantitative analysis of patent data. The second method increases time and cost, and patent data is of restricted use for the purpose of discovering business opportunities. This study presents new business opportunities customized to a company's characteristics (sector, size, etc.) from a value chain perspective, contributing to the creation of new business opportunities through the proposed model. It utilizes the trademark database of the Korean Intellectual Property Office (KIPO) and the proprietary company information database of Korea Enterprise Data (KED). These data are key to discovering new business opportunities through analysis of competitors and advanced business trademarks (Module 1) and trading analysis of competitors found in the KED (Module 2).

Keywords: value chain, trademark, trading analysis, new business opportunity

Procedia PDF Downloads 355
23198 Towards Addressing the Cultural Snapshot Phenomenon in Cultural Mapping Libraries

Authors: Mousouris Spiridon, Kavakli Evangelia

Abstract:

This paper focuses on Digital Libraries (DLs) that contain and geovisualise cultural data, highlighting the need to define them as a separate category termed Cultural Mapping Libraries, based on their inherent connection of culture with geographic location and their design requirements in support of the visual representation of cultural data on the map. An exploratory analysis of DLs that conform to the above definition brought forward the observation that existing Cultural Mapping Libraries fail to geovisualise the entirety of cultural data per point of interest, resulting in a Cultural Snapshot phenomenon. The existence of this phenomenon was reinforced by the results of a systematic bibliographic research. In order to address the Cultural Snapshot, this paper proposes the use of Semantic Web principles to efficiently interconnect spatial cultural data through time, per geographic location. In this way, points of interest are transformed into scenery where culture evolves over time. This evolution is expressed as occurrences taking place chronologically, in an event-oriented approach, a conceptualization also endorsed by the CIDOC Conceptual Reference Model (CIDOC CRM). In particular, we posit the use of CIDOC CRM as the baseline for defining the logic of Cultural Mapping Libraries as part of the Culture Domain in accordance with the Digital Library Reference Model, in order to define the rules of cultural data management by the system. Our future goal is to transform this conceptual definition into inference rules that resolve the Cultural Snapshot and lead to a more complete geovisualisation of cultural data.
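
The event-oriented idea can be hedged into a few lines of Python with rdflib: two events are attached to one point of interest over time, and a query over the shared location recovers the full timeline rather than a snapshot. The vocabulary below loosely echoes CIDOC CRM but is a simplified placeholder, not the official model.

```python
# Sketch: chain cultural events to a point of interest as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF

CRM = Namespace("http://example.org/crm/")  # placeholder, not official CIDOC CRM
POI = Namespace("http://example.org/poi/")

g = Graph()
for event, year in [("construction", 1850), ("restoration", 1990)]:
    node = POI[f"theatre_{event}"]
    g.add((node, RDF.type, CRM.Event))
    g.add((node, CRM.took_place_at, POI.theatre))
    g.add((node, CRM.occurred_in_year, Literal(year)))

# Querying every event at the location resolves the "snapshot":
for subject, _, _ in g.triples((None, CRM.took_place_at, POI.theatre)):
    print(subject)
```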

Keywords: digital libraries, semantic web, geovisualization, CIDOC-CRM

Procedia PDF Downloads 89
23197 An Evaluation of the Impact of E-Banking on Operational Efficiency of Banks in Nigeria

Authors: Ibrahim Rabiu Darazo

Abstract:

This research examines the impact of e-banking on the operational efficiency of banks in Nigeria, using selected banks (Diamond Bank Plc, GTBank Plc, and Fidelity Bank Plc) as a case study. The research is quantitative and uses both primary and secondary sources of data. Questionnaires were used to obtain data: 150 questionnaires were distributed among staff and customers of the three banks, and the data collected were analysed using the chi-square test, whereas the secondary data were obtained from relevant textbooks, journals and web sites. It is clear from the findings that the use of e-banking has improved the efficiency of these banks in terms of providing efficient services to customers electronically through Internet banking, telephone banking and ATMs, and reducing the time taken to serve customers; e-banking allows new customers to open an account online, and customers have access to their accounts at all times, 24/7. E-banking provides access to customer information from the database, and the costs of cheques and postage are eliminated. The recommendations at the end of the research include: banks should keep their electronic equipment up to date; e-fraud (internal and external) should be controlled; banks should employ qualified manpower; and biometric ATMs should be introduced to reduce ATM card fraud, as is done in other countries such as the USA.
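
The chi-square step can be illustrated as follows; the contingency table is an invented placeholder (responses by bank), not the survey results.

```python
# Sketch: chi-square test of independence on a toy contingency table.
from scipy.stats import chi2_contingency

# rows: Diamond, GTBank, Fidelity; columns: "improved", "not improved"
observed = [[38, 12],
            [41, 9],
            [35, 15]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p-value below 0.05 would suggest perceived efficiency differs by bank.
```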

Keywords: banks, electronic banking, operational efficiency of banks, biometric ATMs

Procedia PDF Downloads 310
23196 Optimize Data Evaluation Metrics for Fraud Detection Using Machine Learning

Authors: Jennifer Leach, Umashanger Thayasivam

Abstract:

The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, as society's knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate people. This has led to a simultaneous advancement in the world of fraud. Machine learning techniques can offer a possible solution to help counter this advancement. This research explores how various machine learning techniques can aid in detecting fraudulent activity across two different types of fraudulent data; the accuracy, precision, recall, and F1 score were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which testing split and technique would lead to the optimal results.
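
A hedged sketch of the evaluation loop described, with a synthetic imbalanced dataset standing in for the two fraud datasets and a random forest standing in for the various techniques compared:

```python
# Sketch: one model evaluated across five train/test splits.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)

for test_size in (0.1, 0.2, 0.3, 0.4, 0.5):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    print(f"test={test_size:.0%}  acc={accuracy_score(y_te, pred):.3f}  "
          f"P={p:.3f}  R={r:.3f}  F1={f1:.3f}")
```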

Keywords: data science, fraud detection, machine learning, supervised learning

Procedia PDF Downloads 173
23195 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria

Authors: O. O. Aiyelokun, O. A. Agbede

Abstract:

Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence, for models to be adopted as decision support tools for the sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and lack adequate ground-based hydro-meteorological stations, adopting satellite-based data for constructing distributed models is crucial. This study evaluates the suitability of satellite-based data as a substitute for ground-based data in computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on the mean absolute error (MAE), coefficient of correlation (r) and coefficient of determination (R²), all meteorological variables except wind speed fit well. It further revealed that maximum and minimum temperature, relative humidity and rainfall had a high index of agreement (d) and ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of river basins are partially gauged and characterized by long spans of missing hydro-meteorological data.
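
The goodness-of-fit comparison can be sketched in a few lines; the two series below are synthetic placeholders for a station record and its CFSR counterpart, and the formulas follow the statistics named in the abstract (with Willmott's index of agreement assumed for d).

```python
# Sketch: MAE, r, R², index of agreement (d), ratio of std devs (rSD).
import numpy as np

rng = np.random.default_rng(3)
station = rng.gamma(2.0, 50.0, size=360)        # monthly rainfall, mm
cfsr = station + rng.normal(0, 15, size=360)    # satellite-based proxy

mae = np.mean(np.abs(cfsr - station))
r = np.corrcoef(station, cfsr)[0, 1]
d = 1 - np.sum((cfsr - station) ** 2) / np.sum(
    (np.abs(cfsr - station.mean()) + np.abs(station - station.mean())) ** 2)
rsd = cfsr.std() / station.std()

print(f"MAE={mae:.1f}  r={r:.3f}  R2={r**2:.3f}  d={d:.3f}  rSD={rsd:.3f}")
```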

Keywords: boundary condition, goodness of fit, groundwater, satellite-based data

Procedia PDF Downloads 112
23194 Developing Problem Solving Skills through a Project-Based Course as Part of a Lifelong Learning for Engineering Students

Authors: Robin Lok Wang Ma

Abstract:

The purpose of this paper is to investigate how engineering students' motivation and interest are maintained in their journeys. In recent years, different teaching pedagogies, including entrepreneurship, experiential and lifelong learning, as well as dream builder, etc., have been widely used for education purposes. The university advocates hands-on practice and learning by experiencing and experimenting throughout different courses; students are not limited to gaining knowledge via traditional lectures, laboratory demonstrations, tutorials, and so on. The capability to identify complex problems and their corresponding solutions in daily life is one of the criteria/skill sets required for graduates to obtain careers at professional organizations and companies. A project-based course, namely Mechatronic Design and Prototyping, was developed for students to design and build a physical prototype that solves existing problems in their daily lives, thereby encouraging them, as entrepreneurs, to explore further possibilities to commercialize their designed prototypes and launch them to the market. Feedback from students shows that they are keen to propose their own ideas freely, with guidance from the instructor, instead of using either suggested or assigned topics. The proposed prototype ideas reflect that if students' interest is maintained, they acquire the knowledge and skills they need, including essential communication, logical thinking, and, more importantly, problem solving, for their lifelong learning journey.

Keywords: problem solving, lifelong learning, entrepreneurship, engineering

Procedia PDF Downloads 80
23193 Prediction of Embankment Fires at Railway Infrastructure Using Machine Learning, Geospatial Data and VIIRS Remote Sensing Imagery

Authors: Jan-Peter Mund, Christian Kind

Abstract:

In view of ongoing climate change and global warming, fires along railways in Germany are occurring more frequently, sometimes with massive consequences for railway operations and the affected railroad infrastructure. In the absence of systematic studies within the infrastructure network of German Rail, little is known about the causes of such embankment fires. Since a further increase in these hazards is to be expected in the near future, there is a need for sound knowledge of the triggers and drivers of embankment fires as well as methodical knowledge of prediction tools. Two predictable future trends speak for the increasing relevance of the topic: through the intensification of the use of rail for passenger and freight transport (e.g., a doubling of annual passenger numbers by 2030 compared to 2019), there will be more rail traffic and also more maintenance and construction work on the railways. This research project uses satellite data to identify historical embankment fires along the rail network infrastructure. The team links data from these fires with infrastructure and weather data and trains a machine-learning model with the aim of predicting fire hazards on sections of the track. Companies reflect on the results and use them on a pilot basis in precautionary measures.
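
As a hedged sketch of the modelling idea (not the project's pipeline), track sections labelled by past VIIRS fire detections can be used to train a classifier on weather and vegetation features; every column below is an invented placeholder.

```python
# Sketch: classify track sections by fire risk from tabular features.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

sections = pd.DataFrame({
    "dry_days":      [3, 14, 1, 21, 8, 17, 2, 25],
    "temp_max_c":    [22, 33, 18, 36, 27, 31, 20, 38],
    "veg_density":   [0.2, 0.7, 0.1, 0.8, 0.4, 0.6, 0.2, 0.9],
    "fire_observed": [0, 1, 0, 1, 0, 1, 0, 1],   # from VIIRS matches
})

X = sections.drop(columns="fire_observed")
y = sections["fire_observed"]
clf = GradientBoostingClassifier(random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=2).mean())
```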

Keywords: embankment fires, railway maintenance, machine learning, remote sensing, VIIRS data

Procedia PDF Downloads 77
23192 A Hybrid Data Mining Algorithm Based System for Intelligent Defence Mission Readiness and Maintenance Scheduling

Authors: Shivam Dwivedi, Sumit Prakash Gupta, Durga Toshniwal

Abstract:

It is a challenging task today to keep defence forces in the highest state of combat readiness under budgetary constraints. A huge amount of time and money is squandered on unnecessary and expensive traditional maintenance activities. To overcome this limitation, a Defence Intelligent Mission Readiness and Maintenance Scheduling System is proposed, which ameliorates the maintenance system by diagnosing equipment condition and predicting maintenance requirements. Based on new data mining algorithms, this system intelligently optimises mission readiness for imminent operations and maintenance scheduling in repair echelons. With modified data mining algorithms such as the Weighted Feature Ranking Genetic Algorithm and an SVM-Random Forest Linear ensemble, it improves reliability, availability and safety, alongside reducing maintenance cost and Equipment Out of Action (EOA) time. The results clearly show that the introduced algorithms have an edge over conventional data mining algorithms. The system, utilizing the intelligent condition-based maintenance approach, improves the operational and maintenance decision strategy of the defence force.
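
The paper's modified ensemble is not public; as a hedged stand-in for the "SVM-Random Forest Linear ensemble" named, the sketch below combines a linear SVM and a random forest with soft voting on synthetic maintenance data.

```python
# Sketch: SVM + random forest soft-voting ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=12, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="linear", probability=True, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
print("cv accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```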

Keywords: condition based maintenance, data mining, defence maintenance, ensemble, genetic algorithms, maintenance scheduling, mission capability

Procedia PDF Downloads 278
23191 Using Emerging Hot Spot Analysis to Analyze Overall Effectiveness of Policing Policy and Strategy in Chicago

Authors: Tyler Gill, Sophia Daniels

Abstract:

The paper examines how assessing the spatial-temporal constraints of data can help inform policymakers and law enforcement officials. The authors utilize Chicago crime data from 2006-2016 to demonstrate that the Emerging Hot Spot tool is an ideal hot spot clustering approach for analyzing crime data. Traditional approaches include density maps or creating a spatial weights matrix to include the spatial-temporal constraints. This newer approach utilizes a space-time implementation of the Getis-Ord Gi* statistic to visualize the data more quickly so that better decisions can be made. The research will help complement socio-cultural research to find key patterns to help frame future policies and evaluate the implementation of prior strategies. Through this analysis, homicide trends and patterns are found more effectively, and recommendations for use by non-traditional users of GIS are offered for real-life implementation.
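
A purely spatial Gi* computation can be sketched with PySAL's esda as below; the points and counts are invented, and the sketch omits the time dimension that the Emerging Hot Spot tool adds on top of Gi*.

```python
# Sketch: local Getis-Ord Gi* hot spots over invented incident counts.
import numpy as np
from esda.getisord import G_Local
from libpysal.weights import KNN

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(50, 2))       # incident cell centroids
counts = rng.poisson(3, size=50).astype(float)  # crimes per cell

w = KNN.from_array(coords, k=5)
gi = G_Local(counts, w, star=True)              # star=True includes self
hot = np.where(gi.Zs > 1.96)[0]                 # ~95% confidence hot spots
print("hot spot cell indices:", hot)
```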

Keywords: crime mapping, emerging hot spot analysis, Getis-Ord Gi*, spatial-temporal analysis

Procedia PDF Downloads 230
23190 Active Learning in Engineering Courses Using Excel Spreadsheet

Authors: Promothes Saha

Abstract:

Recently, transportation engineering industry members at the study university expressed concern that students lacked the skills needed to solve real-world engineering problems using spreadsheet data analysis. In response to these concerns, this study investigated how to better engage students by incorporating spreadsheet analysis during class, while also helping them learn the course topics. Helping students link theoretical knowledge to real-world problems can be a challenge. In this effort, in-class activities and worksheets were redesigned to integrate with Excel to solve example problems using built-in tools including cell referencing, equations, the Analysis ToolPak, the Solver tool, conditional formatting, charts, etc. The effectiveness of this technique was investigated using students' evaluations of the course, enrollment data, and students' comments. Based on those criteria, it is evident that the spreadsheet activities may increase student learning.

Keywords: civil, engineering, active learning, transportation

Procedia PDF Downloads 128
23189 Understanding Cruise Passengers’ On-board Experience throughout the Customer Decision Journey

Authors: Sabina Akter, Osiris Valdez Banda, Pentti Kujala, Jani Romanoff

Abstract:

This paper examines the relationship between on-board environmental factors and overall customer satisfaction in the context of the cruise on-board experience. The on-board environmental factors considered are ambient, layout/design, social, product/service and on-board enjoyment factors. The study presents a data-driven framework and model for the on-board cruise experience. The data were collected from 893 respondents via a self-administered online questionnaire about their cruise experience. This study reveals the cruise passengers' on-board experience through the customer decision journey based on publicly available data. Pearson correlation and regression analysis were applied, and the results show a positive and significant relationship between the environmental factors and the on-board experience. These data help in understanding the cruise passengers' on-board experience, which will be used in the ultimate decision-making process for cruise ship design.
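
A minimal sketch of the analysis named (Pearson correlation plus a simple regression), on two synthetic arrays standing in for the 893 responses:

```python
# Sketch: correlation and regression of satisfaction on a factor score.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
ambience = rng.uniform(1, 5, size=893)                  # factor score
satisfaction = 0.6 * ambience + rng.normal(0, 0.5, 893)

r, p = stats.pearsonr(ambience, satisfaction)
fit = stats.linregress(ambience, satisfaction)
print(f"r = {r:.2f} (p = {p:.3g}); "
      f"satisfaction ~ {fit.slope:.2f} * ambience + {fit.intercept:.2f}")
```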

Keywords: cruise behavior, customer activities, on-board environmental factors, on-board experience, user or customer satisfaction

Procedia PDF Downloads 157
23188 Rapid Detection and Differentiation of Camel Pox, Contagious Ecthyma and Papilloma Viruses in Clinical Samples of Camels Using a Multiplex PCR

Authors: A. I. Khalafalla, K. A. Al-Busada, I. M. El-Sabagh

Abstract:

Pox and pox-like diseases of camels are a group of exanthematous skin conditions that have become increasingly important economically. They may be caused by three distinct viruses: camelpox virus (CMPV), camel contagious ecthyma virus (CCEV) and camel papillomavirus (CAPV). These diseases are difficult to differentiate based on their clinical presentation in disease outbreaks. Molecular methods such as PCR targeting species-specific genes have been developed and used to identify CMPV and CCEV, but not simultaneously in a single tube. Recently, multiplex PCR has gained a reputation as a convenient diagnostic method with cost- and time-saving benefits. In the present communication, we describe the development, optimization and validation of a multiplex PCR assay able to detect the genomes of the three viruses simultaneously in one single test, allowing rapid and efficient molecular diagnosis. The assay was developed based on the evaluation and combination of published and new primer sets, and was applied to the detection of the viruses in 110 tissue samples. The method showed high sensitivity, and its specificity was confirmed by PCR-product sequencing. In conclusion, this rapid, sensitive and specific assay is considered a useful method for identifying three important viruses in specimens from camels and as part of a molecular diagnostic regime.

Keywords: multiplex PCR, diagnosis, pox and pox-like diseases, camels

Procedia PDF Downloads 451
23187 Holistic Risk Assessment Based on Continuous Data from the User’s Behavior and Environment

Authors: Cinzia Carrodano, Dimitri Konstantas

Abstract:

Risk is part of our lives. In today's society, risk is connected to our safety, and safety has become a major priority in our lives. Each person lives his/her life based on an evaluation of the risk he/she is ready to accept and sustain and the level of safety he/she wishes to reach, based on highly personal criteria. The assessment of the risk a person takes in a complex environment, and the impact of other people's actions and of events on our perception of risk, are elements to be considered. The concept of Holistic Risk Assessment (HRA) aims at developing a methodology and a model that allow us to take into account elements outside the direct influence of the individual and provide a personalized risk assessment. The concept is based on the fact that, in the near future, we will be able to gather and process extremely large amounts of data about an individual and his/her environment in real time. The interaction and correlation of these data is the key element of the holistic risk assessment. In this paper, we present the HRA concept and describe its most important elements and considerations.

Keywords: continuous data, dynamic risk, holistic risk assessment, risk concept

Procedia PDF Downloads 109
23186 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance

Authors: Abdullah Al Farwan, Ya Zhang

Abstract:

In today's educational arena, it is critical to understand educational data and be able to evaluate important aspects, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers who are able to predict their students' class performance can use this information to improve their teaching. EDM has evolved into valuable knowledge that can be used for a wide range of objectives, for example, strategic planning for high-quality education. Based on previous data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest, with wrapper feature selection, were used on two datasets relating to Portuguese language and mathematics lessons. The results showed the effectiveness of data mining learning methodologies in predicting student academic success. The classification accuracy achieved with the selected algorithms lies in the range of 70-94%: among all the selected classification algorithms, the lowest accuracy is achieved by the Multi-layer Perceptron algorithm, at close to 70.45%, and the highest accuracy is achieved by the Random Forest algorithm, at close to 94.10%. This proposed work can assist educational administrators in identifying poorly performing students at an early stage and perhaps implementing motivational interventions to improve their academic success and prevent educational dropout.
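
A hedged sketch of wrapper-based selection around one of the named classifiers (random forest), using scikit-learn's sequential wrapper on synthetic data standing in for the Portuguese/mathematics datasets:

```python
# Sketch: wrapper feature selection, then evaluate on the kept features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=5, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = SequentialFeatureSelector(rf, n_features_to_select=5, cv=3)
selector.fit(X, y)

X_sel = selector.transform(X)
print("kept feature indices:", selector.get_support().nonzero()[0])
print("cv accuracy:", cross_val_score(rf, X_sel, y, cv=3).mean().round(3))
```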

Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance

Procedia PDF Downloads 151
23185 A Novel Framework for User-Friendly Ontology-Mediated Access to Relational Databases

Authors: Efthymios Chondrogiannis, Vassiliki Andronikou, Efstathios Karanastasis, Theodora Varvarigou

Abstract:

A large amount of data is typically stored in relational databases (DBs), which can efficiently handle user queries intended to elicit the appropriate information from data sources. However, direct access and use of this data require the end users to have an adequate technical background, and they must also cope with the internal data structure and the values presented. Consequently, information retrieval is quite a difficult process even for IT or DB experts, taking into account the limited contributions of relational databases from the conceptual point of view. Ontologies enable users to formally describe a domain of knowledge in terms of concepts and the relations among them, and hence they can be used to unambiguously specify the information captured by the relational database. Accessing information residing in a database using ontologies is feasible, provided that the users are keen on using Semantic Web technologies. To enable users from different disciplines to retrieve the appropriate data, the design of a graphical user interface is necessary. In this work, we present an interactive, ontology-based, semantically enabled web tool that can be used for information retrieval purposes. The tool is entirely based on the ontological representation of the underlying database schema, while it provides a user-friendly environment through which users can graphically form and execute their queries.
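
Ontology-mediated access reduces, at query time, to something like the hedged rdflib sketch below: a SPARQL query over a tiny in-memory graph that mirrors a one-table database; the vocabulary is an invented placeholder, not the paper's ontology.

```python
# Sketch: SPARQL over an RDF view of a relational "patients" table.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/hospital#")
g = Graph()
g.add((EX.p1, RDF.type, EX.Patient))
g.add((EX.p1, EX.hasAge, Literal(42)))

results = g.query("""
    PREFIX ex: <http://example.org/hospital#>
    SELECT ?patient ?age
    WHERE { ?patient a ex:Patient ; ex:hasAge ?age . }
""")
for patient, age in results:
    print(patient, age)
```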

Keywords: ontologies, relational databases, SPARQL, web interface

Procedia PDF Downloads 263
23184 Anomaly Detection in Financial Markets Using Tucker Decomposition

Authors: Salma Krafessi

Abstract:

The financial markets are a multifaceted, intricate environment in which enormous volumes of data are produced every day. Accurate anomaly identification in this data is essential for finding investment possibilities, possible fraudulent activity, and market oddities. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker Decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding window structure. The tensor is then broken down using Tucker Decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker Decomposition is a possible sign of abnormalities: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker Decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
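
The anomaly-scoring idea can be sketched with the tensorly library: decompose a windowed tensor, reconstruct it, and flag windows whose reconstruction error exceeds a statistical threshold. The tensor below is synthetic, and the ranks and three-sigma threshold are illustrative choices, not the paper's settings.

```python
# Sketch: Tucker reconstruction error as an anomaly score.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 30, 5))   # windows x days x features
X[17] += 4.0                        # inject one anomalous window

core, factors = tucker(tl.tensor(X), rank=[10, 5, 3])
X_hat = tl.to_numpy(tl.tucker_to_tensor((core, factors)))

errors = np.linalg.norm(X - X_hat, axis=(1, 2))   # per-window error
threshold = errors.mean() + 3 * errors.std()
print("anomalous windows:", np.where(errors > threshold)[0])
```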

Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models

Procedia PDF Downloads 45
23183 A Diagnostic Comparative Analysis of Simultaneous Localization and Mapping (SLAM) Models for Indoor and Outdoor Route Planning and Obstacle Avoidance

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In the robotics literature, simultaneous localization and mapping (SLAM) is commonly associated with an a priori-a posteriori problem: the autonomous vehicle needs a neutral map to spontaneously track its local position, i.e., "localization", while at the same time a precise estimate of the environment state is required for effective route planning and obstacle avoidance. On the other hand, environmental noise factors can significantly intensify the inherent uncertainties in using odometry information and measurements obtained from the robot's exteroceptive sensors, which in turn directly affect the overall performance of the corresponding SLAM. Therefore, the current work is primarily dedicated to providing a diagnostic analysis of five SLAM algorithms: FastSLAM, L-SLAM, GraphSLAM, Grid SLAM and DP-SLAM. A simulated SLAM environment consisting of two sets of landmark locations and robot waypoints was set up, based on modified EKF and UKF in MATLAB, using two separate maps for indoor and outdoor route planning subject to natural and artificial obstacles. The simulation results are expected to provide an unbiased platform to compare the estimation performances of the five SLAM models, as well as the reliability of each SLAM model for indoor and outdoor applications.
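
All of the compared filters build on the same Bayesian predict/update cycle; as a hedged illustration (not the paper's MATLAB setup), the sketch below performs one EKF measurement update for a single landmark observed relative to the robot.

```python
# Sketch: one EKF-SLAM measurement update for one landmark.
import numpy as np

x = np.array([0.0, 0.0, 2.0, 1.0])    # [robot_x, robot_y, lm_x, lm_y]
P = np.eye(4) * 0.1                   # state covariance
R = np.eye(2) * 0.05                  # measurement noise

def h(state):
    """Measure the landmark position relative to the robot."""
    return state[2:4] - state[0:2]

H = np.array([[-1.0, 0.0, 1.0, 0.0],
              [0.0, -1.0, 0.0, 1.0]])  # Jacobian of h (linear here)

z = np.array([2.1, 0.9])              # observed relative position
S = H @ P @ H.T + R                   # innovation covariance
K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
x = x + K @ (z - h(x))                # state update
P = (np.eye(4) - K @ H) @ P           # covariance update
print("updated state:", x.round(3))
```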

Keywords: route planning, obstacle, estimation performance, FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, DP-SLAM

Procedia PDF Downloads 434
23182 Analyzing the Relationship between the Spatial Characteristics of Cultural Structure, Activities, and the Tourism Demand

Authors: Deniz Karagöz

Abstract:

This study attempts to comprehend the relationship between the spatial characteristics of cultural structure, cultural activities, and tourism demand in Turkey. The analysis is divided into four parts. The first part consists of a cultural structure and cultural activity (CSCA) index obtained via principal component analysis. The analysis determined four distinct dimensions, namely, cultural activity/structure, accessing culture, consumption, and cultural management. Exploratory spatial data analysis is employed to determine the spatial patterns of cultural structure and cultural activities in the 81 provinces of Turkey, and global Moran's I indices are used to ascertain the cultural activity and structural clusters. Finally, the relationship between cultural activities/cultural structure and tourism demand is analyzed. The raw data of the study come from official databases: the data on cultural structure and activities were gathered from the Turkish Statistical Institute, and the data related to tourism demand were provided by the Republic of Turkey Ministry of Culture and Tourism.
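
The global Moran's I step can be sketched with PySAL's esda on invented provincial scores; the 9x9 lattice below is a placeholder for the actual contiguity structure of Turkey's 81 provinces.

```python
# Sketch: global Moran's I for spatial clustering of an index.
import numpy as np
from esda.moran import Moran
from libpysal.weights import lat2W

rng = np.random.default_rng(7)
w = lat2W(9, 9)                 # 81 areal units on a regular lattice
scores = rng.normal(size=81)    # hypothetical CSCA index values

mi = Moran(scores, w)
print(f"Moran's I = {mi.I:.3f}, p (permutation) = {mi.p_sim:.3f}")
```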

Keywords: cultural activities, cultural structure, spatial characteristics, tourism demand, Turkey

Procedia PDF Downloads 541
23181 The Synergistic Effects of Blockchain and AI on Enhancing Data Integrity and Decision-Making Accuracy in Smart Contracts

Authors: Sayor Ajfar Aaron, Sajjat Hossain Abir, Ashif Newaz, Mushfiqur Rahman

Abstract:

Investigating the convergence of blockchain technology and artificial intelligence, this paper examines their synergistic effects on data integrity and decision-making within smart contracts. By implementing AI-driven analytics on blockchain-based platforms, the research identifies improvements in automated contract enforcement and decision accuracy. The paper presents a framework that leverages AI to enhance transparency and trust while blockchain ensures immutable record-keeping, culminating in significantly optimized operational efficiencies in various industries.

Keywords: artificial intelligence, blockchain, data integrity, smart contracts

Procedia PDF Downloads 34