Search results for: data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7524

6384 Applying Hybrid Graph Drawing and Clustering Methods on Stock Investment Analysis

Authors: Mouataz Zreika, Maria Estela Varua

Abstract:

Stock investment decisions are often made based on current events in the global economy and the analysis of historical data. Visual representation can help investors gain a deeper understanding of, and better insight into, stock market trends more efficiently. The trend analysis is based on long-term data collection. The study adopts a hybrid method that combines a clustering algorithm with a force-directed algorithm to overcome the scalability problem of visualizing large data. The method exposes the potential relationships between stocks and determines their degree of strength and connectivity, giving investors another view of stock relationships for reference. Information derived from the visualization will also help them make informed decisions. The results of the experiments show that the proposed method produces aesthetically pleasing visualizations with clearer views of connectivity and edge weights.
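
As a rough illustration of the hybrid idea, the sketch below builds a graph from pairwise stock-return correlations, clusters it, and places nodes with a force-directed (spring) layout in networkx; the data, threshold, and algorithm choices are stand-ins, not the paper's exact methods.

```python
# A minimal sketch of clustering + force-directed drawing; all data here are
# synthetic and the algorithm choices stand in for the paper's (unspecified) ones.
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(size=(250, 8)), columns=list("ABCDEFGH"))

corr = returns.corr()
G = nx.Graph()
for i in corr.columns:
    for j in corr.columns:
        if i < j and abs(corr.loc[i, j]) > 0.05:   # prune the weakest links
            G.add_edge(i, j, weight=abs(corr.loc[i, j]))

# Cluster the graph, then let the force-directed (spring) layout place nodes.
clusters = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
pos = nx.spring_layout(G, weight="weight", seed=42)   # Fruchterman-Reingold forces
for k, c in enumerate(clusters):
    nx.draw_networkx_nodes(G, pos, nodelist=sorted(c), label=f"cluster {k}")
nx.draw_networkx_edges(G, pos, alpha=0.3)
plt.legend()
plt.show()
```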

Keywords: Clustering, force-directed, graph drawing, stock investment analysis.

6383 Implementing an Intuitive Reasoner with a Large Weather Database

Authors: Yung-Chien Sun, O. Grant Clark

Abstract:

In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation comprises two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source. Twelve weather variables from those data were chosen as the "target variables" whose values were predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module. As a result, the rules induced were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees and multi-polynomial regression, for the discrete and the continuous target variables respectively. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning: fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach that had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages in dealing with sparse data sets compared with conventional methods.
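
The two induction routes map naturally onto standard tools; the sketch below applies a decision tree to a discrete target and a polynomial fit to a continuous one on a 10% subset. The weather features and the "Strength of Belief" proxy are hypothetical, not the paper's.

```python
# Sketch of the two rule-induction routes on a 10% data subset: a decision
# tree for a discrete target and a polynomial fit for a continuous target.
# The weather data and the "Strength of Belief" proxy are hypothetical.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
weather = pd.DataFrame({
    "pressure": rng.normal(1013, 8, 1000),
    "humidity": rng.uniform(20, 100, 1000),
    "rain": rng.integers(0, 2, 1000),        # discrete target variable
    "temp": rng.normal(15, 7, 1000),         # continuous target variable
})

subset = weather.sample(frac=0.10, random_state=0)   # simulated "complex" case
belief = len(subset) / len(weather)                  # crude certainty proxy

tree = DecisionTreeClassifier(max_depth=3).fit(
    subset[["pressure", "humidity"]], subset["rain"])
coefs = np.polyfit(subset["pressure"], subset["temp"], deg=2)   # polynomial rule

print(f"Strength-of-Belief proxy: {belief:.2f}")
print("rain class:", tree.predict(pd.DataFrame(
    {"pressure": [1010.0], "humidity": [80.0]}))[0])
print(f"temperature: {np.polyval(coefs, 1010.0):.1f}")
```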

Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.

6382 Multiphase Coexistence for Aqueous System with Hydrophilic Agent

Authors: G. B. Hong, H. W. Chen

Abstract:

Liquid-liquid equilibrium (LLE) data are measured for the ternary mixtures of water + 1-butanol + butyl acetate and the quaternary mixtures of water + 1-butanol + butyl acetate + glycerol at atmospheric pressure and 313.15 K. In addition, isothermal vapor–liquid–liquid equilibrium (VLLE) data are determined experimentally at 333.15 K. The region of heterogeneity is found to increase as the hydrophilic agent (glycerol) is introduced into the aqueous mixtures. The experimental data are correlated with the NRTL model. The predicted results from the solution model, with the model parameters determined from the constituent binaries, are also compared with the experimental values.
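
For reference, the standard binary form of the NRTL activity-coefficient model used in such correlations is:

```latex
\ln\gamma_1 = x_2^{2}\left[\tau_{21}\left(\frac{G_{21}}{x_1 + x_2 G_{21}}\right)^{2}
            + \frac{\tau_{12} G_{12}}{\left(x_2 + x_1 G_{12}\right)^{2}}\right],
\qquad G_{ij} = \exp\left(-\alpha_{ij}\,\tau_{ij}\right)
```

with ln γ2 obtained by exchanging indices 1 and 2; the τij are the fitted binary interaction parameters and αij the non-randomness factor.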

Keywords: LLE, VLLE, hydrophilic agent, NRTL.

6381 Categorical Missing Data Imputation Using Fuzzy Neural Networks with Numerical and Categorical Inputs

Authors: Pilar Rey-del-Castillo, Jesús Cardeñosa

Abstract:

There are many situations where input feature vectors are incomplete, and methods to tackle the problem have been studied for a long time. A commonly used procedure is to replace each missing value with an imputation. This paper presents a method to perform categorical missing data imputation from numerical and categorical variables. The imputations are based on Simpson's fuzzy min-max neural networks, in which the input variables for learning and classification are numerical only. The proposed method extends the input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. The procedure is tested and compared with others using opinion poll data.
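
The numerical core of such a network is the hyperbox membership function; the sketch below follows one common formulation of Simpson's 1992 membership (treat the exact form as an assumption), and the paper's categorical extension is not shown.

```python
# Sketch of the hyperbox membership function of a fuzzy min-max classifier,
# in one common formulation of Simpson's 1992 version. Corner values are
# hypothetical; inputs are assumed normalized to [0, 1].
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Membership of point x in the hyperbox with min corner v, max corner w."""
    over = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    under = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    return float(np.mean((over + under) / 2.0))

v = np.array([0.2, 0.3])          # hypothetical learned min point
w = np.array([0.5, 0.6])          # hypothetical learned max point
print(hyperbox_membership(np.array([0.4, 0.5]), v, w))  # inside  -> 1.0
print(hyperbox_membership(np.array([0.9, 0.9]), v, w))  # outside -> 0.5
```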

Keywords: Classifier, imputation techniques, fuzzy systems, fuzzy min-max neural networks.

6380 A Hidden Markov Model for Modeling Pavement Deterioration under Incomplete Monitoring Data

Authors: Nam Lethanh, Bryan T. Adey

Abstract:

In this paper, the potential use of an exponential hidden Markov model to model a hidden pavement deterioration process, i.e. one that is not directly measurable, is investigated. It is assumed that the evolution of the physical condition, which is the hidden process, and the evolution of the values of pavement distress indicators can be adequately described using discrete condition states and modeled as Markov processes. It is also assumed that condition data can be collected by visual inspections over time and represented continuously using an exponential distribution. The advantage of using such a model in the decision-making process is illustrated through an empirical study using real-world data.
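
A core computation in such a model is the likelihood of an inspection series via the forward algorithm; the sketch below assumes exponential emission densities with hypothetical transition and rate parameters.

```python
# Minimal sketch of the forward algorithm for a hidden Markov model whose
# emissions (distress-indicator readings) follow exponential distributions,
# one rate per hidden condition state. All parameter values are hypothetical.
import numpy as np

pi = np.array([0.8, 0.15, 0.05])          # initial condition-state probabilities
P = np.array([[0.90, 0.08, 0.02],         # deterioration transition matrix
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
lam = np.array([2.0, 1.0, 0.4])           # exponential emission rates per state

def exp_pdf(y, rate):
    return rate * np.exp(-rate * y)

def forward(observations):
    """Return the likelihood of an observation series under the exponential HMM."""
    alpha = pi * exp_pdf(observations[0], lam)
    for y in observations[1:]:
        alpha = (alpha @ P) * exp_pdf(y, lam)
    return float(alpha.sum())

print(forward(np.array([0.3, 0.7, 1.5])))  # likelihood of an inspection series
```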

Keywords: Deterioration modeling, Exponential distribution, Hidden Markov model, Pavement management.

6379 Automated Knowledge Engineering

Authors: Sandeep Chandana, Rene V. Mayorga, Christine W. Chan

Abstract:

This article outlines the conceptualization and implementation of an intelligent system capable of extracting knowledge from databases. The use of hybridized features of both Rough and Fuzzy Set theory renders the developed system flexible in dealing with discrete as well as continuous datasets. A raw data set provided to the system is initially transformed into a computer-legible format, followed by pruning of the data set. The refined data set is then processed through various Rough Set operators, which enable the discovery of parameter relationships and interdependencies. The discovered knowledge is automatically transformed into a rule base expressed in Fuzzy terms. Two exemplary cancer repository datasets (for breast and lung cancer) have been used to implement and test the proposed framework.
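
At their core, the Rough Set operators mentioned reduce to indiscernibility classes and lower/upper approximations; the sketch below computes these on a hypothetical four-object attribute table.

```python
# Sketch of the basic Rough Set operators the pipeline relies on: the
# indiscernibility partition and the lower/upper approximations of a target
# set. The toy attribute table is hypothetical.
from collections import defaultdict

rows = {                                  # object -> attribute values
    "p1": ("high", "yes"), "p2": ("high", "no"),
    "p3": ("low", "no"),   "p4": ("high", "no"),
}
target = {"p1", "p4"}                     # e.g. objects labelled "malignant"

blocks = defaultdict(set)                 # indiscernibility classes
for obj, vals in rows.items():
    blocks[vals].add(obj)

lower = set().union(*(b for b in blocks.values() if b <= target))
upper = set().union(*(b for b in blocks.values() if b & target))
print("lower approximation:", lower)      # certainly in the concept: {p1}
print("upper approximation:", upper)      # possibly in the concept: {p1, p2, p4}
```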

Keywords: Knowledge Extraction, Fuzzy Sets, Rough Sets, Neuro–Fuzzy Systems, Databases.

6378 A Real-Time Signal Processing Technique for MIDI Generation

Authors: Farshad Arvin, Shyamala Doraisamy

Abstract:

This paper presents a new hardware interface using a microcontroller which processes audio music signals into standard MIDI data. A technique for processing music signals by extracting note parameters from them is described. An algorithm to convert the voice samples for real-time processing without complex calculations is proposed. A high-frequency microcontroller is deployed as the main processor to execute the outlined algorithm. The MIDI data generated are transmitted using the EIA-232 protocol. Analysis of the generated data shows the feasibility of using microcontrollers as a real-time MIDI generation hardware interface.
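
Two concrete steps of such a pipeline are mapping a detected fundamental frequency to a MIDI note number (standard A4 = 440 Hz formula) and packing a Note On message for the serial link; the sketch below shows both, while the pitch-detection front end is not reproduced.

```python
# Sketch of the core note-parameter step: frequency -> nearest MIDI note
# number, then a 3-byte Note On message as sent over the EIA-232 serial link.
import math

def freq_to_midi(freq_hz: float) -> int:
    """Standard equal-temperament mapping with A4 = 440 Hz = MIDI note 69."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_note_on(note: int, velocity: int = 64, channel: int = 0) -> bytes:
    """Build a MIDI Note On message: status byte 0x90, then note and velocity."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

print(freq_to_midi(261.63))        # ~middle C -> 60
print(midi_note_on(60).hex())      # '903c40'
```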

Keywords: Signal processing, MIDI, Microcontroller, EIA-232.

6377 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being directed to the wind power generation industry as a leading source of clean energy. The wind energy sector is heavily dependent on, and driven by, the prediction of wind speed, which by the nature of wind is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression (GPR) to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of engaging the vector components of wind speed to increase the number of input data layers for data fusion with deep multi-fidelity GPR. The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in prediction accuracy: using the vector components of wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing signals in wind speed forecasting models.
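
As a simplified stand-in for the pipeline, the sketch below trains a plain single-fidelity scikit-learn GPR on synthetic wind-vector components and scores it with RMSE; the EWT denoising stage and the deep multi-fidelity structure are omitted.

```python
# Sketch of the regression stage with plain single-fidelity scikit-learn GPR
# standing in for the paper's deep multi-fidelity model; all wind data below
# are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
u = rng.normal(0, 3, 300)                          # hypothetical east-west component
v = rng.normal(0, 3, 300)                          # hypothetical north-south component
speed = np.hypot(u, v) + rng.normal(0, 0.3, 300)   # noisy target wind speed

X = np.column_stack([u, v])                        # vector components as predictors
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:250], speed[:250])

pred, std = gpr.predict(X[250:], return_std=True)
rmse = np.sqrt(mean_squared_error(speed[250:], pred))
print(f"test RMSE: {rmse:.3f}")
```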

Keywords: Data fusion, Gaussian process regression, signal denoise, temporal extrapolation.

6376 An Energy Aware Data Aggregation in Wireless Sensor Network Using Connected Dominant Set

Authors: M. Santhalakshmi, P. Suganthi

Abstract:

Wireless Sensor Networks (WSNs) have many advantages. Their deployment is easier and faster than that of wired sensor networks or other wireless networks, as they need no fixed infrastructure. To aggregate data, the network is organized by partitioning nodes into many small groups named clusters. Clustering underpins the performance of the sensor nodes: energy consumption is reduced by eliminating redundant energy use and by balancing energy use across the network. The aim of such clustering protocols is to prolong network life. Low Energy Adaptive Clustering Hierarchy (LEACH) is a popular protocol in WSNs; it is a clustering protocol in which random rotation of local cluster heads is used to distribute the energy load among all sensor nodes in the network. This paper proposes Connected Dominant Set (CDS) based cluster formation. CDS-based aggregation is a promising approach for reducing routing overhead, since messages are transmitted only within the virtual backbone formed by the CDS, and aggregation also lowers the ratio of responding hosts to the hosts in the virtual backbone. The CDS approach tries to increase network lifetime, considering parameters such as sensor lifetime and remaining and consumed energy, in order to achieve near-optimal data aggregation within the network. Experimental results show that CDS outperformed LEACH in the number of clusters formed, average packet loss rate, average end-to-end delay, network lifetime, and remaining energy.
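
For illustration, the sketch below grows a virtual backbone with a simple greedy connected-dominating-set construction on a hypothetical sensor topology; the paper's energy-aware construction and parameters are not reproduced.

```python
# Sketch of one simple greedy way to build a connected dominating set (CDS)
# to serve as a virtual backbone. The sensor topology is hypothetical.
import networkx as nx

def greedy_cds(G):
    """Grow a connected set until it dominates every node of the connected graph G."""
    start = max(G.nodes, key=G.degree)                  # highest-degree seed
    cds, covered = {start}, {start} | set(G[start])
    while covered != set(G.nodes):
        frontier = {n for c in cds for n in G[c]} - cds  # neighbors of the backbone
        best = max(frontier, key=lambda n: len(set(G[n]) - covered))
        cds.add(best)                                    # extend backbone greedily
        covered |= {best} | set(G[best])
    return cds

G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=7)  # guaranteed connected
backbone = greedy_cds(G)
print(f"CDS size: {len(backbone)} of {G.number_of_nodes()} nodes")
```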

Keywords: Wireless sensor network, connected dominant set, clustering, data aggregation.

6375 Deadline Missing Prediction for Mobile Robots through the Use of Historical Data

Authors: Edwaldo R. B. Monteiro, Patricia D. M. Plentz, Edson R. De Pieri

Abstract:

Mobile robotics is gaining an increasingly important role in modern society. Several tasks that are potentially dangerous or laborious for humans are assigned to mobile robots, which are increasingly capable. Many of these tasks need to be performed within a specified period, i.e., meet a deadline. Missing the deadline can result in financial and/or material losses. Mechanisms for predicting deadline misses are fundamental because corrective actions can then be taken to avoid or minimize the resulting losses. In this work, we propose a simple but reliable deadline-miss prediction mechanism for mobile robots based on historical data, and we use the Pioneer 3-DX robot, one of the most popular robots in academia, for experiments and simulations.
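
One plausible reading of a historical-data predictor: flag a likely miss when a high percentile of past task durations exceeds the remaining time budget. The run times below are hypothetical and the mechanism is an assumption, not the paper's exact design.

```python
# Sketch of a deadline-miss predictor built purely from historical data:
# compare the remaining time budget against a high percentile of past run
# times. Figures are hypothetical.
import numpy as np

history = np.array([41.2, 39.8, 44.5, 40.1, 43.0, 47.9, 42.2])  # past run times (s)

def predict_miss(remaining_budget_s: float, percentile: float = 90) -> bool:
    """True if the task will probably miss its deadline."""
    return float(np.percentile(history, percentile)) > remaining_budget_s

print(predict_miss(50.0))   # False: budget above the 90th percentile (~45.9 s)
print(predict_miss(43.0))   # True:  the 90th percentile exceeds the budget
```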

Keywords: Deadline missing, historical data, mobile robots, prediction mechanism.

6374 Thai Perception on Litecoin Value

Authors: Toby Gibbs, Suwaree Yordchim

Abstract:

This research analyzes factors affecting the success of Litecoin value within Thailand and develops a guideline of self-reliance for effective business implementation. The sample comprised 119 survey respondents. The results revealed four main factors affecting success: 1) Future career training should be pursued in applied Litecoin development. 2) Many respondents did not grasp the concept of a digital currency or see its benefits. 3) There is a great need to educate the next generation of learners on the benefits of Litecoin within the community. 4) A great majority did not know what Litecoin was. The guideline for self-reliance planning consisted of four aspects: 1) Development planning: arranging meet-up groups to conduct further education on Litecoin and share solutions for adoption into everyday usage; local communities need to develop awareness of the usefulness of Litecoin and share its value among friends and family. 2) Computer science and business management staff should develop skills to expand on the benefits of Litecoin within their departments. 3) Further research should be pursued on how Litecoin value can improve business and tourism within Thailand. 4) Local communities should focus on developing Litecoin awareness by encouraging street vendors to accept Litecoin as another form of payment for services rendered.

Keywords: Litecoin, Mining, Confirmations.

6373 A Survey on Facial Feature Points Detection Techniques and Approaches

Authors: Rachid Ahdid, Khaddouj Taifi, Said Safi, Bouzid Manaut

Abstract:

Automatic detection of facial feature points plays an important role in applications such as facial feature tracking, human-machine interaction, and face recognition. The majority of facial feature point detection methods using two-dimensional or three-dimensional data are covered in existing survey papers. In this article, selected approaches to facial feature point detection are gathered and described. The overview focuses on research exploiting facial feature point detection to represent the facial surface of two-dimensional or three-dimensional faces. In the conclusion, we discuss the advantages and disadvantages of the presented algorithms.

Keywords: Facial feature points, face recognition, facial feature tracking, two-dimensional data, three-dimensional data.

6372 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation

Authors: S. K. Pillai, M. K. Jeyakumar

Abstract:

Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria. Most algorithms use historical data to estimate model parameters: the known (actual) target values and the output produced by the model are compared, and the differences between the two form the basis for estimating the parameters. To compare different models developed from the same data, different criteria are used. Data obtained from short-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We also considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.
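
For context, two standard accuracy criteria from this literature are sketched below, MMRE and PRED(25); the paper's new linear-least-squares criterion is its own contribution and is not reproduced here.

```python
# Sketch of two standard accuracy criteria for comparing effort-estimation
# models: Mean Magnitude of Relative Error (MMRE) and PRED(25). The effort
# values are hypothetical.
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error."""
    return float(np.mean(np.abs(actual - predicted) / actual))

def pred(actual, predicted, level=0.25):
    """Fraction of estimates whose relative error is within `level`."""
    mre = np.abs(actual - predicted) / actual
    return float(np.mean(mre <= level))

actual = np.array([120.0, 80.0, 45.0, 200.0])      # hypothetical effort values
predicted = np.array([110.0, 95.0, 40.0, 240.0])
print(f"MMRE = {mmre(actual, predicted):.3f}, "
      f"PRED(25) = {pred(actual, predicted):.2f}")
```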

Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.

6371 Solar Seawater Desalination Still with Seawater Preheater Using Efficient Heat Transfer Oil: Numerical Investigation and Data Verification

Authors: Ahmed N. Shmroukh, Gamal Tag Abdel-Jaber, Rashed D. Aldughpassi

Abstract:

The feasibility of improving the performance of the proposed solar still unit, which operates in a very hot climate, is investigated numerically and verified with experimental data. This solar desalination unit, with the proposed auxiliary seawater preheating system using petrol-based textherm oil, was used to produce pure fresh water from seawater. The effective evaporation area of the basin is about 1 m². The unit was tested in two main operation modes: normal, and with the seawater preheating system. The results showed good agreement between the theoretical and experimental data, meaning that the numerical model can be reliably used to predict the proposed solar still's performance and design parameters. The results also showed that the freshwater productivity of the solar still in the modified preheating case is higher than in the normal case, with an increase in productivity of 42%.

Keywords: Improving productivity, seawater desalination, solar stills, theoretical model.

6370 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects

Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour

Abstract:

One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for providing engineering geological data in a predefined format. This is particularly evident in highway and railroad tunnel projects, in which a number of tunnels and different professional teams are involved. In this regard, comprehensive software needs to be designed, using accepted methods, to help engineering geologists prepare standard reports that contain sufficient input data for the design stage. To meet this need, an applied software tool has been designed using macro capabilities and Visual Basic for Applications (VBA) within Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, penetration rate, and so forth, can be calculated and reported in a standard format.

Keywords: Engineering geology, rock mass classification, rock mechanics, tunnel.

6369 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of the acquired data is of significant importance before the data processing and interpretation phases are executed. In this study, 2D seismic reflection survey data from the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach's alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, tectonically little affected, and rich in oil and gas reserves; however, subsurface 3D modeling and contouring using the acquired database revealed a high degree of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing likewise indicated the bias and erratic nature of the acquired database, and the low estimated value of alpha (α) in Cronbach's alpha test confirmed its poor reliability. Such a low-quality database needs extensive static correction or, in some cases, reacquisition of the data, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models, enabling much more informed decisions in hydrocarbon exploration.
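
Two of the named statistics have compact textbook forms; the sketch below computes a range-normalized RMSE and Cronbach's alpha on synthetic repeated measurements, as a formulation reference rather than the study's specific workflow.

```python
# Sketch of two of the statistics named above in their standard forms:
# NRMSE (RMSE normalized by the data range) and Cronbach's alpha over
# repeated measurements (items in columns). All data are synthetic.
import numpy as np

def nrmse(actual, predicted):
    """RMSE normalized by the range of the actual values."""
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return float(rmse / (actual.max() - actual.min()))

def cronbach_alpha(items):
    """items: 2-D array, rows = observations, columns = items/traces."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1 - item_vars / total_var))

rng = np.random.default_rng(3)
actual = rng.normal(0, 1, 200)
print(f"NRMSE = {nrmse(actual, actual + rng.normal(0, 0.2, 200)):.3f}")

latent = rng.normal(0, 1, (200, 1))
items = latent + rng.normal(0, 0.5, (200, 4))    # 4 noisy repeats of one signal
print(f"alpha = {cronbach_alpha(items):.3f}")    # high alpha = reliable data
```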

Keywords: Data quality, null hypothesis, seismic lines, seismic reflection survey.

6368 Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications

Authors: Sofien Chtourou, Mohamed Chtourou, Omar Hammami

Abstract:

Embedded systems must respect stringent real-time constraints, and various hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor may result in a cache hit, where the data are available, or a cache miss, where the data must be fetched from an external memory with an additional delay. It is therefore highly desirable to predict future memory accesses during execution in order to prefetch data appropriately without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks have the potential to tackle the nonlinear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor this choice over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions, and therefore millions of addresses must be predicted. This very challenging problem of neural-network-based prediction of large time series is approached in this paper by evaluating various neural network architectures based on the recurrent neural network paradigm, with pre-processing based on the Self-Organizing Map (SOM) classification technique.

Keywords: Address, data set, memory, prediction, recurrent neural network.

6367 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit

Authors: Ahmed Elrewainy

Abstract:

Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials present in the scene, the “endmembers”, contribute to each pixel's spectrum in different amounts called “abundances”. Unmixing the data cube is an important task for identifying the endmembers present in the cube when analyzing these images. Unsupervised unmixing is performed with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. The optimization problem is solved using a proximal method, iterative thresholding. This l1-norm basis pursuit optimization, as a sparsity-based unmixing technique, was used to unmix real and synthetic hyperspectral data cubes.
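
The proximal scheme referred to is, in generic form, iterative soft thresholding (ISTA); the sketch below applies it to a toy l1-regularized unmixing problem with a random dictionary standing in for real endmember spectra.

```python
# Sketch of iterative soft thresholding (ISTA) for the l1-regularized
# unmixing problem min_a ||y - D a||^2 / 2 + lam * ||a||_1, with a random
# dictionary standing in for real endmember spectra (all data synthetic).
import numpy as np

rng = np.random.default_rng(4)
D = rng.normal(size=(64, 128))                    # hypothetical dictionary
a_true = np.zeros(128)
a_true[[5, 40, 99]] = [0.5, 0.3, 0.2]             # sparse true abundances
y = D @ a_true + rng.normal(0, 0.01, 64)          # observed mixed spectrum

def ista(y, D, lam=1.0, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - y) / L             # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

a_hat = ista(y, D)
print("largest coefficients at:",                 # should match the true support
      sorted(np.argsort(np.abs(a_hat))[-3:].tolist()))
```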

Keywords: Basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets.

6366 System for Monitoring Marine Turtles Using Unstructured Supplementary Service Data

Authors: Luís Pina

Abstract:

The conservation of marine biodiversity keeps ecosystems in balance and ensures the sustainable use of resources. In this context, technological resources have been used to monitor marine species and allow biologists to obtain data in real time. Different mobile applications have been developed for data collection for monitoring purposes, but these systems are designed to be used only on third-generation (3G) phones or smartphones with Internet access, while in rural parts of developing countries Internet services and smartphones are scarce. Thus, the objective of this work is to develop a system to monitor marine turtles using Unstructured Supplementary Service Data (USSD), which users can access through basic mobile phones. The system aims to improve the data collection mechanism and enhance the effectiveness of current systems for monitoring sea turtles using any type of mobile device without Internet access. The system will be able to report information related to the biological activities of marine turtles and will also be used as a platform to help marine conservation entities receive reports of illegal sales of sea turtles. It can also serve as an educational tool for communities, providing knowledge and including communities in the process of monitoring marine turtles. Therefore, this work may contribute information to decision-making and to the implementation of contingency plans for marine conservation programs.
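
A flavor of the interaction model: the USSD gateway posts the user's accumulated input and the application replies with the next menu screen. The sketch below is a hypothetical handler; the "CON"/"END" continue/terminate prefixes follow one common gateway convention and are an assumption, not a detail from the paper.

```python
# Sketch of a tiny USSD-style menu handler. The gateway is assumed to send
# the user's accumulated keypad input as "*"-separated text; "CON" keeps the
# session open, "END" closes it (one common gateway convention).
def handle_ussd(text: str) -> str:
    parts = text.split("*") if text else []
    if not parts:
        return "CON Marine turtle monitor:\n1. Report sighting\n2. Report illegal sale"
    if parts[0] == "1":
        if len(parts) == 1:
            return "CON Enter beach code:"
        return f"END Sighting at beach {parts[1]} recorded. Thank you."
    if parts[0] == "2":
        return "END Report received. A conservation officer will follow up."
    return "END Invalid option."

print(handle_ussd(""))        # first screen
print(handle_ussd("1*K12"))   # completed sighting report
```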

Keywords: GSM, marine biology, marine turtles, USSD.

6365 A New Version of Annotation Method with a XML-based Knowledge Base

Authors: Mohammad Yasrebi, Somayeh Khosravi

Abstract:

Machine-understandable data, when strongly interlinked, constitutes the basis for the Semantic Web. Annotating web documents is one of the major techniques for creating metadata on the Web. Annotating websites defines the data they contain in a form which is suitable for interpretation by machines. In this paper, we present an approach, improved over our previous one [1], to annotate the text of websites based on a knowledge base.

Keywords: Knowledge base, ontology, semantic annotation, XML.

6364 Managing the Baltic Sea Region Resilience: Prevention, Treatment Actions and Circular Economy

Authors: J. Burlakovs, Y. Jani, L. Grinberga, M. Kriipsalu, O. Anne, I. Grinfelde, W. Hogland

Abstract:

Worldwide, future sustainable economies are oriented towards the sea: the maritime economy is becoming one of the strongest driving forces in many regions, as population growth is highest in coastal areas. For hundreds of years, sea resources were depleted unsustainably by fishing, mining, transportation, tourism, and waste. The European Sustainable Development Strategy identifies and develops actions to enable the EU to achieve a continuous, long-term improvement in quality of life through the creation of sustainable communities. The aim of this paper is to provide insight into Baltic Sea Region case studies on implemented actions concerning tourism-industry waste and beach wrack management in coastal areas, and the treatment of hazardous contaminants and plastic flows from waste, wastewater, and stormwater. The projects mentioned in this study promote the successful prevention of contaminant flows into the sea environment, and the perspectives they provide for creating valuable new products from residuals for a future circular economy are a step forward in a winning streak of green innovation.

Keywords: Resilience, hazardous waste, phytoremediation, water management, circular economy.

6363 Implementing Fault Tolerance with Proxy Signature on the Improvement of RSA System

Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi

Abstract:

Fault tolerance and data security are two important issues in modern communication systems. During the transmission of data between the sender and receiver, errors may occur frequently. The sender must therefore re-transmit the data to the receiver to correct these errors, which makes the system very fragile. To improve the scalability of the scheme, we present a proxy signature scheme with fault tolerance over an efficient and secure authenticated key agreement protocol based on the improved RSA system. Authenticated key agreement protocols play an important role in building secure communication between two parties.

Keywords: Proxy signature, fault tolerance, improved RSA, key agreement.

6362 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

Authors: Ritwik Dutta, Marilyn Wolf

Abstract:

This paper describes the tradeoffs and the design, from scratch, of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script uses the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed automatically via sensor APIs, or manually as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system allows trend analysis of selected health metrics over custom time intervals. Available on GitHub, the project is free to use for academic purposes of learning and experimenting, or for practical purposes by building on it.
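
A minimal sketch of the kind of PyMongo query the Python script might run to assemble a daily snapshot; the database, collection, and field names are hypothetical.

```python
# Sketch of a daily-snapshot query against a Mongo back-end via PyMongo.
# Collection layout and field names are hypothetical, not the project's.
from datetime import datetime, timedelta
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["health_dashboard"]

def daily_snapshot(user_id: str, day: datetime):
    start, end = day, day + timedelta(days=1)
    cursor = db.readings.find(
        {"user_id": user_id, "ts": {"$gte": start, "$lt": end}})
    goals = db.users.find_one({"_id": user_id})["goals"]   # e.g. {"steps": 8000}
    latest = {}
    for doc in cursor:
        latest[doc["metric"]] = doc["value"]               # keep last value seen
    return {m: (v, goals.get(m)) for m, v in latest.items()}

print(daily_snapshot("alice", datetime(2024, 5, 1)))       # metric -> (value, goal)
```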

Keywords: Flask, Java, JavaScript, health monitoring, long term care, Mongo, Python, smart home, software engineering, webserver.

6361 Comparison of Bayesian and Regression Schemes to Model Public Health Services

Authors: Sotirios Raptis

Abstract:

Bayesian reasoning (BR) and Linear (Auto) Regression (AR/LR) can predict different sources of data using priors or other data and can link social service demands across cohorts, whereas considering services in isolation (self-prediction) may lead to service misuse by ignoring context. The paper advocates that BR with Binomial (BD) or Normal (ND) models, or raw data (.D), used as probabilistic updates, can be compared to AR/LR to link services in Scotland and reduce cost by sharing healthcare (HC) resources. Clustering and cross-correlation, along with BR, LR, and AR, can better predict demand. Insurance companies and policymakers can link such services; examples include those offered to the elderly and low-income people, smoking-related services linked to mental health services, and epidemiological weight in children. 22 service packs published by Public Health Scotland (PHS) and the Scottish Government (SG) from 1981 to 2019 are used, broken into 110 yearly series (factors) and joined using LR, AR, and BR. Principal component analysis found 11 significant factors, while C-Means (CM) clustering gave five major clusters.
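
To make the comparison concrete, the sketch below contrasts a conjugate Beta-Binomial update (one simple BD-style Bayesian scheme) with an ordinary least-squares trend on a single toy demand series; the counts are hypothetical, not PHS/SG figures.

```python
# Sketch contrasting the two schemes on one demand series: a conjugate
# Beta-Binomial update (Bayesian) vs. least-squares trend extrapolation.
# The uptake counts are hypothetical.
import numpy as np

years = np.arange(2010, 2020)
users = np.array([210, 225, 240, 238, 260, 275, 270, 290, 300, 315])
population = 1000                            # eligible cohort size per year

# Bayesian: Beta(a, b) prior on the uptake rate, updated by Binomial counts.
a, b = 1.0, 1.0                              # uniform prior
a += users.sum()
b += population * len(users) - users.sum()
posterior_rate = a / (a + b)

# Regression: straight-line trend extrapolated one year ahead.
slope, intercept = np.polyfit(years, users, 1)
print(f"posterior mean uptake rate: {posterior_rate:.3f}")
print(f"LR forecast for 2020: {slope * 2020 + intercept:.0f} users")
```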

Keywords: Bayesian probability, cohorts, data frames, regression, services, prediction.

6360 Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents

Authors: Umar Manzoor, Kiran Ijaz, Wajiha Shamim, Arshad Ali Shahid

Abstract:

Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared amongst multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerant Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flow through the network and contain personal, private, or confidential information, and in banking transactions a minor change to the transaction can cause a great loss to the user. In this paper, we have modified the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We have used triple DES for encryption/decryption and the MD5 algorithm for message validity.
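
The two primitives named can be sketched with the pyca/cryptography and hashlib libraries; note that recent cryptography releases deprecate TripleDES (it has moved under a "decrepit" module), and both 3DES and MD5 are legacy today, so this illustrates the paper's choices rather than current best practice.

```python
# Sketch of message protection as described above: Triple DES (CBC) for
# confidentiality plus an MD5 digest for validity checking. Payload, key
# handling, and the naive zero padding are illustrative only.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(24), os.urandom(8)      # 3DES: 24-byte key, 8-byte block
message = b"transfer 100.00 to account 42"   # hypothetical transaction payload
digest = hashlib.md5(message).digest()       # sent alongside for validity

padded = message + b"\x00" * (-len(message) % 8)   # zero padding (demo only)
enc = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(padded) + enc.finalize()

dec = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).decryptor()
recovered = (dec.update(ciphertext) + dec.finalize()).rstrip(b"\x00")
assert recovered == message and hashlib.md5(recovered).digest() == digest
print("message verified:", recovered.decode())
```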

Keywords: Distributed Transaction, Security, Mobile Agents, FTIMA Architecture.

6359 Simulation Data Management Approach for Developing Adaptronic Systems – The W-Model Methodology

Authors: Roland S. Nattermann, Reiner Anderl

Abstract:

Existing process models for the development of mechatronic systems prescribe largely parallel activity in the detailed development phase, carried out largely independently in the various disciplines involved. The approach for a new process model presented here further develops existing models for use in the development of adaptronic systems. It is based on intermediate integration and an abstract model of the adaptronic system; based on this system model, a simulation of the global system behavior under external and internal factors or forces is developed. For the intermediate integration, a special data management system is used. In the presented approach, this data management system has a number of functions that are not part of the "normal" PDM functionality; therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. The concept divides the functions into six layers. In the first layer, a system model is created, which decomposes the adaptronic system according to its components and the various technical disciplines; moreover, the parameters and properties of the system are modeled and linked together with the requirements and the system model. The modeled parameters and properties result in a network, which is analyzed in the second layer. From this analysis, the adjustments to individual components necessary for specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior; this simulation is a precursor to the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and the necessary adjustments to other components are calculated. The remaining layers of the concept cover the automatic calculation of system reliability, the "normal" PDM functionality, and the integration of discipline-specific data into the system model. A prototypical implementation of such data management, with the addition of automatic system development, is being implemented using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.

Keywords: Adaptronic, data management, LOEWE-Centre AdRIA.

6358 Classroom Teacher Candidates' Definitions and Beliefs about Technology Integration

Authors: Ahmet Baytak, Cenk Akbıyık

Abstract:

The purpose of this paper is to present teacher candidates' beliefs about technology integration in their field of study, which in this case is classroom teaching. The study was conducted among first-year students at a college of education in Turkey and is based on both quantitative and qualitative data. For the quantitative data, a Likert scale was used, and for the qualitative data, pattern matching was employed. The primary findings showed that students defined educational technology as technologies that improve learning through their visual, easily accessible, and productive features. They also believe these technologies could positively affect their future students' learning.

Keywords: Educational technology, classroom teacher candidates, technology integration, teacher education.

6357 The Application of Distributed Optical Strain Sensing to Measure Rock Bolt Deformation Subject to Bedding Shear

Authors: Thomas P. Roper, Brad Forbes, Jurij Karlovšek

Abstract:

Shear displacement along bedding defects is a well-recognised behaviour when tunnelling and mining in stratified rock. This deformation can affect the durability and integrity of installed rock bolts. In-situ monitoring of rock bolt deformation under bedding shear cannot be accurately derived from traditional strain gauge bolts, as the sensors are too large and spaced too far apart to accurately assess concentrated displacement along discrete defects. A possible solution is the use of fiber optic technologies developed for precision monitoring. Distributed Optic Sensor (DOS) embedded rock bolts were installed in a tunnel project with the aim of measuring the bolt deformation profile under significant shear displacements. The technology successfully measured the 3D strain distribution along the bolts when subjected to bedding shear and resolved the axial and lateral strain constituents in order to determine the deformational geometry of the bolts. The results compared well with the current visual method for monitoring shear displacement using borescope holes, indicating that the DOS method is suitable.
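
One standard way to resolve axial and lateral (bending) constituents from two fibres bonded on opposite sides of a bolt is to take their mean and half-difference; the sketch below applies this to hypothetical strain profiles.

```python
# Sketch of splitting readings from two fibres on opposite sides of a bolt
# into axial strain (their mean) and bending strain (half their difference).
# The strain profiles below are hypothetical.
import numpy as np

z = np.linspace(0.0, 3.0, 7)                 # position along the bolt (m)
eps_top = np.array([120, 140, 400, 900, 380, 150, 110], float)     # microstrain
eps_bottom = np.array([110, 130, -80, -500, -60, 120, 100], float)

axial = (eps_top + eps_bottom) / 2.0         # stretch of the bolt axis
bending = (eps_top - eps_bottom) / 2.0       # concentrated at the shear plane
print("peak bending at z =", z[np.argmax(np.abs(bending))], "m")
```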

Keywords: Distributed optical strain sensing, geotechnical monitoring, rock bolt strain measurement, bedding shear displacement.

6356 Reconfigurable Autonomous Mini Robot Design using CPLD's

Authors: Aditya K, Dinesh P, Ramesh Bhakthavatchalu

Abstract:

This paper explains a project-based learning method in which autonomous mini-robots are developed for research, education, and entertainment purposes. In the case of remote systems, wireless sensors are deployed in critical areas; these collect data at specific time intervals and send them to a central wireless node which, based on certain preferred information, decides whether to turn a switch or control unit on or off. Such information transfers hardly amount to more than a few bytes, and hence low data rates suffice for such implementations. As a robot is a multidisciplinary platform, the interfacing issues involved are discussed in this paper. The paper is mainly focused on power supply, grounding, and decoupling issues.

Keywords: CPLD, power supply, decoupling, grounding.

6355 Structuring and Visualizing Healthcare Claims Data Using Systems Architecture Methodology

Authors: Inas S. Khayal, Weiping Zhou, Jonathan Skinner

Abstract:

Healthcare delivery systems around the world are in crisis. The need to improve health outcomes while decreasing healthcare costs has led to an imminent call to action to transform the healthcare delivery system. While bioinformatics and biomedical engineering have primarily focused on biological-level data and biomedical technology, there is clear evidence of the importance of the delivery of care to patient outcomes. Classic singular decomposition approaches from reductionist science are not capable of explaining complex systems. Approaches and methods from systems science and systems engineering are therefore utilized to structure healthcare delivery system data. Specifically, systems architecture is used to develop a multi-scale and multi-dimensional characterization of the healthcare delivery system, defined here as the Healthcare Delivery System Knowledge Base. This paper is the first to contribute a new method of structuring and visualizing a multi-dimensional and multi-scale healthcare delivery system using systems architecture in order to better understand healthcare delivery.

Keywords: Health informatics, systems thinking, systems architecture, healthcare delivery system, data analytics.
