Search results for: data reliability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7947

6537 Impact of Solar Energy Based Power Grid for Future Prospective of Pakistan

Authors: Muhammad Usman Sardar, Mazhar Hussain Baloch, Muhammad Shahbaz Ahmad, Zahir Javed Paracha

Abstract:

The shortfall of electrical energy in Pakistan is a challenge that adversely affects its industrial output and social growth. As elsewhere, Pakistan derives its electrical energy from a number of conventional sources. The exhaustion of petroleum and other conventional resources, rising costs, and extremely adverse climatic effects are taking their toll, especially on under-developed countries like Pakistan. Renewable energy sources such as hydropower, solar, wind, bio-energy, or a mix of some or all of them could provide a credible alternative to the conventional energy resources that would be not only cleaner but also sustainable. As a model, a solar energy-based power grid for the near future is proposed to offset the energy shortfall in combination with the existing sustainable natural energy resources. An assessment of the solar energy potential for electricity generation is presented for meeting energy demands with a higher level of reliability and sustainability. The model is based on the premise that the solar energy potential of Pakistan is both reliable and sustainable. This research estimates the present and future renewable energy resources, especially the impact of a solar energy-based power grid, for mitigating the energy shortage in Pakistan.

Keywords: Power grid network, solar photovoltaic (SPV) setups, solar power generation, solar energy technology (SET).

6536 Ensemble Approach for Predicting Student's Academic Performance

Authors: L. A. Muhammad, M. S. Argungu

Abstract:

Educational data mining (EDM) has received substantial attention. Data mining techniques have been proposed in various ways to uncover hidden knowledge in educational data. The results of such studies help academic institutions further enhance their learning processes and the methods by which knowledge is passed to students. Consequently, student performance is boosted and educational outcomes are undoubtedly enhanced. This study adopted a student performance prediction model premised on data mining techniques with Students' Essential Features (SEF). SEF are linked to the learner's interactivity with the e-learning management system. The performance of the predictive model is assessed with a set of classifiers, viz. Bayes Network, Logistic Regression, and Reduced Error Pruning (REP) Tree. The ensemble methods of Bagging, Boosting, and Random Forest (RF) are then applied to improve the performance of these single classifiers. The study reveals a strong association between learners' behaviors and their academic attainment. The REP Tree and its ensemble record the highest accuracy of 83.33% using SEF, and in terms of the Receiver Operating Characteristic (ROC) curve, the boosting ensemble of the REP Tree records 0.903, which is the best. This result further demonstrates the dependability of the proposed model.
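
As a rough illustration of the comparison described above, the following Python sketch (assumptions: scikit-learn, a pruned decision tree standing in for Weka's REP Tree, and synthetic interaction counts standing in for SEF; not the study's code or data) contrasts a single tree with its bagged, boosted, and Random Forest ensembles.

```python
# Hedged sketch (not the authors' code): ensemble classifiers over student
# interaction features, with a pruned decision tree standing in for Weka's REP Tree.
import numpy as np
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical Students' Essential Features (SEF): e-learning interactivity counts.
rng = np.random.default_rng(0)
X = rng.poisson(lam=[5, 3, 8, 2], size=(200, 4))              # logins, posts, quizzes, downloads
y = (X.sum(axis=1) + rng.normal(0, 3, 200) > 18).astype(int)  # pass / fail label (toy)

base = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)  # stand-in for REP Tree
models = {
    "single tree": base,
    "bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    roc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:14s} accuracy={acc:.3f}  ROC AUC={roc:.3f}")
```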

Keywords: Ensemble, bagging, Random Forest, boosting, data mining, classifiers, machine learning.

6535 Field-Programmable Gate Array Based Tester for Protective Relay

Authors: H. Bentarzi, A. Zitouni

Abstract:

The reliability of the power grid depends on the successful operation of thousands of protective relays. The failure of one relay to operate as intended may lead the entire power grid to blackout. In fact, major power system failures during transient disturbances may be caused by unnecessary protective relay tripping rather than by the failure of a relay to operate. Adequate relay testing provides a first defense against false relay trips and hence improves power grid stability and prevents catastrophic bulk power system failures. The goal of this research project is to design and enhance a relay tester using a Field Programmable Gate Array (FPGA) card, the NI 7851. A PC-based tester framework has been developed using a Simulink power system model for generating signals under different conditions (faults or transient disturbances) and LabVIEW for developing the graphical user interface and configuring the FPGA. In addition, an interface system has been developed for outputting and amplifying the signals without distortion. These signals should resemble those generated by the real power system and be large enough to test the relay's functionality. The generated signals, as displayed on the oscilloscope, are satisfactory. Furthermore, the proposed testing system can be used to improve the performance of protective relays.
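
As a simple illustration of the signal-generation step (the paper itself uses a Simulink power system model; this is only a hedged stand-in), the sketch below builds a three-phase voltage with a fault-induced sag of the kind that would be passed through the interface and amplifier stage to the relay under test.

```python
# Hedged illustration: three-phase test voltage with a mid-record fault-induced sag.
import numpy as np

fs, f, dur = 10_000, 50.0, 0.4               # sample rate (Hz), system frequency (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
fault = (t >= 0.2) & (t < 0.3)               # fault window: 0.2 s to 0.3 s
amplitude = np.where(fault, 0.3, 1.0)        # 70 % voltage sag during the fault

phases = {}
for name, shift in [("Va", 0.0), ("Vb", -2 * np.pi / 3), ("Vc", 2 * np.pi / 3)]:
    phases[name] = amplitude * np.sin(2 * np.pi * f * t + shift)

print({k: round(float(v[2500]), 3) for k, v in phases.items()})   # sample during the fault
```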

Keywords: Amplifier class D, FPGA, protective relay, tester.

6534 A Survey on Facial Feature Points Detection Techniques and Approaches

Authors: Rachid Ahdid, Khaddouj Taifi, Said Safi, Bouzid Manaut

Abstract:

Automatic detection of facial feature points plays an important role in applications such as facial feature tracking, human-machine interaction, and face recognition. The majority of facial feature point detection methods using two-dimensional or three-dimensional data are covered in existing survey papers. In this article, selected approaches to facial feature detection are gathered and described. This overview focuses on the class of research that exploits facial feature point detection to represent the facial surface for two-dimensional or three-dimensional faces. In the conclusion, we discuss the advantages and disadvantages of the presented algorithms.

Keywords: Facial feature points, face recognition, facial feature tracking, two-dimensional data, three-dimensional data.

6533 Mining Network Data for Intrusion Detection through Naïve Bayesian with Clustering

Authors: Dewan Md. Farid, Nouria Harbi, Suman Ahmmed, Md. Zahidur Rahman, Chowdhury Mofizur Rahman

Abstract:

Network security attacks are violations of information security policy that have received much attention from the computational intelligence community in the last decades. Data mining has become a very useful technique for detecting network intrusions by extracting useful knowledge from large volumes of network data or logs. The naïve Bayesian classifier is one of the most popular data mining algorithms for classification and provides an optimal way to predict the class of an unknown example. It has been shown, however, that a single set of probabilities derived from the data is not good enough to achieve a good classification rate. In this paper, we propose a new learning algorithm for mining network logs to detect network intrusions with a naïve Bayesian classifier, which first clusters the network logs into several groups based on the similarity of the logs, and then calculates the prior and conditional probabilities for each group of logs. To classify a new log, the algorithm determines which cluster the log belongs to and then uses that cluster's probability set to classify it. We tested the performance of the proposed algorithm on the KDD99 benchmark network intrusion detection dataset, and the experimental results show that it improves detection rates as well as reduces false positives for different types of network intrusions.
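
A minimal sketch of the clustering-then-per-cluster-probabilities idea is given below, assuming scikit-learn with k-means and a Gaussian naïve Bayes model per cluster on toy numeric features; the paper's exact probability estimation and KDD99 preprocessing are not reproduced.

```python
# Hedged sketch (assumptions, not the paper's code): cluster connection records,
# then fit a naive Bayes model per cluster; a new record is classified with the
# probability set of the cluster it falls into.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def fit_clustered_nb(X, y, n_clusters=3, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    models = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        if mask.sum() > 1 and len(np.unique(y[mask])) > 1:
            models[c] = GaussianNB().fit(X[mask], y[mask])
        else:                                    # fall back to a global model
            models[c] = GaussianNB().fit(X, y)
    return km, models

def predict_clustered_nb(km, models, X_new):
    clusters = km.predict(X_new)                 # which cluster each new log belongs to
    return np.array([models[c].predict(x[None, :])[0]
                     for c, x in zip(clusters, X_new)])

# Toy stand-in for KDD99-style numeric features (duration, bytes, count, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 3] > 0).astype(int)          # 1 = intrusion, 0 = normal (toy label)
km, models = fit_clustered_nb(X, y)
print(predict_clustered_nb(km, models, X[:5]))
```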

Keywords: Clustering, detection rate, false positive, naïve Bayesian classifier, network intrusion detection.

6532 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation

Authors: S. K. Pillai, M. K. Jeyakumar

Abstract:

Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data for this purpose. The known target (actual) values and the output produced by the model are compared, and the differences between the two form the basis for estimating the parameters. To compare different models developed from the same data, different criteria are used. Data obtained from small-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We also considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.
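
For illustration only, the sketch below computes two widely used accuracy criteria (MMRE and PRED(25)) together with an assumed linear-least-squares style criterion, namely a straight-line fit of actual against predicted effort with its residual sum of squares; the paper's exact criterion is not reproduced here.

```python
# Hedged sketch: common accuracy criteria for effort estimation plus an assumed
# linear-least-squares style criterion; data and model are illustrative only.
import numpy as np

def mmre(actual, predicted):
    return np.mean(np.abs(actual - predicted) / actual)            # mean magnitude of relative error

def pred_level(actual, predicted, level=0.25):
    return np.mean(np.abs(actual - predicted) / actual <= level)   # PRED(25)

def least_squares_fit(actual, predicted):
    # Fit actual = a*predicted + b; a slope near 1 and intercept near 0
    # indicate an unbiased model, and the residual sum of squares measures scatter.
    a, b = np.polyfit(predicted, actual, deg=1)
    residuals = actual - (a * predicted + b)
    return a, b, float(np.sum(residuals ** 2))

actual = np.array([120.0, 80.0, 45.0, 200.0, 60.0])        # person-hours (toy data)
predicted = np.array([110.0, 95.0, 40.0, 230.0, 55.0])
print("MMRE     :", round(mmre(actual, predicted), 3))
print("PRED(25) :", round(pred_level(actual, predicted), 3))
print("LS fit   :", least_squares_fit(actual, predicted))
```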

Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.

6531 Solar Seawater Desalination Still with Seawater Preheater Using Efficient Heat Transfer Oil: Numerical Investigation and Data Verification

Authors: Ahmed N. Shmroukh, Gamal Tag Abdel-Jaber, Rashed D. Aldughpassi

Abstract:

The feasibility of improving the performance of the proposed solar still unit, which operates in a very hot climate, is investigated numerically and verified with experimental data. This solar desalination unit, with a proposed auxiliary seawater preheating system using petroleum-based textherm oil, was used to produce pure fresh water from seawater. The effective evaporation area of the basin is about 1 m2. The unit was tested in two main operation modes, namely the normal mode and the mode with the seawater preheating system. The results showed good agreement between the theoretical and experimental data; this means that the numerical model can be reliably used for predicting the performance and design parameters of the proposed solar still. The results also showed that the fresh water productivity of the solar still in the modified preheating case is higher than in the normal case, with an increase in productivity of 42%.

Keywords: Improving productivity, seawater desalination, solar stills, theoretical model.

6530 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects

Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour

Abstract:

One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for providing engineering geological data in a predefined format. This is particularly evident in highway and railroad tunnel projects, in which a number of tunnels and different professional teams are involved. In this regard, comprehensive software needs to be designed using accepted methods in order to help engineering geologists prepare standard reports that contain sufficient input data for the design stage. Addressing this need, an applied software tool has been designed using macro capabilities and Visual Basic for Applications (VBA) in Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, the penetration rate and so forth, can be calculated and reported in a standard format.

Keywords: Engineering geology, rock mass classification, rock mechanic, tunnel.

6529 Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications

Authors: Sofien Chtourou, Mohamed Chtourou, Omar Hammami

Abstract:

Embedded systems need to respect stringent real-time constraints. Various hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor might result in a cache hit, where the data is available, or a cache miss, where the data needs to be fetched with an additional delay from an external memory. It is therefore highly desirable to predict future memory accesses during execution in order to appropriately prefetch data without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks have the potential to tackle the nonlinear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor this choice over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions and therefore produce millions of addresses to be predicted. This very challenging problem of neural-network-based prediction of large time series is approached in this paper by evaluating various neural network architectures based on the recurrent neural network paradigm, with pre-processing based on the Self-Organizing Map (SOM) classification technique.

Keywords: Address, data set, memory, prediction, recurrent neural network.

6528 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit

Authors: Ahmed Elrewainy

Abstract:

Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials present in the scene, the “endmembers”, share the spectral pixels in different proportions called “abundances”. Unmixing the data cube is an important task for identifying the endmembers present in the cube and for the analysis of these images. Unsupervised unmixing is performed with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. The optimization problem is solved using the proximal method of iterative thresholding. The l1-norm basis pursuit optimization problem, as a sparsity-based unmixing technique, was used to unmix real and synthetic hyperspectral data cubes.
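
The basis pursuit step can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch for the l1-regularised problem; the dictionary and data below are synthetic, not a real hyperspectral cube.

```python
# Hedged sketch: iterative soft-thresholding (ISTA) for the l1-regularised problem
# min_x 0.5*||Ax - b||^2 + lam*||x||_1, a minimal stand-in for sparsity-based unmixing.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.1, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy mixing example: a pixel spectrum b is a sparse combination of dictionary atoms.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 120))               # dictionary of candidate endmember spectra
x_true = np.zeros(120)
x_true[[7, 42, 99]] = [0.5, 0.3, 0.2]        # sparse abundances
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = ista(A, b, lam=0.05)
print("largest recovered coefficients:", np.argsort(-np.abs(x_hat))[:3])
```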

Keywords: Basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets.

6527 System for Monitoring Marine Turtles Using Unstructured Supplementary Service Data

Authors: Luís Pina

Abstract:

The conservation of marine biodiversity keeps ecosystems in balance and ensures the sustainable use of resources. In this context, technological resources have been used for monitoring marine species to allow biologists to obtain data in real time. Different mobile applications have been developed for data collection for monitoring purposes, but these systems are designed to be used only on third-generation (3G) phones or smartphones with Internet access, while in rural parts of developing countries Internet services and smartphones are scarce. Thus, the objective of this work is to develop a system to monitor marine turtles using Unstructured Supplementary Service Data (USSD), which users can access through basic mobile phones. The system aims to improve the data collection mechanism and enhance the effectiveness of current systems in monitoring sea turtles using any type of mobile device without Internet access. The system will be able to report information related to the biological activities of marine turtles. It will also be used as a platform to assist marine conservation entities in receiving reports of illegal sales of sea turtles. The system can additionally be utilized as an educational tool for communities, providing knowledge and allowing the inclusion of communities in the process of monitoring marine turtles. Therefore, this work may contribute information to decision-making and to the implementation of contingency plans for marine conservation programs.

Keywords: GSM, marine biology, marine turtles, USSD.

6526 A New Version of Annotation Method with a XML-based Knowledge Base

Authors: Mohammad Yasrebi, Somayeh Khosravi

Abstract:

Machine-understandable data, when strongly interlinked, constitutes the basis for the Semantic Web. Annotating web documents is one of the major techniques for creating metadata on the Web. Annotating websites defines the data they contain in a form that is suitable for interpretation by machines. In this paper, we present an improved approach over previous work [1] to annotate the texts of websites based on a knowledge base.

Keywords: Knowledge base, ontology, semantic annotation, XML.

6525 Performance Analysis in 5th Generation Massive Multiple-Input-Multiple-Output Systems

Authors: Jihad S. Daba, Jean-Pierre Dubois, Georges El Soury

Abstract:

Fifth generation wireless networks guarantee significant capacity enhancement to serve more clients and services at higher information rates with better reliability while consuming less power. The deployment of massive multiple-input-multiple-output (MIMO) technology guarantees broadband wireless networks through the use of base station antenna arrays to serve a large number of users on the same frequency and time-slot channels. In this work, we evaluate the performance of massive MIMO systems in 5th generation cellular networks in terms of capacity and bit error rate. Several cases were considered and analyzed to compare the performance of massive MIMO systems while varying the number of antennas at both the transmitting and receiving ends. We found that, unlike classical MIMO systems, reducing the number of transmit antennas while increasing the number of antennas at the receiver end provides a better solution for performance enhancement. In addition, enhanced orthogonal frequency division multiplexing and beam division multiple access schemes further improve the performance of massive MIMO systems and make them more reliable.
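
As a hedged illustration of the capacity part of such an evaluation (not the authors' simulation setup), the sketch below averages the standard MIMO capacity expression C = log2 det(I + (SNR/Nt) H H^H) over random Rayleigh channels for different transmit/receive antenna splits.

```python
# Hedged illustration: ergodic capacity of an Nt x Nr MIMO link with an i.i.d.
# Rayleigh channel, showing the effect of shifting antennas to the receiver side.
import numpy as np

def mimo_capacity(nt, nr, snr_db, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        gram = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        caps.append(np.log2(np.linalg.det(gram).real))   # C = log2 det(I + (SNR/Nt) H H^H)
    return np.mean(caps)

for nt, nr in [(8, 8), (4, 16), (2, 32)]:   # same total antenna count, different split
    print(f"Nt={nt:2d}, Nr={nr:2d}: capacity ~ {mimo_capacity(nt, nr, 10):.2f} bit/s/Hz")
```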

Keywords: Beam division multiple access, D2D communication, enhanced OFDM, fifth generation broadband, massive MIMO.

6524 Implementing Fault Tolerance with Proxy Signature on the Improvement of RSA System

Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi

Abstract:

Fault tolerance and data security are two important issues in modern communication systems. During the transmission of data between the sender and receiver, errors may occur frequently. The sender must therefore re-transmit the data to the receiver in order to correct these errors, which weakens the system. To improve the scalability of the scheme, we present a proxy signature scheme with fault tolerance over an efficient and secure authenticated key agreement protocol based on the improved RSA system. Authenticated key agreement protocols play an important role in building a secure communications network between the two parties.

Keywords: Proxy signature, fault tolerance, improved RSA, key agreement.

6523 A Distance Function for Data with Missing Values and Its Application

Authors: Loai AbdAllah, Ilan Shimshoni

Abstract:

Missing values are common in real-world data. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, we define in this paper a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates is missing, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because its performance depends strongly on the chosen distance measure, we opted for the k nearest neighbor (kNN) classifier to evaluate the ability of our distance to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
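
A simplified sketch of the idea is given below, assuming NumPy: complete pairs use the Mahalanobis distance, while a missing coordinate contributes its expected squared difference under the empirical distribution of that coordinate. This is a stand-in for intuition only, not the paper's exact Bhattacharyya-based formula.

```python
# Hedged, simplified sketch (not the paper's exact formula): Mahalanobis distance
# for complete pairs; a missing coordinate is replaced by the expected squared
# difference under that coordinate's empirical distribution.
import numpy as np

def make_distance(X):
    """X: training data with np.nan marking missing values."""
    mean = np.nanmean(X, axis=0)
    var = np.nanvar(X, axis=0)
    complete = X[~np.isnan(X).any(axis=1)]
    cov_inv = np.linalg.pinv(np.cov(complete.T))

    def dist(a, b):
        missing = np.isnan(a) | np.isnan(b)
        if not missing.any():
            d = a - b
            return float(np.sqrt(d @ cov_inv @ d))
        total = 0.0
        for i in range(len(a)):
            if missing[i]:
                known = b[i] if np.isnan(a[i]) else a[i]
                if np.isnan(known):                 # both coordinates missing
                    total += 2 * var[i]
                else:                               # E[(x_i - z)^2] = (mu_i - z)^2 + var_i
                    total += (mean[i] - known) ** 2 + var[i]
            else:
                total += (a[i] - b[i]) ** 2
        return float(np.sqrt(total))
    return dist

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[5, 1] = np.nan                                    # toy missing entry
d = make_distance(X)
print(d(X[0], X[1]), d(X[5], X[1]))
# The resulting distances can feed a kNN classifier, e.g. with metric="precomputed".
```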

Keywords: Missing values, Distance metric, Bhattacharyya distance.

6522 Exploring Performance-Based Music Attributes for Stylometric Analysis

Authors: Abdellghani Bellaachia, Edward Jimenez

Abstract:

Music Information Retrieval (MIR) and modern data mining techniques are applied to identify style markers in MIDI music for stylometric analysis and author attribution. Over 100 attributes are extracted from a library of 2830 songs and then mined using supervised learning data mining techniques. Two attributes are identified that provide high information gain. These attributes are then used as style markers to predict authorship. Using these style markers, the authors are able to correctly distinguish songs written by the Beatles from those that were not, with a precision and accuracy of over 98 percent. The identification of these style markers, as well as the architecture of this research, provides a foundation for future research in musical stylometry.
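
A hedged sketch of the attribute-ranking step is shown below, using scikit-learn's mutual information as the information-gain measure; the attribute names and labels are hypothetical stand-ins for the extracted MIDI features.

```python
# Hedged sketch: rank extracted MIDI attributes by information gain (here,
# mutual information with the author label) and keep the top style markers.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
attributes = ["avg_pitch_interval", "note_density", "syncopation", "chord_variety"]
X = rng.normal(size=(300, len(attributes)))            # stand-in for 100+ extracted attributes
y = (X[:, 1] + 0.8 * X[:, 2] > 0).astype(int)          # 1 = "Beatles", 0 = other (toy label)

gain = mutual_info_classif(X, y, random_state=0)
for name, g in sorted(zip(attributes, gain), key=lambda p: -p[1]):
    print(f"{name:20s} {g:.3f}")
```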

Keywords: Music Information Retrieval, Music Data Mining, Stylometry.

6521 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

Authors: Ritwik Dutta, Marilyn Wolf

Abstract:

This paper describes the tradeoffs and the design from scratch of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script uses the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed automatically via sensor APIs or manually as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. Available on GitHub, the project is free to use for academic purposes such as learning and experimenting, or for practical purposes by building on it.
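
A minimal sketch of the back-end query idea, assuming PyMongo and hypothetical collection and field names (not taken from the project), is given below; it aggregates one day of a user's metrics and prints them against stored goals.

```python
# Hedged sketch: query a Mongo collection with PyMongo and print a daily snapshot
# of a user's health metrics against target goals. Names are hypothetical.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["health_dashboard"]

def daily_snapshot(user_id, day=None):
    day = day or datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
    user = db.users.find_one({"_id": user_id}, {"goals": 1}) or {}
    goals = user.get("goals", {})
    cursor = db.health_data.find({
        "user_id": user_id,
        "timestamp": {"$gte": day, "$lt": day + timedelta(days=1)},
    })
    totals = {}
    for doc in cursor:                               # sum each metric over the day
        totals[doc["metric"]] = totals.get(doc["metric"], 0) + doc["value"]
    for metric, value in totals.items():
        print(f"{metric}: {value} (goal: {goals.get(metric, 'n/a')})")

daily_snapshot("user-001")
```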

Keywords: Flask, Java, JavaScript, health monitoring, long term care, Mongo, Python, smart home, software engineering, webserver.

6520 Comparison of Bayesian and Regression Schemes to Model Public Health Services

Authors: Sotirios Raptis

Abstract:

Bayesian reasoning (BR) and Linear (Auto) Regression (AR/LR) can predict different sources of data using priors or other data and can link social service demands in cohorts, whereas considering each service in isolation (self-prediction) may lead to service misuse by ignoring the context. The paper argues that BR with Binomial (BD) or Normal (ND) models, or with raw data (.D), used as probabilistic updates, can be compared to AR/LR to link services in Scotland and reduce cost by sharing healthcare (HC) resources. Clustering and cross-correlation, along with BR, LR, and AR, can better predict demand. Insurance companies and policymakers can link such services; examples include services offered to the elderly and low-income people, smoking-related services linked to mental health services, and epidemiological weight in children. 22 service packs published by Public Health Services (PHS) Scotland and the Scottish Government (SG) from 1981 to 2019 are used, broken into 110 year series (factors) and joined using LR, AR, and BR. Principal component analysis found 11 significant factors, while C-Means (CM) clustering gave five major clusters.
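
For intuition only, the sketch below contrasts the two schemes on a synthetic uptake series: a Binomial-model Bayesian update of a service-uptake rate with a Beta prior versus a plain linear regression trend; it is not the paper's model or the PHS Scotland data.

```python
# Hedged sketch of the two schemes being compared: a Binomial-model Bayesian
# update versus a simple linear regression trend, on a synthetic series.
import numpy as np

years = np.arange(2010, 2020)
users = np.array([120, 132, 128, 150, 161, 158, 170, 182, 190, 199])   # service users (toy)
population = 1000                                                      # cohort size (toy)

# Bayesian: Beta(1, 1) prior on the uptake rate, updated with Binomial counts.
alpha, beta = 1.0, 1.0
alpha += users.sum()
beta += (population - users).sum()
posterior_mean = alpha / (alpha + beta)

# Regression: linear trend extrapolated one year ahead.
slope, intercept = np.polyfit(years, users, deg=1)
lr_forecast = slope * 2020 + intercept

print(f"Bayesian posterior uptake rate : {posterior_mean:.3f}")
print(f"Linear regression 2020 forecast: {lr_forecast:.1f} users")
```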

Keywords: Bayesian probability, cohorts, data frames, regression, services, prediction.

6519 Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents

Authors: Umar Manzoor, Kiran Ijaz, Wajiha Shamim, Arshad Ali Shahid

Abstract:

Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared amongst multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerant Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and contains personal, private, or confidential information. In banking transactions, a minor change in the transaction can cause a great loss to the user. In this paper, we have modified the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We have used triple DES for encryption/decryption and the MD5 algorithm for checking message validity.
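
A hedged illustration of the added security layer is sketched below using PyCryptodome's Triple DES in CBC mode and an MD5 digest for message validity; the key handling and message framing are simplified and are not FTIMA's actual protocol.

```python
# Hedged illustration: Triple DES (CBC) for confidentiality plus an MD5 digest
# for message validity, using PyCryptodome. Framing is simplified, not FTIMA's.
import hashlib
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = DES3.adjust_key_parity(get_random_bytes(24))       # 3-key Triple DES
message = b"transfer 500 to account 12345"
digest = hashlib.md5(message).hexdigest()                 # validity check value

iv = get_random_bytes(8)
cipher = DES3.new(key, DES3.MODE_CBC, iv)
ciphertext = cipher.encrypt(pad(message + b"|" + digest.encode(), DES3.block_size))

# Receiver side: decrypt, split, and verify the MD5 digest.
plain = unpad(DES3.new(key, DES3.MODE_CBC, iv).decrypt(ciphertext), DES3.block_size)
received_msg, received_digest = plain.rsplit(b"|", 1)
assert hashlib.md5(received_msg).hexdigest() == received_digest.decode()
print("message intact:", received_msg.decode())
```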

Keywords: Distributed Transaction, Security, Mobile Agents, FTIMA Architecture.

6518 Classroom Teacher Candidates' Definitions and Beliefs about Technology Integration

Authors: Ahmet Baytak, Cenk Akbıyık

Abstract:

The purpose of this paper is to present teacher candidates' beliefs about technology integration in their field of study, which in this case is classroom teaching. The study was conducted among first-year students in a college of education in Turkey. This study is based on both quantitative and qualitative data. For the quantitative data, a Likert scale was used, and for the qualitative data, pattern matching was employed. The primary findings showed that students defined educational technology as technologies that improve learning through their visual, easily accessible, and productive features. They also believe these technologies could positively affect their future students' learning.

Keywords: Educational technology, classroom teacher candidates, technology integration, teacher education.

6517 Reconfigurable Autonomous Mini Robot Design using CPLD's

Authors: Aditya K, Dinesh P, Ramesh Bhakthavatchalu

Abstract:

This paper explains a project-based learning method in which autonomous mini-robots are developed for research, education, and entertainment purposes. In the case of remote systems, wireless sensors are deployed in critical areas to collect data at specific time intervals and send them to a central wireless node, which, based on certain preferred information, makes decisions to turn a switch or control unit on or off. Such information transfers rarely amount to more than a few bytes, and hence low data rates suffice for such implementations. As a robot is a multidisciplinary platform, the interfacing issues involved are discussed in this paper. The paper is mainly focused on power supply, grounding, and decoupling issues.

Keywords: CPLD, power supply, decoupling, grounding.

6516 Development of Software Complex for Digitalization of Enterprise Activities

Authors: G. T. Balakayeva, K. K. Nurlybayeva, M. B. Zhanuzakov

Abstract:

In the proposed work, we have developed software and designed a software architecture for the implementation of enterprise business processes. The proposed software has a multi-level architecture using a domain-specific tool. The developed architecture guarantees the availability, reliability, and security of the system and of the implemented business processes, which are the basis for effective enterprise management. Automating business processes, automating the algorithmic stages of an enterprise, developing optimal algorithms for managing activities, controlling and monitoring, reducing risks, and improving results help organizations achieve strategic goals quickly and efficiently. The software described in this article can connect to the corporate information system via two methods: a desktop client and a web client. By calling the application server, the desktop client connects to the information system on the company's work PCs over a local network. Outside the organization, the user can interact with the information system via a web browser, which acts as a web client and connects to a web server. The developed software consists of several integrated modules that share resources and interact with each other through an API. The following technology stack was used during development: Node.js, React.js, MongoDB, Nginx, cloud technologies, and Python.

Keywords: Algorithms, document processing, automation, integrated modules, software architecture, software design, information system.

6515 Structuring and Visualizing Healthcare Claims Data Using Systems Architecture Methodology

Authors: Inas S. Khayal, Weiping Zhou, Jonathan Skinner

Abstract:

Healthcare delivery systems around the world are in crisis. The need to improve health outcomes while decreasing healthcare costs has led to an imminent call to action to transform the healthcare delivery system. While bioinformatics and biomedical engineering have primarily focused on biological-level data and biomedical technology, there is clear evidence of the importance of the delivery of care for patient outcomes. Classic singular decomposition approaches from reductionist science are not capable of explaining complex systems. Approaches and methods from systems science and systems engineering are utilized to structure healthcare delivery system data. Specifically, systems architecture is used to develop a multi-scale and multi-dimensional characterization of the healthcare delivery system, defined here as the Healthcare Delivery System Knowledge Base. This paper is the first to contribute a new method of structuring and visualizing a multi-dimensional and multi-scale healthcare delivery system using systems architecture in order to better understand healthcare delivery.

Keywords: Health informatics, systems thinking, systems architecture, healthcare delivery system, data analytics.

6514 The Effect of Maximum Strain on Fatigue Life Prediction for Natural Rubber Material

Authors: Chang S. Woo, Hyun S. Park, Wan D. Kim

Abstract:

Fatigue life prediction and evaluation are key technologies for assuring the safety and reliability of automotive rubber components. The objective of this study is to develop a fatigue analysis process for vulcanized rubber components that is applicable for predicting fatigue life at the initial product design step. A fatigue life prediction methodology for vulcanized natural rubber was proposed by incorporating finite element analysis and a fatigue damage parameter of maximum strain appearing at the critical location determined from fatigue tests. In order to develop an appropriate fatigue damage parameter for the rubber material, a series of displacement-controlled fatigue tests was conducted using three-dimensional dumbbell specimens with different levels of mean displacement. It was shown that the maximum strain is a proper damage parameter, taking the mean displacement effects into account. Nonlinear finite element analyses of the three-dimensional dumbbell specimens were performed based on a hyper-elastic material model determined from uniaxial tension, equi-biaxial tension, and planar tests. The fatigue analysis procedure employed in this study could be used approximately for fatigue design.

Keywords: Rubber, Material test, Finite element analysis, Strain, Fatigue test, Fatigue life prediction.

6513 Service Quality vs. Customer Satisfaction: Perspectives of Visitors to a Public University Library

Authors: Norazah Mohd Suki, Norbayah Mohd Suki

Abstract:

This study proposes a conceptual model and empirically tests the relationships between the dimensions of service quality perceived in interactions between customers and librarians (i.e., tangibles, responsiveness, assurance, reliability, and empathy) and the dependent variable of customer satisfaction with library services. The SERVQUAL instrument was administered to 100 respondents comprising staff and students at a public higher learning institution in the Federal Territory of Labuan, Malaysia, all of whom were public university library users. Results revealed that all service quality dimensions tested were significant and influenced the customer satisfaction of visitors to a public university library. Assurance is the most important factor influencing customer satisfaction with the services rendered by the librarians. It is imperative for library management to note that the top five service attributes that gained the greatest attention from library visitors' perspective include employees' willingness to help customers, the availability of customer representatives online for responses to queries, library staff actively and promptly providing services, clear signs in the building, and friendly and courteous library staff. This study provides valuable results concerning the determinants of service quality and customer satisfaction with public university library services from the users' perspective.
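
A hedged sketch of this style of analysis, assuming statsmodels and simulated Likert scores rather than the survey data, regresses satisfaction on the five SERVQUAL dimensions.

```python
# Hedged sketch: multiple regression of customer satisfaction on the five
# SERVQUAL dimensions using statsmodels; the scores below are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame(
    rng.integers(1, 8, size=(n, 5)),                     # 7-point Likert scores
    columns=["tangibles", "responsiveness", "assurance", "reliability", "empathy"],
).astype(float)
df["satisfaction"] = (0.2 * df["tangibles"] + 0.3 * df["assurance"]
                      + 0.2 * df["reliability"] + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["tangibles", "responsiveness", "assurance", "reliability", "empathy"]])
model = sm.OLS(df["satisfaction"], X).fit()
print(model.summary())
```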

Keywords: Service Quality, Customer Satisfaction, SERVQUAL Model, Multiple Regression Analysis

6512 Building an Integrated Relational Database from Swiss Nutrition National Survey and Swiss Health Datasets for Data Mining Purposes

Authors: Ilona Mewes, Helena Jenzer, Farshideh Einsele

Abstract:

Objective: The objective of the study was to integrate two large databases, the Swiss nutrition national survey (menuCH) and the Swiss health national survey 2012, for data mining purposes. Each database contains demographic base data. An integrated Swiss database is built in order to later discover critical food consumption patterns linked with lifestyle diseases known to be strongly tied to food consumption. Design: The Swiss nutrition national survey (menuCH), with approximately 2000 respondents from two different surveys, one by phone and the other by questionnaire, along with the Swiss health national survey 2012 with 21500 respondents, were pre-processed, cleaned and finally integrated into a unique relational database. Results: The result of this study is an integrated relational database from the Swiss nutritional and health databases.

Keywords: Health informatics, data mining, nutritional and health databases, nutritional and chronical databases.

6511 Applications for Additive Manufacturing Technology for Reducing the Weight of Body Parts of Gas Turbine Engines

Authors: Liubov A. Magerramova, Mikhail A. Petrov, Vladimir V. Isakov, Liana A. Shcherbinina, Suren G. Gukasyan, Daniil V. Povalyukhin, Olga G. Klimova-Korsmik, Darya V. Volosevich

Abstract:

Aircraft engines are developing along the path of increasing service life, strength, reliability, and safety. The production of gas turbine engine body parts is a complex design and technological task. Particularly complex in design and manufacturing are the casings of the input stages of helicopter gearboxes and the central drives of aircraft engines. Traditional technologies, such as precision casting or isothermal forging, are characterized by significant limitations in parts production. For parts like housings, additive technologies guarantee spatial freedom and limitless or flexible design. This article presents the results of computational and experimental studies. These investigations justify the applicability of additive technologies (AT) to reduce the weight of aircraft gearbox housing parts by up to 32%. This is possible due to geometric optimization compared to the classical, less flexible manufacturing methods and to as-cast aircraft parts with over-insured safety factors. Using the example of the body of the input stage of an aircraft gearbox, a visualization of the layer-by-layer manufacturing of a part based on thermal deformation was demonstrated.

Keywords: Additive technologies, gas turbine engines, geometric optimization, weight reduction.

6510 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran

Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh

Abstract:

Under-passing tunnels are strongly influenced by the surrounding soils. There are complexities in specifying real soil behavior, owing to the fact that many uncertainties exist in soil properties and, additionally, inappropriate soil constitutive models may be used. Such factors may cause settlements in the numerical analysis that are incompatible with the values obtained in actual construction. This paper reports a case study on a specific tunnel constructed by NATM. The tunnel has a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m with 2.5 lanes. The numerical modeling was based on a 2D finite element program. The soil material behavior was modeled with the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times larger than the measured one after the entire installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters are accurately revised by a numerical back-analysis using laboratory and field test data and based on the obtained monitoring data. The obtained result confirms that the soil parameters are typically conservatively underestimated. In addition, the constitutive models cannot be applied properly to all soil conditions.

Keywords: NATM tunnel, initial lining, field test data, laboratory test data, monitoring data, numerical back-analysis.

6509 Application of Neural Networks in Financial Data Mining

Authors: Defu Zhang, Qingshan Jiang, Xin Li

Abstract:

This paper deals with the application of a well-known neural network technique, the multilayer back-propagation (BP) neural network, in financial data mining. A modified neural network forecasting model is presented, and an intelligent mining system is developed. The system can forecast buying and selling signals according to the prediction of future trends in the stock market and provide decision-making support for stock investors. A simulation over seven years of the Shanghai Composite Index shows that the return achieved by this mining system is about three times as large as that achieved by the buy-and-hold strategy, so it is advantageous to apply neural networks to forecast financial time series, and different investors could benefit from it.
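
A minimal sketch of the general idea (not the authors' modified model or the Shanghai Composite data) is given below: a back-propagation network predicts next-day up/down signals from lagged returns, and its cumulative return is compared with buy-and-hold on a synthetic series.

```python
# Hedged sketch: a back-propagation network predicting next-day up/down signs
# from lagged returns, compared with buy-and-hold on a synthetic price series.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
returns = rng.normal(0.0003, 0.01, 2000)                    # synthetic daily returns
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)                        # next-day sign

split = 1500
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])

signal = net.predict(X[split:])                             # 1 = buy, 0 = stay out
strategy_return = np.prod(1 + returns[lags:][split:] * signal) - 1
buy_and_hold = np.prod(1 + returns[lags:][split:]) - 1
print(f"strategy {strategy_return:.2%}  vs  buy-and-hold {buy_and_hold:.2%}")
```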

Keywords: Data mining, neural network, stock forecasting.

6508 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

The Bayesian approach can be used for parameter identification and extraction in state space models, and its ability to analyze sequences of data in dynamical systems has been demonstrated in different works in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for the identification of measurement noise variances is developed and then applied to the estimation of the dynamical state and measurement data in a discrete linear dynamical system. At each time step, this algorithm estimates the noise variance of the measurements and the state of the system with a Kalman filter. An approximation is then designed at each step separately, and consequently sufficient statistics of the state and noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Different simulations are carried out to show the influence of the noise variance in the measurement data on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated in a Kalman filter without the Bayesian formulation. Then, the simulation is applied to the adaptive Kalman filter with the ability to track the noise variance in the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, and the true variance of the data is obtained by the algorithm and compared in different scenarios. Afterwards, a typical nonlinear state space model with induced measurement noise is simulated with this approach. Finally, the performance and the important limitations of the algorithm in these simulations are explained.
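
A simplified stand-in for the adaptation idea is sketched below: a scalar Kalman filter that re-estimates its measurement noise variance R from recent innovations; the paper's Bayesian fixed-point iteration is not reproduced.

```python
# Hedged, simplified sketch: a scalar Kalman filter that adapts its measurement
# noise variance R from the innovation sequence (a stand-in for the Bayesian
# fixed-point update described above, not the paper's exact algorithm).
import numpy as np

def adaptive_kf(z, q=1e-4, r0=1.0, window=30):
    x, p, r = 0.0, 1.0, r0
    innovations, estimates, r_track = [], [], []
    for zk in z:
        p_pred = p + q                    # predict (random-walk state model)
        nu = zk - x                       # innovation
        k = p_pred / (p_pred + r)         # Kalman gain
        x = x + k * nu                    # update state
        p = (1 - k) * p_pred              # update covariance
        innovations.append(nu)
        recent = np.array(innovations[-window:])
        # Innovation-based adaptation: E[nu^2] = P_pred + R, so R ~ mean(nu^2) - P_pred.
        r = max(np.mean(recent ** 2) - p_pred, 1e-6)
        estimates.append(x)
        r_track.append(r)
    return np.array(estimates), np.array(r_track)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.01, 400))                       # slowly drifting state
noise_std = np.r_[np.full(200, np.sqrt(0.5)), np.full(200, np.sqrt(2.0))]
z = truth + rng.normal(0, 1, 400) * noise_std                     # R switches from 0.5 to 2.0
est, r_hat = adaptive_kf(z)
print("estimated R, first half :", round(r_hat[:200].mean(), 2))
print("estimated R, second half:", round(r_hat[200:].mean(), 2))
```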

Keywords: Adaptive filtering, Bayesian approach, Kalman filtering, variance tracking.
