Search results for: heterogeneous massive data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25995


24735 Conception of a Predictive Maintenance System for Forest Harvesters from Multiple Data Sources

Authors: Lazlo Fauth, Andreas Ligocki

Abstract:

For cost-effective use of harvesters, expensive repairs and unplanned downtimes must be reduced as far as possible. The predictive detection of failing systems and the calculation of intelligent service intervals, necessary to avoid these factors, require in-depth knowledge of the machines' behavior. Such know-how requires continuous monitoring of the machine state from different technical perspectives. In this paper, three approaches will be presented as they are currently pursued in the publicly funded project PreForst at Ostfalia University of Applied Sciences. These include the intelligent linking of workshop and service data, sensors on the harvester, and a special online hydraulic oil condition monitoring system. Furthermore, the paper shows potentials as well as challenges for the use of these data in the conception of a predictive maintenance system.

Keywords: predictive maintenance, condition monitoring, forest harvesting, forest engineering, oil data, hydraulic data

Procedia PDF Downloads 145
24734 Sampled-Data Control for Fuel Cell Systems

Authors: H. Y. Jung, Ju H. Park, S. M. Lee

Abstract:

A sampled-data controller is presented for solid oxide fuel cell systems expressed by a sector-bounded nonlinear model. Sector-bounded nonlinear systems consist of a feedback connection between a linear dynamical system and a nonlinearity satisfying certain sector-type constraints. The sampled-data control scheme is also very useful because it can handle digital controllers, and increasing research effort has been devoted to sampled-data control systems with the development of modern high-speed computers. The proposed control law is obtained by solving a convex problem subject to several linear matrix inequalities. Simulation results are given to show the effectiveness of the proposed design method.

Keywords: sampled-data control, fuel cell, linear matrix inequalities, nonlinear control

Procedia PDF Downloads 565
24733 How Western Donors Allocate Official Development Assistance: New Evidence From a Natural Language Processing Approach

Authors: Daniel Benson, Yundan Gong, Hannah Kirk

Abstract:

Advancements in natural language processing techniques have increased data processing speeds and reduced the need for the cumbersome, manual data processing often required when processing data from multilateral organizations for specific purposes. Using named entity recognition (NER) modeling and the Creditor Reporting System database of the Organisation for Economic Co-operation and Development (OECD), we present the first geotagged dataset of OECD donor Official Development Assistance (ODA) projects on a global, subnational basis. Our resulting data contain 52,086 ODA projects geocoded to subnational locations across 115 countries, worth a combined $87.9bn. This represents the first global OECD donor ODA project database with geocoded projects. We use this new data to revisit old questions of how ‘well’ donors allocate ODA to the developing world. This understanding is imperative for policymakers seeking to improve ODA effectiveness.
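As a rough illustration of the geotagging step, the sketch below substitutes a hard-coded gazetteer lookup for the trained NER model and geocoding service used in the study; all place names, coordinates, and project descriptions here are invented for the example.

```python
import re

# Toy gazetteer mapping place names to (lat, lon); a real pipeline would use
# a trained NER model plus a geographic database instead of this lookup table.
GAZETTEER = {
    "Nairobi": (-1.286, 36.817),
    "Kampala": (0.347, 32.582),
    "Dhaka": (23.810, 90.412),
}

def geotag_project(description):
    """Return (place, lat, lon) tuples for gazetteer names found in the text."""
    hits = []
    for place, (lat, lon) in GAZETTEER.items():
        if re.search(r"\b" + re.escape(place) + r"\b", description):
            hits.append((place, lat, lon))
    return hits

# Invented project descriptions standing in for CRS database records
projects = [
    "Water and sanitation programme in Nairobi slums",
    "Flood resilience support, Dhaka division",
]
tagged = [geotag_project(p) for p in projects]
```

In the actual pipeline, entity spans extracted by the NER model would be resolved against a full geographic database rather than a three-entry dictionary.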

Keywords: international aid, geocoding, subnational data, natural language processing, machine learning

Procedia PDF Downloads 79
24732 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano

Authors: Guo Wenyu, Qu Youli

Abstract:

A practical and simple self-indexing data structure, Partitioned Elias-Fano (PEF) Compressed Suffix Arrays (CSA), is built in linear time for the CSA based on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the tested data in terms of compression ratio, count time, and locate time, except for evenly distributed data such as protein data. The experiments show that the distribution of φ affects the compression ratio more than the alphabet size does: unevenly distributed φ values yield a better compression effect, and the larger the hit count, the longer the count and locate times.
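The Elias-Fano representation underlying PEF can be sketched as follows. This is a plain (non-partitioned) encoding that produces bit strings rather than packed machine words, and it omits the rank/select machinery and the partitioning that a real PEF self-index builds on top; the input sequence is illustrative.

```python
import math

def elias_fano_encode(values, universe):
    """Encode a monotone non-decreasing integer sequence with plain Elias-Fano.
    Returns (low, high) as bit strings: each value contributes its l low bits
    to `low`, and the high parts are stored in `high` as a unary-coded bucket
    histogram (as many 1s per bucket as elements, terminated by a 0)."""
    n = len(values)
    l = max(0, int(math.floor(math.log2(universe / n))))  # low-bit width
    low = "".join(format(v & ((1 << l) - 1), "b").zfill(l) if l else ""
                  for v in values)
    high_counts = [0] * ((universe >> l) + 1)
    for v in values:
        high_counts[v >> l] += 1
    high = "".join("1" * c + "0" for c in high_counts)
    return low, high

low, high = elias_fano_encode([2, 3, 5, 7, 11, 13], 16)
```

With n = 6 values drawn from a universe of 16, the low-bit width is l = 1, so the encoding uses n·l + n + (16 >> l) + 1 bits in total, close to the information-theoretic bound for the sequence.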

Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA

Procedia PDF Downloads 252
24731 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet

Authors: Wanjiku Karanja

Abstract:

Facebook has monopoly power in the social networking market. It has grown and entrenched this monopoly power through the capture of its users’ data value chains. However, antitrust law’s consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook’s market dominance. These regulatory blind spots are magnified in Facebook’s proposed Diem cryptocurrency project and its Novi digital wallet. Novi, which is Diem’s digital identity component, would enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook’s large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook would collect through Novi would further entrench its market power, and the attendant lock-in effects of this project would be very difficult to reverse. Urgent regulatory action is therefore required to prevent this expansion of Facebook’s data resources and monopoly power. This research thus highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook’s data capture and control of its users’ data value chains on its market power. This inquiry is contextualized against Novi’s expansive effect on Facebook’s data value chains. It thus addresses the novel antitrust issues arising at the nexus of Facebook’s monopoly power and the privacy of its users’ data. It also explores the impact of platform design principles, specifically data portability and data interoperability, in mitigating Facebook’s anti-competitive practices. The study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, its annexation of the consumer data value chain, and its control of its users’ social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea for restoring competition in the social networking market: their success depends on the establishment of robust technical standards and regulatory frameworks.

Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook

Procedia PDF Downloads 123
24730 Application of Cube IQ Software to Optimize Heterogeneous Packing Products in Logistics Cargo and Minimize Transportation Cost

Authors: Muhammad Ganda Wiratama

Abstract:

XYZ company is an upstream chemical company that produces chemical products such as NaOH, HCl, NaClO, VCM, EDC, and PVC for downstream companies. The products are shipped by land using trucks and by sea using ships. Solid products such as flake caustic soda (F-NaOH) and PVC resin, in particular, are sold in loose bag packing and palletized packing (packed on pallets). The focus of this study is to increase the number of items that can be loaded on the company's logistics vehicles when pallet packaging is used. This is difficult because, with this packaging, the dimensions of the material to be loaded become larger and much heavier than with loose bag packing, which makes the arrangement and handling of materials in the mode of transportation more difficult. In particular, it is hard to load pallets of different packing volumes and dimensions in one truck or container. By using the Cube-IQ software, it is hoped that planning pallet stuffing activities will become easier, optimizing the existing space over the many possible combinations. In addition, the output of this software can also be used as a reference for operators in material handling, including the order and orientation of materials loaded in the truck or container. The more optimal the logistics cargo loading, the lower the transportation costs.
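Cube-IQ itself is proprietary, but the core trade-off it optimizes can be illustrated with a simple first-fit decreasing heuristic over pallet volumes. This is only a sketch: real load planning must also respect weight limits, orientation, and stacking constraints, and the volumes below are invented.

```python
def first_fit_decreasing(volumes, capacity):
    """Greedily assign item volumes to containers: sort items largest-first,
    then place each into the first container with enough remaining capacity.
    Returns a list of containers, each a list of item volumes."""
    containers = []  # each entry: [remaining_capacity, [items]]
    for vol in sorted(volumes, reverse=True):
        for c in containers:
            if c[0] >= vol:
                c[0] -= vol
                c[1].append(vol)
                break
        else:
            containers.append([capacity - vol, [vol]])
    return [items for _, items in containers]

# Illustrative pallet volumes (m^3) against a 33 m^3 container
loads = first_fit_decreasing([12, 10, 9, 8, 7, 6, 5], 33)
```

Here the seven pallets fit into two containers; commercial loaders like Cube-IQ extend this idea to full 3D placement with orientation and stacking rules.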

Keywords: loading activity, container loading, palletize product, simulation

Procedia PDF Downloads 298
24729 Pectin Degrading Enzyme: Entrapment of Pectinase Using Different Synthetic and Non-Synthetic Polymers for Continuous Degradation of Pectin Polymer

Authors: Haneef Ur Rehman, Afsheen Aman, Abdul Hameed Baloch, Shah Ali Ul Qader

Abstract:

Pectinases are a heterogeneous group of enzymes that catalyze the hydrolysis of pectic substances and have been widely used in the food and textile industries. In the current study, pectinase from B. licheniformis KIBGE-IB21 was immobilized within different polymers (calcium alginate beads, polyacrylamide gel, and agar-agar matrix) to enhance its catalytic properties. Polyacrylamide gel was found to be the most promising and gave the maximum (89%) immobilization yield, whereas calcium alginate beads gave the lowest yield, retaining only 46% activity. The reaction time for maximum pectinolytic activity increased from 5.0 to 10 minutes after immobilization. The temperature for maximum enzyme activity increased from 45 °C to 50 °C and 55 °C when the enzyme was immobilized within agar-agar and calcium alginate beads, respectively. The optimum pH of pectinase did not change when it was immobilized within polyacrylamide gel or calcium alginate beads, but in the case of agar-agar it shifted from pH 10 to pH 9.0. Thermal stability of pectinase was improved after immobilization, and the immobilized pectinase showed higher tolerance to different temperatures compared with the free enzyme. It can be concluded that entrapment is a simple, single-step, and promising procedure for immobilizing pectinase within different synthetic and non-synthetic polymers and enhancing its catalytic properties.

Keywords: pectinase, characterization, immobilization, polyacrylamide, agar-agar, calcium alginate beads

Procedia PDF Downloads 606
24728 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. 
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated, opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
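The central trade-off studied — filtering improves record quality but shrinks the sample — can be illustrated with a toy AUC computation via the Mann-Whitney U statistic, on invented presence/background scores, where dropping non-validated records changes the measured discrimination:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def auc_for(records):
    pos = [score for score, label, _ in records if label == 1]
    neg = [score for score, label, _ in records if label == 0]
    return auc(pos, neg)

# Invented records: (model score, presence label, passed post-entry validation)
records = [
    (0.9, 1, True), (0.8, 1, True), (0.7, 0, False),
    (0.6, 1, False), (0.4, 0, True), (0.2, 0, True),
]
auc_all = auc_for(records)
auc_validated = auc_for([r for r in records if r[2]])  # quality-filtered subset
```

In the study, the same comparison is made with Maxent models over 5.7 million records while controlling for sample size, so that the quality gain can be separated from the sample-size loss.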

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 203
24727 Data Quality Enhancement with String Length Distribution

Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda

Abstract:

Recently, the volume of collectable manufacturing data has been increasing rapidly. At the same time, large-scale recalls are becoming a serious social problem. Under these circumstances, there is an increasing need to prevent such recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data by traditional methods is too long to meet the requirements of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. This method learns character features, especially string length distributions, from Product IDs and Machine IDs in BOMs and asset lists. By applying the proposal to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, it can be expected that the requirement of quick defect analysis can be fulfilled.
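A minimal sketch of length-distribution-based classification in the spirit of SLDC: learn a normalized string-length distribution per field class, then assign a new string to the class under which its length is most likely. The ID formats below are invented, and the real SLDC method uses richer character features.

```python
from collections import Counter

def length_profile(samples):
    """Normalized distribution of string lengths for one field class."""
    counts = Counter(len(s) for s in samples)
    total = sum(counts.values())
    return {length: c / total for length, c in counts.items()}

def classify(string, profiles, floor=1e-6):
    """Assign a string to the field class whose length distribution
    makes its length most likely (with a small floor for unseen lengths)."""
    return max(profiles, key=lambda name: profiles[name].get(len(string), floor))

# Hypothetical training samples: 10-char product IDs vs 4-char machine IDs
profiles = {
    "product_id": length_profile(["PRD-000017", "PRD-104233", "PRD-930021"]),
    "machine_id": length_profile(["M001", "M017", "M402"]),
}
label = classify("PRD-551902", profiles)
```

Because only a length histogram is compared per string, classification is far cheaper than character-by-character pattern matching, which is the source of the reported speedup.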

Keywords: string classification, data quality, feature selection, probability distribution, string length

Procedia PDF Downloads 318
24726 Preparation and Characterization of a Nickel-Based Catalyst Supported by Silica Promoted by Cerium for the Methane Steam Reforming Reaction

Authors: Ali Zazi, Ouiza Cherifi

Abstract:

Natural gas currently represents a raw material of choice for the manufacture of a wide range of chemical products via synthesis gas; among the routes for transforming methane into synthesis gas is its oxidation by water vapor (steam reforming). This work focuses on the effect of cerium on a nickel-based catalyst supported on silica for the methane steam reforming reaction, varying certain reaction parameters: the reaction temperature, the H₂O/CH₄ ratio, and the flow rate of the reaction mixture (CH₄–H₂O). Two catalysts were prepared by impregnating Degussa silica with a solution of nickel nitrate and a solution of cerium nitrate [Ni(NO₃)₂·6H₂O and Ce(NO₃)₃·6H₂O] so as to obtain a nickel concentration of 1.5% for both catalysts, plus 1% cerium for the second catalyst. These catalysts were characterized by physical and chemical analysis techniques: the BET technique, atomic absorption, IR spectroscopy, and X-ray diffraction. These characterizations indicated that the nitrates had impregnated the silica, that the NiO and Ce₂O₃ phases are present, along with Ni° (after reaction), and that the BET surface area of the silica decreases without being strongly affected. The catalytic tests carried out on the two catalysts for the steam reforming reaction show that the addition of cerium to the nickel improves the catalytic performance of the nickel, and that this performance also depends on the reaction parameters, namely the temperature, the flow rate of the reaction mixture, and the H₂O/CH₄ ratio.

Keywords: heterogeneous catalysis, steam reforming, Methane, Nickel, Cerium, synthesis gas, hydrogen

Procedia PDF Downloads 165
24725 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data

Authors: Salam Khalifa, Naveed Ahmed

Abstract:

We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.

Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation

Procedia PDF Downloads 373
24724 Determining Abnormal Behaviors in UAV Robots for Trajectory Control in Teleoperation

Authors: Kiwon Yeom

Abstract:

Change points are abrupt variations in a data sequence. Detection of change points is useful in modeling, analyzing, and predicting time series in application areas such as robotics and teleoperation. In this paper, a change point is defined to be a discontinuity in one of its derivatives. This paper presents a reliable method for detecting discontinuities within a three-dimensional trajectory data. The problem of determining one or more discontinuities is considered in regular and irregular trajectory data from teleoperation. We examine the geometric detection algorithm and illustrate the use of the method on real data examples.
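The paper's geometric detection algorithm is more elaborate, but the definition it rests on — a change point as a discontinuity in one of the trajectory's derivatives — can be sketched with finite differences on a single trajectory coordinate (all numbers below are illustrative):

```python
def detect_change_points(samples, dt, threshold):
    """Flag sample indices where the finite-difference first derivative
    jumps abruptly, i.e. where a derivative discontinuity is likely."""
    velocity = [(samples[i + 1] - samples[i]) / dt
                for i in range(len(samples) - 1)]
    change_points = []
    for i in range(1, len(velocity)):
        if abs(velocity[i] - velocity[i - 1]) > threshold:
            change_points.append(i)  # index where the slope breaks
    return change_points

# One coordinate of a teleoperated trajectory: slope +1, then slope -2
track = [0, 1, 2, 3, 4, 2, 0, -2]
cps = detect_change_points(track, dt=1.0, threshold=1.5)
```

For irregularly sampled teleoperation data, the fixed `dt` would be replaced by per-interval time steps, and the threshold tuned against sensor noise.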

Keywords: change point, discontinuity, teleoperation, abrupt variation

Procedia PDF Downloads 167
24723 Multidimensional Item Response Theory Models for Practical Application in Large Tests Designed to Measure Multiple Constructs

Authors: Maria Fernanda Ordoñez Martinez, Alvaro Mauricio Montenegro

Abstract:

This work presents a statistical methodology for measuring and finding constructs in latent semantic analysis. The approach combines the qualities of factor analysis for binary data with the interpretations offered by item response theory. More precisely, we propose first reducing dimensionality with principal component analysis applied to the linguistic data and then producing axes for groups derived from a clustering analysis of the semantic data. This approach allows the user to give meaning to the resulting clusters and to uncover the real latent structure present in the data. The methodology is applied to a set of real semantic data, yielding impressive results in terms of coherence, speed, and precision.

Keywords: semantic analysis, factorial analysis, dimension reduction, penalized logistic regression

Procedia PDF Downloads 443
24722 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach

Authors: Dongkwon Han, Sangho Kim, Sunil Kwon

Abstract:

Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, the key challenge in unconventional gas has been the need for advanced production-forecasting approaches due to the uncertainty and complexity of fluid flow. In this study, an artificial neural network (ANN) model integrating machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells in the Eagle Ford shale basin was used for training and testing the ANN model. Input data related to hydraulic fracturing, well completion, and shale gas productivity were selected, and the output is cumulative production. The performance of the ANN using all data sets, clustering, and variable importance (VI) models was compared using the mean absolute percentage error (MAPE). The MAPEs of the ANN model using all data sets, clustering, and VI were 44.22%, 10.08% (cluster 1), 5.26% (cluster 2), 6.35% (cluster 3), 32.23% (ANN VI), and 23.19% (SVM VI), respectively. The results showed that the pre-trained ANN model provides more accurate results than the ANN model using all data sets.
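The MAPE metric used for the comparison is straightforward to compute; the sketch below contrasts two hypothetical model variants on invented cumulative-production values (the study's actual variants are trained ANNs with and without clustering):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, the metric used to compare
    the forecasting model variants."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical cumulative-production values from two model variants
actual  = [100.0, 250.0, 400.0]
model_a = [130.0, 230.0, 460.0]  # e.g. an ANN trained on all wells at once
model_b = [105.0, 245.0, 410.0]  # e.g. an ANN trained per well cluster

mape_a = mape(actual, model_a)
mape_b = mape(actual, model_b)
```

A lower MAPE means predictions deviate less, in relative terms, from observed production — which is how the per-cluster models in the study outperform the single model trained on all data.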

Keywords: unconventional gas, artificial neural network, machine learning, clustering, variables importance

Procedia PDF Downloads 196
24721 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change is transforming all aspects of society. While the expansion of renewable energies proceeds, industry has not been convinced by general studies about the potential of demand-side management to reinforce smart grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support for industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of the strategic decision problem of integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of data modeling, analysis, simulation, and optimization steps. This procedure helps to quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.

Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation

Procedia PDF Downloads 435
24720 Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization

Authors: K. Umbleja, M. Ichino, H. Yaguchi

Abstract:

In this paper, we propose a coloring method for multivariate data visualization using parallel coordinates, based on dissimilarity and tree structure information gathered during hierarchical clustering. The proposed method extends proximity-based coloring, which suffers from a few undesired side effects if the hierarchical tree is not balanced. We describe the algorithm for assigning colors based on dissimilarity information, show the application of the proposed method on three commonly used datasets, and compare the results with proximity-based coloring. We found our proposed method to be especially beneficial for symbolic data visualization, where many individual objects have already been aggregated into a single symbolic object.
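The central idea — spacing colors according to the dissimilarity between neighboring objects in the dendrogram leaf order, so that similar objects receive similar colors — can be sketched as follows. The paper's algorithm also exploits the tree structure itself; this toy version uses only pairwise dissimilarities along an assumed leaf ordering.

```python
def dissimilarity_colors(leaf_order, dissimilarity):
    """Assign each object a hue in [0, 1], spaced by the dissimilarity
    between neighbors in the dendrogram leaf order, so that similar
    objects end up with similar hues."""
    gaps = [dissimilarity(leaf_order[i - 1], leaf_order[i])
            for i in range(1, len(leaf_order))]
    total = sum(gaps) or 1.0  # guard against an all-identical sequence
    hues, acc = {leaf_order[0]: 0.0}, 0.0
    for obj, gap in zip(leaf_order[1:], gaps):
        acc += gap
        hues[obj] = acc / total
    return hues

# Toy 1-D objects; dissimilarity is absolute difference
points = {"a": 0.0, "b": 0.1, "c": 0.9, "d": 1.0}
hues = dissimilarity_colors(["a", "b", "c", "d"],
                            lambda x, y: abs(points[x] - points[y]))
```

Here "a" and "b" receive nearly the same hue while "c" sits far from them on the color scale, mirroring their dissimilarity rather than their mere position in the ordering, which is what plain proximity-based coloring would use.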

Keywords: data visualization, dissimilarity-based coloring, proximity-based coloring, symbolic data

Procedia PDF Downloads 170
24719 Teachers’ Perceptions of the Negative Impact of Tobephobia on Their Emotions and Job Satisfaction

Authors: Prakash Singh

Abstract:

The aim of this study was to investigate the extent of teachers’ experiences of tobephobia (TBP) in their heterogeneous classrooms and the impact this had on their emotions and job satisfaction. The expansive and continuously changing demands for quality and equal education for all students in educational organisations with limited resources mean that the negative effects of TBP cannot simply be dismissed as non-existent in the educational environment. As this quantitative study reveals, teachers who dislike their job, hold low expectations, lack motivation in their workplace, and are pessimistic end up with low self-esteem. When there is pessimism in the workplace, employees’ self-esteem will inevitably be low, as pointed out by 97.1% of the respondents in this study. Self-esteem is a reliable indicator of whether employees are happy in their jobs, and the majority of the respondents in this study agreed that their experiences of TBP negatively impacted their self-esteem. Hence, this exploratory study strongly indicates that productivity in the workplace is directly linked to employees’ expectations, self-confidence, and self-esteem. It is therefore inconceivable for teachers to be productive in their regular classrooms if their genuine professional concerns, anxieties, and curriculum challenges are not adequately addressed. This empirical study contributes to our knowledge of TBP because it clearly outlines some of the teaching problems that we grapple with and constantly experience in our schools in this century. It is therefore imperative that the tobephobic experiences of teachers are not merely documented but appropriately addressed, with relevant action by every stakeholder associated with education, so that our teachers’ emotions and job satisfaction needs are fully taken care of.

Keywords: demotivated teachers' pessimism, low expectations of teachers' job satisfaction, self-esteem, tobephobia

Procedia PDF Downloads 233
24718 The Impact of Data Science on Geography: A Review

Authors: Roberto Machado

Abstract:

We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.

Keywords: data science, geography, systematic review, optimization algorithms, supervised learning

Procedia PDF Downloads 30
24717 Developing Structured Sizing Systems for Manufacturing Ready-Made Garments of Indian Females Using Decision Tree-Based Data Mining

Authors: Hina Kausher, Sangita Srivastava

Abstract:

In India, there is no standard, systematic sizing approach for producing ready-made garments; garment manufacturing companies use their own size tables, created by modifying international sizing charts. The purpose of this study is to tabulate anthropometric data covering the variety of figure proportions in both height and girth. Data on 3,000 subjects were collected through an anthropometric survey of females aged 16 to 80 years from several states of India, to produce a sizing system suitable for clothing manufacture and retailing. These data were used for the statistical analysis of body measurements, the formulation of sizing systems, and body measurement tables. The factor analysis technique was used to filter the control body dimensions from a large number of variables, and decision tree-based data mining was used to cluster the data. A standard and structured sizing system can facilitate pattern grading and garment production; moreover, it can improve buying ratios and size allocations to retail segments.
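At each node, a decision-tree clustering of anthropometric data chooses the split that best reduces within-group variation of a body dimension. Below is a minimal sketch of such a single split search, on invented height/girth values in centimetres; a full sizing system would apply this recursively over the control dimensions selected by factor analysis.

```python
def best_split(heights, girths):
    """Find the height threshold that best separates girth values,
    by maximizing the reduction in within-group variance - the kind
    of split a decision-tree-based sizing system makes at each node."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    pairs = sorted(zip(heights, girths))
    all_girths = [g for _, g in pairs]
    base = variance(all_girths)
    best = (None, -1.0)  # (threshold, variance reduction)
    for i in range(1, len(pairs)):
        left, right = all_girths[:i], all_girths[i:]
        weighted = (len(left) * variance(left)
                    + len(right) * variance(right)) / len(pairs)
        gain = base - weighted
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        if gain > best[1]:
            best = (threshold, gain)
    return best[0]

# Hypothetical measurements: shorter figures cluster at smaller girths
threshold = best_split([150, 152, 155, 168, 170, 172],
                       [78, 80, 79, 96, 98, 97])
```

The returned threshold (here 161.5 cm) separates the two natural girth groups, which would then become candidate size classes.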

Keywords: anthropometric data, data mining, decision tree, garments manufacturing, sizing systems, ready-made garments

Procedia PDF Downloads 134
24716 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief operations. To be effective, however, they need to be demand-driven and supported by adequate data for prioritization; without such data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians seeking to understand the nature and extent of disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of these models needs improvement, which can be achieved using remote sensing data from UAVs (unmanned aerial vehicles) or satellite imagery, both of which come with certain limitations. This research addresses the need for a framework that combines data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit through semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions, following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to obtain immediately accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 379
24715 Facility Data Model as Integration and Interoperability Platform

Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes

Abstract:

Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems; in particular, they allow increased usage of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach to such an integrative facility data model, based on ontology modelling concepts, is presented. The complete ontology development process is described, starting from input data acquisition, through ontology concept definition, to ontology concept population. First, a core facility ontology was developed, representing the generic facility infrastructure through the common facility concepts relevant from a facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was then extended and populated. For the development of the full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system was analyzed as well.

Keywords: airport ontology, energy management, facility data model, ontology modeling

Procedia PDF Downloads 448
24714 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing CKD complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory-based data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
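The central claim, that a forest-style model on non-laboratory features alone can discriminate risk, can be illustrated with a toy random forest of depth-1 trees on synthetic data. Everything here is an assumption for illustration: the synthetic label rule, feature ranges, and forest size have no connection to the study's cohort or its trained Random Forest/XGBoost models.

```python
# Toy sketch: a tiny "random forest" (bagged decision stumps) trained only on
# hypothetical non-laboratory features (age, BMI, waist circumference).
import random

random.seed(0)

def make_patient():
    age = random.uniform(20, 80)
    bmi = random.uniform(18, 40)
    waist = random.uniform(60, 130)
    # Hypothetical label rule: risk grows with age and waist circumference
    label = 1 if age / 80 + waist / 130 > 1.2 else 0
    return [age, bmi, waist], label

data = [make_patient() for _ in range(400)]

def train_stump(sample):
    """Best 'predict 1 if feature > threshold' rule on a bootstrap sample."""
    best = (0.0, 0, 0.0)
    for f in range(3):
        for xs, _ in sample[::20]:        # sparse candidate thresholds
            t = xs[f]
            acc = sum((x[f] > t) == bool(y) for x, y in sample) / len(sample)
            if acc > best[0]:
                best = (acc, f, t)
    return best[1], best[2]

# Bagging: each stump sees its own bootstrap resample of the data
forest = [train_stump([random.choice(data) for _ in data]) for _ in range(15)]

def predict(x):
    votes = sum(x[f] > t for f, t in forest)
    return 1 if 2 * votes >= len(forest) else 0

acc = sum(predict(x) == y for x, y in data) / len(data)
print("training accuracy:", round(acc, 2))
```

Note that BMI is deliberately irrelevant in the synthetic label rule, so the bootstrapped stumps concentrate on age and waist, loosely mirroring how variable selection via the Cox analyses narrows the feature set in the study.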

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 105
24713 Assessing the Resilience to Economic Shocks of the Households in Bistekville 2, Quezon City, Philippines

Authors: Maria Elisa B. Manuel

Abstract:

The Philippine housing sector is facing challenges from the massive housing backlog and the persistent cycle of relocation, resettlement, and return of informal settler families to the cities, owing to the inaccessibility of basic necessities and opportunities in past off-city housing projects. Bistekville 2 was established as a model socialized housing project, utilizing government partnerships with private developers and individuals in the country's first in-city and on-site resettlement effort. The study examined the residents' resilience to idiosyncratic economic shocks by analyzing their vulnerabilities, assets, and coping strategies. It formulated an economic resilience framework to identify how these factors interact to build a household's capacity to adapt positively to sudden expenses. The framework is supplemented with a scale that indicates a household's proximity to resilience, identifying through its indicators whether the household is at the subsistence, coping, adaptive, or transformative level. Survey interviews covering the components identified by the framework were conducted with 91 households from Bistekville 2, and the data were processed with qualitative and quantitative methods. The study found that the households are highly vulnerable due to their family composition and other conditions such as unhealthy loans and inconsistent amortization payments. Along with their high vulnerability, the households have inadequate strategies to anticipate shocks and primarily react to shocks after they occur. This leads to the conclusion that the households are not resilient to idiosyncratic economic shocks and are still at the coping level.

Keywords: idiosyncratic economic shocks, socialized housing, economic resilience, economic vulnerability, adaptive capacity

Procedia PDF Downloads 151
24712 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, with the aim of avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies, such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness of fit. The datasets were imported in their entirety from the UK traffic department with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help prevent avoidable accidents. Since the data are expected to grow continuously over time, this work primarily proposes a new framework model that can be trained, adapt itself to new data, and make accurate predictions. The work also sheds some light on the use of SVM methodology for text classifiers built from the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology, which make it appropriate for this kind of research.
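A linear SVM of the kind the abstract relies on can be sketched by minimizing the L2-regularized hinge loss with stochastic sub-gradient descent. The data below are synthetic, and the two features (scaled speed limit and vehicle count) and the "severe vs. slight" labeling rule are assumptions for illustration, not the UK dataset.

```python
# Illustrative linear SVM via stochastic sub-gradient descent on hinge loss,
# on synthetic two-feature accident records (labels +1 severe / -1 slight).
import random

random.seed(1)

def sample_record():
    speed = random.uniform(20, 70) / 70        # speed limit, scaled to [0, 1]
    vehicles = random.uniform(1, 6) / 6        # vehicles involved, scaled
    y = 1 if speed + vehicles > 1.1 else -1    # hypothetical severity rule
    return [speed, vehicles], y

train = [sample_record() for _ in range(500)]

w, b, eta, lam = [0.0, 0.0], 0.0, 0.5, 0.001
for _ in range(200):                           # epochs
    for x, y in train:
        margin = y * (w[0] * x[0] + w[1] * x[1] + b)
        w = [wi * (1 - eta * lam) for wi in w]         # L2 weight decay
        if margin < 1:                                 # hinge sub-gradient
            w = [wi + eta * y * xi for wi, xi in zip(w, x)]
            b += eta * y

acc = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1) == y
          for x, y in train) / len(train)
print("training accuracy:", round(acc, 2))
```

Because the synthetic labels are linearly separable, the learned hyperplane classifies the training set almost perfectly; real accident data would of course be noisier and typically calls for a soft-margin or kernelized SVM.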

Keywords: support vector machines (SVM), machine learning (ML), department for transport (DfT)

Procedia PDF Downloads 274
24711 Preserving Digital Arabic Text Integrity Using Blockchain Technology

Authors: Zineb Touati Hamad, Mohamed Ridda Laouar, Issam Bendib

Abstract:

With today's massive development of technology, the Arabic language has gained a prominent position among the languages most used for writing articles, expressing opinions, and citation on many websites, despite its sensitivity in terms of structure, language skills, diacritics, writing methods, etc. In the context of the spread of the Arabic language, the Holy Quran represents the most prevalent Arabic text today in many applications and websites, whether for citation purposes or for reading and learning rituals. Quranic verses/surahs are published quickly and at no cost, which raises great concern about protecting the content from tampering and alteration. To protect the content of texts from distortion, it is currently necessary to refer to the original database and conduct a comparison to extract the percentage of distortion. The disadvantage of this method is that it takes time, and there is no guarantee of the integrity of the database itself, as it belongs to one central party. Blockchain technology today represents the best way to maintain immutable content. A blockchain is a distributed database that stores information in blocks linked to each other cryptographically, so that any modification of a block can be easily detected. To exploit these advantages, this paper seeks to justify the use of this technique for preserving the integrity of change-sensitive Arabic texts by building a decentralized framework to authenticate and verify the integrity of the digital Quranic verses/surahs spread on websites.
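The core mechanism, hash-linked blocks whose chain breaks on any modification, can be sketched in a few lines. This is a minimal illustration of the principle only (the verse strings are placeholders, and the paper's decentralized framework would add distribution and consensus on top of such a chain):

```python
# Minimal hash-chain sketch: altering any stored text invalidates every
# later link, so tampering is detectable without a trusted central copy.
import hashlib

GENESIS = "0" * 64

def block_hash(prev_hash, text):
    return hashlib.sha256((prev_hash + text).encode("utf-8")).hexdigest()

def build_chain(verses):
    chain, prev = [], GENESIS
    for v in verses:
        h = block_hash(prev, v)
        chain.append({"text": v, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    prev = GENESIS
    for blk in chain:
        if blk["prev"] != prev or block_hash(prev, blk["text"]) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["verse-1", "verse-2", "verse-3"])
print(verify_chain(chain))            # True: untampered chain
chain[1]["text"] = "verse-2-altered"
print(verify_chain(chain))            # False: tampering detected
```

Verification is a single linear pass over the chain, which is the time advantage over the pairwise comparison against a central reference database that the abstract criticizes.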

Keywords: arabic text, authentication, blockchain, integrity, quran, verification

Procedia PDF Downloads 164
24710 A Relational Data Base for Radiation Therapy

Authors: Raffaele Danilo Esposito, Domingo Planes Meseguer, Maria Del Pilar Dorado Rodriguez

Abstract:

As far as we know, no commercial solution is yet available that allows managing, openly and configurably according to user needs, the huge amount of data generated in a modern Radiation Oncology Department. Currently available information management systems focus mainly on Record & Verify and clinical data, and only to a small extent on physical data, which results in a partial and limited use of the information actually available. In the present work, we describe the implementation at our department of a centralized information management system based on a web server. Our system manages both information generated during patient planning and treatment and information of general interest for the whole department (i.e., treatment protocols, quality assurance protocols, etc.). Our objective is to be able to analyze all the available data in a simple and efficient way and thus obtain quantitative evaluations of our treatments, allowing us to improve our workflow and protocols. To this end, we have implemented a relational database that lets us use all the available information in a practical and efficient way. As always, we use only license-free software.
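A relational schema of the kind described can be sketched with SQLite (itself license-free). The table and column names below are illustrative guesses at a plausible design, not the department's actual schema.

```python
# Hedged sketch: plans and QA results in a relational schema, queried
# jointly for a quantitative evaluation. Schema names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE plan     (id INTEGER PRIMARY KEY,
                       patient_id INTEGER REFERENCES patient(id),
                       technique TEXT, prescribed_dose_gy REAL);
CREATE TABLE qa_check (id INTEGER PRIMARY KEY,
                       plan_id INTEGER REFERENCES plan(id),
                       protocol TEXT, passed INTEGER);
""")
con.execute("INSERT INTO patient VALUES (1, 'anonymized')")
con.execute("INSERT INTO plan VALUES (1, 1, 'VMAT', 60.0)")
con.execute("INSERT INTO qa_check VALUES (1, 1, 'gamma 3%/3mm', 1)")

# Treatment and QA data answered by a single join, instead of living in
# separate Record & Verify and physics silos
row = con.execute("""
    SELECT p.technique, p.prescribed_dose_gy, q.passed
    FROM plan p JOIN qa_check q ON q.plan_id = p.id
    WHERE p.patient_id = 1
""").fetchone()
print(row)  # ('VMAT', 60.0, 1)
```

The point of the relational design is exactly this kind of cross-cutting query: once planning, treatment, and QA records share keys, department-wide quantitative evaluations reduce to joins.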

Keywords: information management system, radiation oncology, medical physics, free software

Procedia PDF Downloads 242
24709 A Study of Safety of Data Storage Devices of Graduate Students at Suan Sunandha Rajabhat University

Authors: Komol Phaisarn, Natcha Wattanaprapa

Abstract:

This is a survey research study with the objective of studying the safety of data storage devices used by graduate students of academic year 2013 at Suan Sunandha Rajabhat University. Data were collected with a questionnaire on the safety of data storage devices according to the CIA principle (confidentiality, integrity, availability). A sample of 81 was drawn from the population by purposive sampling. The results show that most of the graduate students of academic year 2013 at Suan Sunandha Rajabhat University use handy drives (USB flash drives) to store their data and that the safety level of the devices is good.

Keywords: security, safety, storage devices, graduate students

Procedia PDF Downloads 353
24708 Comprehensive Investigation of Solving Analytical of Nonlinear Differential Equations at Chemical Reactions to Design of Reactors by New Method “AGM”

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza khalili, Sara Akbari, Davood Domiri Ganji

Abstract:

In this paper, our aim is to demonstrate the accuracy, capability, and power of a new approach in solving the complicated nonlinear differential equations of chemical reactions in catalytic reactors (heterogeneous reactions). Our purpose is to enhance the ability to solve such nonlinear differential equations in chemical engineering and similar fields with a simple and innovative approach entitled 'Akbari-Ganji's Method' (AGM). We solve and investigate many examples of nonlinear differential equations of chemical reactions. Chemical reactors with energy change (non-isothermal), in the two configurations of mixed and plug flow, are studied separately, and the nonlinear differential equations obtained from the reaction behavior in these systems are solved by the new method. In practice, reactions with energy change (heating or cooling) have an important effect on the design and operation of reactors. The nonlinear dependence of the reaction rate on temperature makes it difficult to reach the optimal operating conditions for maximum conversion and results in complex reactor operation. In this case, the set of differential equations governing the reactors is obtained from the simultaneous solution of the mass and energy balances, coupling the changes in temperature and concentration.
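The abstract does not detail AGM itself, so as a point of reference, here is a classical numerical treatment (fourth-order Runge-Kutta) of one of the simplest governing equations such methods target: a first-order reaction A → B in an isothermal plug-flow reactor, dC/dV = -(k/Q)·C. The rate constant, flow rate, and feed concentration are made-up values; the exact solution C(V) = C₀·exp(-kV/Q) serves as the check an analytical method like AGM would also be compared against.

```python
# Baseline RK4 integration of a plug-flow reactor balance dC/dV = -(k/Q)*C,
# checked against the known exact solution C(V) = C0 * exp(-k*V/Q).
# Parameter values are illustrative, not from the paper.
import math

k, Q, C0 = 0.5, 2.0, 1.0   # rate const (1/s), flow (L/s), feed conc (mol/L)

def rk4(f, y, x_end, n=1000):
    """Integrate y' = f(x, y) from 0 to x_end with n RK4 steps."""
    h, x = x_end / n, 0.0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

numeric = rk4(lambda V, C: -(k / Q) * C, C0, 10.0)   # outlet conc. at V = 10 L
exact = C0 * math.exp(-k * 10.0 / Q)
print(abs(numeric - exact) < 1e-6)   # True: RK4 reproduces the exact solution
```

The non-isothermal cases treated in the paper couple this mass balance to an energy balance, which is precisely where the rate's nonlinear temperature dependence makes purely analytical solutions hard and motivates methods such as AGM.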

Keywords: new method (AGM), nonlinear differential equation, tubular and mixed reactors, catalyst bed

Procedia PDF Downloads 383
24707 Cationic Surfactants Influence on the Fouling Phenomenon Control in Ultrafiltration of Latex Contaminated Water and Wastewater

Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi

Abstract:

The goal of the present study was to minimize the ultrafiltration fouling of latex effluent using Cetyltrimethyl ammonium bromide (CTAB) as a cationic surfactant. Hydrophilic Polysulfone and Ultrafilic flat heterogeneous membranes, with MWCO of 60,000 and 100,000, respectively, as well as hydrophobic Polyvinylidene Difluoride with MWCO of 100,000, were used under a constant flow rate and cross-flow mode in ultrafiltration of latex solution. In addition, a Polycarbonate flat membrane with uniform pore size of 0.05 µm was also used. The effect of CTAB on the latex particle size distribution was investigated at different concentrations, various treatment times, and diverse agitation duration. The effects of CTAB on the zeta potential of latex particles and membrane surfaces were also investigated. The results obtained indicated that the particle size distribution of treated latex effluent showed noticeable shifts in the peaks toward a larger size range due to the aggregation of particles. As a consequence, the mass of fouling contributing to pore blocking and the irreversible fouling were significantly reduced. The optimum results occurred with the addition of CTAB at the critical micelle concentration of 0.36 g/L for 10 minutes with minimal agitation. Higher stirring rate had a negative effect on membrane fouling minimization.

Keywords: cationic surfactant, latex particles, membrane fouling, ultrafiltration, zeta potential

Procedia PDF Downloads 528
24706 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment

Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah

Abstract:

Data grid is a technology whose emergence has brought new challenges, such as the heterogeneity and availability of various geographically distributed resources, fast data access, minimized latency, and fault tolerance. Researchers interested in this technology address problems such as task scheduling, load balancing, and replication. Replication is an effective solution for achieving good performance in terms of data access and grid resource usage, and for better data availability at lower cost. In a system with replication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas so as to minimize the response cost of read and write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model that depends on several factors, such as bandwidth, data size, and storage nodes.
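A cost model of the shape described, driven by bandwidth, data size, and storage nodes, can be sketched as follows. The node names, parameters, and the specific cost formula are hypothetical stand-ins, not the paper's simulated model.

```python
# Toy replica-placement sketch: pick the node that minimizes a
# bandwidth/size-based response cost, subject to storage capacity.

nodes = {
    "site_A": {"bandwidth_mbps": 100, "free_storage_gb": 500},
    "site_B": {"bandwidth_mbps": 1000, "free_storage_gb": 50},
    "site_C": {"bandwidth_mbps": 10, "free_storage_gb": 2000},
}

def response_cost(node, file_gb, reads, writes):
    """Total transfer seconds for the expected reads and write propagations."""
    seconds_per_transfer = file_gb * 8000 / node["bandwidth_mbps"]  # GB -> Mb
    return (reads + writes) * seconds_per_transfer

def place_replica(nodes, file_gb, reads, writes):
    """Cheapest node that still has room for the file."""
    feasible = {n: p for n, p in nodes.items()
                if p["free_storage_gb"] >= file_gb}
    return min(feasible,
               key=lambda n: response_cost(feasible[n], file_gb, reads, writes))

# site_B is fastest but lacks space for a 100 GB file, so site_A wins
print(place_replica(nodes, file_gb=100, reads=50, writes=5))  # → site_A
```

A fuller model along the paper's lines would also charge write requests for coherence traffic to the other replicas, which is where the coherence protocol enters the cost.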

Keywords: response time, query, consistency, bandwidth, storage capacity, CERN

Procedia PDF Downloads 271