Search results for: link data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25669

24679 Strengthening Islamic Banking Customer Behavioral Intention through Value and Commitment

Authors: Mornay Roberts-Lombard

Abstract:

Consumers’ perceptions of value are crucial to ensuring their future commitment and behavioral intentions. As a result, service providers, such as Islamic banks, must provide their customers with products and services that are regarded as valuable, stimulating, collaborative, and competent. Therefore, the value provided to customers must meet or surpass their expectations, which can drive customers’ commitment (affective and calculative) and eventually favorably impact their future behavioral intentions. Consequently, Islamic banks in South Africa, as a growing African market, need to obtain a better understanding of the variables that impact Islamic banking customers’ value perceptions and how these impact their future behavioral intentions. Furthermore, it is necessary to investigate how customers’ perceived value perceptions impact their affective and calculative commitment and how the latter impact their future behavioral intentions. The purpose of this study is to bridge these gaps in knowledge, as the competitiveness of the Islamic banking industry in South Africa requires a deeper understanding of the aforementioned relationships. The study was exploratory and quantitative in nature, and data was collected from 250 Islamic banking customers using self-administered questionnaires. These banking customers resided in the Gauteng province of South Africa. Exploratory factor analysis, Pearson’s coefficient analysis, and multiple regression analysis were applied to measure the proposed hypotheses developed for the study. This research will aid Islamic banks in the country in potentially strengthening customers’ future commitment (affective and calculative) and positively impact their future behavioral intentions. The findings of the study established that service quality has a significant and positive impact on perceived value. Moreover, it was determined that perceived value has a favorable and considerable impact on affective and calculative commitment, while calculative commitment has a beneficial impact on behavioral intention. The research informs Islamic banks of the importance of service engagement in driving customer perceived value, which stimulates the future affective and calculative commitment of Islamic bank customers in an emerging market context. Finally, the study proposes guidelines for Islamic banks to develop an enhanced understanding of the factors that impact the perceived value-commitment-behavioral intention link in a competitive Islamic banking market in South Africa.

Keywords: perceived value, affective commitment, calculative commitment, behavioural intention

Procedia PDF Downloads 73
24678 Users’ Information Disclosure Determinants in Social Networking Sites: A Systematic Literature Review

Authors: Wajdan Al Malwi, Karen Renaud, Lewis Mackenzie

Abstract:

The privacy paradox describes a phenomenon whereby there is no connection between stated privacy concerns and privacy behaviours. We need to understand the underlying reasons for this paradox if we are to help users to preserve their privacy more effectively. In particular, the Social Networking System (SNS) domain offers a rich area of investigation due to the risks of unwise information disclosure decisions. Our study thus aims to untangle the complicated nature and underlying mechanisms of online privacy-related decisions in SNSs. In this paper, we report on the findings of a Systematic Literature Review (SLR) that revealed a number of factors that are likely to influence online privacy decisions. Our deductive analysis approach was informed by Communication Privacy Management (CPM) theory. We uncovered a lack of clarity around privacy attitudes and their link to behaviours, which makes it challenging to design privacy-protecting SNS platforms and to craft legislation to ensure that users’ privacy is preserved.

Keywords: privacy paradox, self-disclosure, privacy attitude, privacy behavior, social networking sites

Procedia PDF Downloads 151
24677 Sampled-Data Control for Fuel Cell Systems

Authors: H. Y. Jung, Ju H. Park, S. M. Lee

Abstract:

A sampled-data controller is presented for solid oxide fuel cell systems, which are expressed by a sector-bounded nonlinear model. Sector-bounded nonlinear systems consist of a linear dynamical system in feedback connection with a nonlinearity satisfying certain sector-type constraints. The sampled-data control scheme is also attractive because it can be implemented directly on a digital controller, and research effort devoted to sampled-data control systems has increased with the development of modern high-speed computers. The proposed control law is obtained by solving a convex problem subject to several linear matrix inequalities. Simulation results are given to show the effectiveness of the proposed design method.

Keywords: sampled-data control, fuel cell, linear matrix inequalities, nonlinear control

Procedia PDF Downloads 564
24676 How Western Donors Allocate Official Development Assistance: New Evidence From a Natural Language Processing Approach

Authors: Daniel Benson, Yundan Gong, Hannah Kirk

Abstract:

Advances in natural language processing techniques have increased data processing speeds and reduced the need for the cumbersome, manual data processing often required when preparing data from multilateral organizations for specific purposes. Using named entity recognition (NER) modeling and the Organisation for Economic Co-operation and Development (OECD) Creditor Reporting System database, we present the first geotagged dataset of OECD donor Official Development Assistance (ODA) projects on a global, subnational basis. The resulting data contain 52,086 ODA projects geocoded to subnational locations across 115 countries, worth a combined $87.9bn. This represents the first global, OECD donor ODA project database with geocoded projects. We use these new data to revisit old questions of how ‘well’ donors allocate ODA to the developing world. This understanding is imperative for policymakers seeking to improve ODA effectiveness.
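
As an illustration of the kind of NER-based geotagging pipeline described above, the sketch below uses spaCy (an assumed library choice; the abstract does not name its NER toolkit) to pull candidate place names out of free-text project descriptions before they would be matched against a subnational gazetteer. The example descriptions are hypothetical.

```python
# Minimal sketch of NER-based geotagging for aid-project descriptions.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

projects = [  # hypothetical CRS-style project descriptions
    "Rehabilitation of rural feeder roads in Mombasa County, Kenya.",
    "Primary health care support programme, Dhaka Division, Bangladesh.",
]

for text in projects:
    doc = nlp(text)
    # keep geopolitical entities and locations as geocoding candidates
    places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
    print(places)  # these strings would then be resolved against a subnational gazetteer
```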

Keywords: international aid, geocoding, subnational data, natural language processing, machine learning

Procedia PDF Downloads 73
24675 Implementation in Python of a Method to Transform One-Dimensional Signals in Graphs

Authors: Luis Andrey Fajardo Fajardo

Abstract:

We are immersed in complex systems: the human brain, galaxies, and snowflakes are all examples. One area of interest within complex systems is chaos theory, a field that offers ways of study beyond determinism and reductionism. Here, in conjunction with nonlinear DSP, chaos theory offers valuable techniques that establish a link between time series and complexity theory in terms of complex networks, so that signals can be explored from the viewpoint of graph theory. Methods have recently been proposed to transform time series into graphs, but no suitable Python implementation has been developed for signals extracted from chaotic or complex systems. We therefore propose a Python implementation of an existing method that transforms one-dimensional chaotic signals from the time domain to the graph domain, together with some measures that may reveal information not extracted in the time domain.
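
The abstract does not name the specific time-series-to-graph mapping that was implemented, so the sketch below uses the natural visibility graph, one widely used mapping of this kind, applied to a chaotic logistic-map signal. The function name and the simple O(n²) construction are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def natural_visibility_graph(series):
    """Map a 1-D signal to a graph: nodes are samples, edges join mutually
    'visible' samples (no intermediate sample blocks the straight line)."""
    n = len(series)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            # visibility criterion: every k between i and j lies below the line i-j
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                g.add_edge(i, j)
    return g

# example: logistic map in its chaotic regime
x = np.empty(200)
x[0] = 0.4
for t in range(199):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

g = natural_visibility_graph(x)
print(g.number_of_nodes(), g.number_of_edges())   # graph-domain measures start here
```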

Keywords: Python, complex systems, graph theory, dynamical systems

Procedia PDF Downloads 506
24674 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano

Authors: Guo Wenyu, Qu Youli

Abstract:

A practical and simple self-indexing data structure, the Partitioned Elias-Fano (PEF) Compressed Suffix Array (CSA), is built in linear time for a CSA based on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the tested data in terms of compression ratio and count and locate times, except for evenly distributed data such as the proteins dataset. The experiments show that the distribution of φ matters more for the compression ratio than the alphabet size: unevenly distributed φ values compress better, and the larger the number of hits, the longer the count and locate times.
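
As a toy illustration of the structures involved, the sketch below builds a suffix array naively and derives a φ-style array from it. The exact definition of φ used by the PEF-CSA may differ, so treat this as an assumption, and note that real implementations use linear-time construction and Elias-Fano encoding rather than plain Python lists.

```python
def suffix_array(text):
    # naive O(n^2 log n) construction; real PEF-CSA builders work in linear time
    return sorted(range(len(text)), key=lambda i: text[i:])

def phi_like_array(sa):
    # phi_like[i]: suffix-array rank of the suffix starting one character to the
    # left (in the text) of the suffix ranked i, wrapping around at position 0
    rank = {start: pos for pos, start in enumerate(sa)}
    n = len(sa)
    return [rank[(sa[i] - 1) % n] for i in range(n)]

text = "mississippi$"
sa = suffix_array(text)
print("SA :", sa)
print("phi:", phi_like_array(sa))
# regularities in phi (e.g. long increasing runs) are what partitioned
# Elias-Fano style encodings exploit for compression
```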

Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA

Procedia PDF Downloads 249
24673 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet

Authors: Wanjiku Karanja

Abstract:

Facebook has monopoly power in the social networking market. It has grown and entrenched its monopoly power through the capture of its users’ data value chains. However, antitrust law’s consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook’s market dominance. These regulatory blind spots are augmented in Facebook’s proposed Diem cryptocurrency project and its Novi digital wallet. Novi, which is Diem’s digital identity component, shall enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook’s large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook shall collect through Novi shall further entrench Facebook's market power. As such, the attendant lock-in effects of this project shall be very difficult to reverse. Urgent regulatory action is therefore required to prevent this expansion of Facebook’s data resources and monopoly power. This research thus highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook’s data capture and control of its users’ data value chains on its market power. This inquiry is contextualized against Novi’s expansive effect on Facebook’s data value chains. It thus addresses the novel antitrust issues arising at the nexus of Facebook’s monopoly power and the privacy of its users’ data. It also explores the impact of platform design principles, specifically data interoperability and data portability, in mitigating Facebook’s anti-competitive practices. As such, this study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, annexation of the consumer data value chain, and control of its users’ social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea to restoring competition in the social networking market. Their success depends on the establishment of robust technical standards and regulatory frameworks.

Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook

Procedia PDF Downloads 121
24672 Implementation of MPPT Algorithm for Grid Connected PV Module with IC and P&O Method

Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati

Abstract:

In recent years, the use of renewable energy resources instead of polluting fossil fuels and other forms has increased. Photovoltaic generation is becoming increasingly important as a renewable resource since, compared with other alternatives used in power applications, it incurs no fuel costs, produces no pollution or noise, and requires little maintenance. In this paper, the Perturb and Observe and Incremental Conductance methods are used to improve energy conversion efficiency under different environmental conditions. PI controllers are used to easily control the DC-link voltage and the active and reactive currents. The whole system is simulated under standard climatic conditions (1000 W/m², 25 °C) in MATLAB, and the irradiance is varied from 1000 W/m² to 300 W/m². The use of PI controllers makes it easy to directly control the power of the grid-connected PV system. Finally, the validity of the system is verified through simulations in the MATLAB/Simulink environment.
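
A minimal sketch of the Perturb and Observe rule referred to above is shown below. The variable names, the fixed perturbation step, and the voltage-reference interface are illustrative assumptions, not the authors' Simulink implementation.

```python
def perturb_and_observe(v, i, v_prev, p_prev, v_ref, step=0.5):
    """One iteration of the Perturb & Observe MPPT rule.

    v, i   : present PV voltage and current samples
    v_prev : voltage at the previous sample
    p_prev : power at the previous sample
    v_ref  : voltage reference currently handed to the DC-DC converter
    Returns the updated (v_ref, v, p) to carry into the next iteration.
    """
    p = v * i
    dp, dv = p - p_prev, v - v_prev
    if dp != 0:
        # keep perturbing in the direction that increased power, reverse otherwise
        if (dp > 0) == (dv > 0):
            v_ref += step
        else:
            v_ref -= step
    return v_ref, v, p

# illustrative use with made-up samples
v_ref, v_prev, p_prev = 30.0, 0.0, 0.0
for v, i in [(30.0, 7.8), (30.5, 7.7), (31.0, 7.5)]:
    v_ref, v_prev, p_prev = perturb_and_observe(v, i, v_prev, p_prev, v_ref)
    print(round(v_ref, 2))
```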

Keywords: incremental conductance algorithm, modeling of PV panel, perturb and observe algorithm, photovoltaic system and simulation results

Procedia PDF Downloads 505
24671 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
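
A minimal sketch of the kind of with/without-filtering comparison described above, using synthetic presence/absence data and an ordinary logistic model standing in for Maxent. The record attributes, the filter, and the model choice are illustrative assumptions; the study's actual workflow controls for sample size and uses Maxent on presence-background data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def auc_with_filter(X, y, keep_mask):
    """Fit on (optionally filtered) training records, score AUC on a common hold-out set."""
    X_tr, X_te, y_tr, y_te, m_tr, _ = train_test_split(
        X, y, keep_mask, test_size=0.3, random_state=0, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(X_tr[m_tr], y_tr[m_tr])
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                                # environmental covariates (synthetic)
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)   # presence / absence (synthetic)
validated = rng.random(2000) > 0.4                            # e.g. records passing post-entry validation

print("all records:", round(auc_with_filter(X, y, np.ones(2000, dtype=bool)), 3))
print("filtered   :", round(auc_with_filter(X, y, validated), 3))
```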

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 196
24670 Data Quality Enhancement with String Length Distribution

Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda

Abstract:

Recently, the amount of collectable manufacturing data has been increasing rapidly. At the same time, large-scale product recalls have become a serious social problem. Under these circumstances, there is a growing need to prevent such recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data with traditional methods is too long to meet the requirements of quick defect analysis. We therefore present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. The method learns character features, especially the string length distribution, from Product IDs and Machine IDs in BOMs and asset lists. By applying the proposal to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, the requirement of quick defect analysis can be expected to be fulfilled.
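
The abstract does not publish the exact features SLDC learns, so the sketch below illustrates only the general idea of classifying a column of strings by its length distribution; the profile names, reference strings, and the L1 distance are assumptions.

```python
import numpy as np
from collections import Counter

def length_distribution(strings, max_len=32):
    """Normalised histogram of string lengths, used as a lightweight feature vector."""
    counts = Counter(min(len(s), max_len) for s in strings)
    hist = np.array([counts.get(k, 0) for k in range(max_len + 1)], dtype=float)
    return hist / hist.sum()

def classify_column(column, profiles):
    """Assign a column of strings to the reference profile with the closest length distribution."""
    h = length_distribution(column)
    return min(profiles, key=lambda name: np.abs(h - profiles[name]).sum())

profiles = {
    "product_id": length_distribution(["PRD-000123", "PRD-004567", "PRD-998877"]),
    "machine_id": length_distribution(["M01", "M17", "M02"]),
}
print(classify_column(["PRD-112233", "PRD-445566"], profiles))   # -> product_id
```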

Keywords: string classification, data quality, feature selection, probability distribution, string length

Procedia PDF Downloads 315
24669 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data

Authors: Salam Khalifa, Naveed Ahmed

Abstract:

We present a new method to reconstruct a temporally coherent 3D animation from single or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data, in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.

Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation

Procedia PDF Downloads 369
24668 Determining Abnormal Behaviors in UAV Robots for Trajectory Control in Teleoperation

Authors: Kiwon Yeom

Abstract:

Change points are abrupt variations in a data sequence. Detection of change points is useful in modeling, analyzing, and predicting time series in application areas such as robotics and teleoperation. In this paper, a change point is defined to be a discontinuity in one of the sequence's derivatives. This paper presents a reliable method for detecting discontinuities within three-dimensional trajectory data. The problem of determining one or more discontinuities is considered for regular and irregular trajectory data from teleoperation. We examine the geometric detection algorithm and illustrate the use of the method on real data examples.
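
The paper's geometric detection algorithm is not reproduced here; as a simple stand-in, the sketch below flags points where a finite-difference derivative of a 1-D trajectory component jumps abnormally. The z-score threshold and the np.gradient differencing are assumptions.

```python
import numpy as np

def derivative_discontinuities(t, x, order=1, z_thresh=4.0):
    """Flag samples where the chosen finite-difference derivative jumps abnormally;
    a change point here is a discontinuity in one of the trajectory's derivatives."""
    d = np.asarray(x, dtype=float)
    for _ in range(order):
        d = np.gradient(d, t)
    jumps = np.abs(np.diff(d))
    z = (jumps - jumps.mean()) / (jumps.std() + 1e-12)
    return np.where(z > z_thresh)[0] + 1

t = np.linspace(0.0, 10.0, 500)
x = np.where(t < 5.0, t, 2.0 * t - 5.0)   # slope changes at t = 5: first-derivative discontinuity
print(derivative_discontinuities(t, x, order=1))
```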

Keywords: change point, discontinuity, teleoperation, abrupt variation

Procedia PDF Downloads 162
24667 Epigenetic Modifying Potential of Dietary Spices: Link to Cure Complex Diseases

Authors: Jeena Gupta

Abstract:

In today’s world of pharmaceutical products, one should not forget the healing properties of inexpensive food materials, especially spices. They are known to possess hidden pharmaceutical ingredients, imparting anti-microbial, anti-oxidant, anti-inflammatory and anti-carcinogenic qualities. Furthermore, aberrant epigenetic regulatory mechanisms such as DNA methylation, histone modifications or altered microRNA expression patterns, which regulate gene expression without changing the DNA sequence, contribute significantly to the development of various diseases. Changing lifestyles and diets exert their effect by influencing these epigenetic mechanisms, which are thus the target of dietary phytochemicals. Bioactive plant components have been in use for ages, but their potential to reverse epigenetic alterations and prevent disease is yet to be explored. Spices are rich repositories of bioactive constituents, which give them their unique aroma and taste. Some spices, like curcuma and garlic, have been well evaluated for their epigenetic regulatory potential, but for others, it is largely unknown. We have evaluated the biological activity of phyto-active components of fennel, cardamom and fenugreek by in silico molecular modeling, in vitro and in vivo studies. Ligand-based similarity studies were conducted to identify structurally similar compounds and to understand their biological behavior. Database searching was performed using fenchone from fennel, sabinene from cardamom and protodioscin from fenugreek as query molecules in different small-molecule databases. The results of the database searching showed that these compounds have potential binding to different targets found in the Protein Data Bank. In addition to their epigenetic-modifying potential, in vitro studies demonstrated the antimicrobial, antifungal, antioxidant and cytoprotective effects of fenchone, sabinene and protodioscin. To the best of our knowledge, such studies facilitate target fishing and help map out the drug design and discovery process for the identification of novel therapeutics.

Keywords: epigenetics, spices, phytochemicals, fenchone

Procedia PDF Downloads 155
24666 Multidimensional Item Response Theory Models for Practical Application in Large Tests Designed to Measure Multiple Constructs

Authors: Maria Fernanda Ordoñez Martinez, Alvaro Mauricio Montenegro

Abstract:

This work presents a statistical methodology for measuring and identifying constructs in Latent Semantic Analysis. The approach combines the strengths of factor analysis for binary data with the interpretations available in Item Response Theory. More precisely, we propose first reducing dimensionality by applying Principal Component Analysis to the linguistic data and then producing group axes from a clustering analysis of the semantic data. This approach allows the user to give meaning to the previously found clusters and to uncover the real latent structure present in the data. The methodology is applied to a set of real semantic data, with impressive results in terms of coherence, speed and precision.
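
A minimal sketch of the dimension-reduction-then-clustering step described above, on synthetic binary response data. The number of components and clusters and the use of KMeans are illustrative assumptions; the full methodology also involves IRT interpretation and penalized logistic regression, which are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = (rng.random((500, 40)) < 0.3).astype(float)   # stand-in for a binary item-response matrix

# step 1: reduce the dimensionality of the (linguistic) response data
scores = PCA(n_components=5).fit_transform(X)

# step 2: group the reduced data with a clustering analysis
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(scores)
print(np.bincount(labels))   # size of each candidate construct group
```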

Keywords: semantic analysis, factorial analysis, dimension reduction, penalized logistic regression

Procedia PDF Downloads 440
24665 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach

Authors: Dongkwon Han, Sangho Kim, Sunil Kwon

Abstract:

Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, the key challenge in unconventional gas is the need for advanced approaches to production forecasting due to the uncertainty and complexity of fluid flow. In this study, an artificial neural network (ANN) model that integrates machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells from the Eagle Ford shale basin was used for testing and training the ANN model. Input data related to hydraulic fracturing, well completion and shale gas productivity were selected, and the output is cumulative production. The performance of the ANN using all data sets, clustering, and variable importance (VI) models was compared in terms of the mean absolute percentage error (MAPE). The MAPE obtained for the ANN using all data sets, clustering, and VI was 44.22%, 10.08% (cluster 1), 5.26% (cluster 2), 6.35% (cluster 3), 32.23% (ANN VI), and 23.19% (SVM VI), respectively. The results showed that the pre-trained ANN model provides more accurate results than the ANN model using all data sets.
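
A toy version of the workflow, training a small neural network on synthetic completion attributes and scoring it with MAPE. The feature set, network size, and data generation are assumptions; the study's actual inputs come from the 129 Eagle Ford wells.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(2)
X = rng.normal(size=(129, 6))          # synthetic fracturing / completion attributes
y = np.exp(1.0 + X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=129))   # cumulative production
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=2).fit(X_tr, y_tr)
print("MAPE: %.2f%%" % mape(y_te, ann.predict(X_te)))
```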

Keywords: unconventional gas, artificial neural network, machine learning, clustering, variables importance

Procedia PDF Downloads 192
24664 The Case for Reparations: Systemic Injustice and Human Rights in the United States

Authors: Journey Whitfield

Abstract:

This study investigates the United States' ongoing violation of Black Americans' fundamental human rights, as evidenced by mass incarceration, social injustice, and economic deprivation. It argues that the U.S. contravenes Article 9 of the International Covenant on Civil and Political Rights through policies that uphold systemic racism. The analysis dissects current practices within the criminal justice system, social welfare programs, and economic policy, uncovering the racially disparate impacts of seemingly race-neutral policies. This study establishes a clear lineage between past systems of oppression – slavery and Jim Crow – and present-day racial disparities, demonstrating their inextricable link. The thesis proposes that only a comprehensive reparations program for Black Americans can begin to redress these systemic injustices. This program must transcend mere financial compensation, demanding structural reforms within U.S. institutions to dismantle systemic racism and promote transformative justice. This study explores potential forms of reparations, drawing upon historical precedents, comparative case studies from other nations, and contemporary debates within political philosophy and legal studies. The research employs both qualitative and quantitative methods. Qualitative methods include historical analysis of legal frameworks and policy documents, as well as discourse analysis of political rhetoric. Quantitative methods involve statistical analysis of socioeconomic data and criminal justice outcomes to expose racial disparities. This study makes a significant contribution to the existing literature on reparations, human rights, and racial injustice in the United States. It offers a rigorous analysis of the enduring consequences of historical oppression and advocates for bold, justice-centered solutions.

Keywords: Black Americans, reparations, mass incarceration, racial injustice, human rights, united states

Procedia PDF Downloads 55
24663 Modal Analysis of Small Frames using High Order Timoshenko Beams

Authors: Chadi Azoury, Assad Kallassy, Pierre Rahme

Abstract:

In this paper, we consider the modal analysis of small frames. Firstly, we construct the 3D model using H8 elements and find the natural frequencies of the frame, focusing our attention on the modes in the XY plane. Secondly, we construct the 2D model (plane stress model) using Q4 elements. We concluded that the results of the two models are very close to each other. We then formulate the stiffness matrix and the mass matrix of the 3-noded Timoshenko beam, which is well suited to the thick, short beams in our case. Finally, we model the corners where the horizontal and vertical bars meet with a special matrix. The results of our new model (3-noded Timoshenko beams for the horizontal and vertical bars and a special corner element based on Q4 elements) are very satisfactory when performing the modal analysis.
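
The element matrices of the 3-noded Timoshenko beam are not reproduced here; the sketch below only illustrates the final modal-analysis step, solving the generalized eigenvalue problem K x = ω² M x for a toy two-degree-of-freedom system. The numerical values of K and M are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import eigh

# toy 2-DOF stand-in for the assembled frame matrices: K (stiffness), M (mass)
K = np.array([[4.0e6, -2.0e6],
              [-2.0e6,  2.0e6]])
M = np.diag([10.0, 5.0])

# modal analysis: solve K x = omega^2 M x for the natural frequencies
omega_sq, mode_shapes = eigh(K, M)
natural_freqs_hz = np.sqrt(omega_sq) / (2.0 * np.pi)
print(natural_freqs_hz)
```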

Keywords: corner element, high-order Timoshenko beam, Guyan reduction, modal analysis of frames, rigid link, shear locking, and short beams

Procedia PDF Downloads 313
24662 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change is causing change in all aspects of society. While the expansion of renewable energies proceeds, industry has not been convinced by general studies of the potential of demand-side management to reinforce smart-grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support for industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of a strategic decision problem: integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of data modeling, analysis, simulation and optimization steps. This procedure helps to quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart-grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.
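
As a toy illustration of the optimization step against volatile spot-market prices, the sketch below shifts a flexible production job into the cheapest hours. The price vector, energy figures, and the simple cheapest-slots rule are assumptions, not the article's actual model.

```python
import numpy as np

def schedule_flexible_load(prices, energy_per_slot_kwh, slots_needed):
    """Pick the cheapest time slots for a shiftable production job
    under a volatile, spot-market-like price signal."""
    cheapest = np.sort(np.argsort(prices)[:slots_needed])
    cost = energy_per_slot_kwh * prices[cheapest].sum()
    return cheapest, cost

hourly_prices = np.array([0.31, 0.29, 0.12, 0.08, 0.10, 0.25, 0.34, 0.30])  # EUR/kWh, made up
slots, cost = schedule_flexible_load(hourly_prices, energy_per_slot_kwh=50.0, slots_needed=3)
print("run in hours:", slots, "cost:", round(cost, 2))
```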

Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation

Procedia PDF Downloads 432
24661 Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization

Authors: K. Umbleja, M. Ichino, H. Yaguchi

Abstract:

In this paper, we propose a coloring method for multivariate data visualization using parallel coordinates, based on dissimilarity and tree structure information gathered during hierarchical clustering. The proposed method is an extension of proximity-based coloring, which suffers from a few undesired side effects if the hierarchical tree structure is not a balanced tree. We describe the algorithm for assigning colors based on dissimilarity information, show the application of the proposed method on three commonly used datasets, and compare the results with proximity-based coloring. We found our proposed method to be especially beneficial for symbolic data visualization, where many individual objects have already been aggregated into a single symbolic object.

Keywords: data visualization, dissimilarity-based coloring, proximity-based coloring, symbolic data

Procedia PDF Downloads 168
24660 The Impact of Data Science on Geography: A Review

Authors: Roberto Machado

Abstract:

We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.

Keywords: data science, geography, systematic review, optimization algorithms, supervised learning

Procedia PDF Downloads 23
24659 Developing Structured Sizing Systems for Manufacturing Ready-Made Garments of Indian Females Using Decision Tree-Based Data Mining

Authors: Hina Kausher, Sangita Srivastava

Abstract:

In India, there is a lack of a standard, systematic sizing approach for producing ready-made garments. Garment manufacturing companies use their own size tables, created by modifying international sizing charts for ready-made garments. The purpose of this study is to tabulate anthropometric data that cover the variety of figure proportions in both height and girth. Data on 3,000 females between the ages of 16 and 80 years were collected through an anthropometric survey undertaken in some states of India to produce a sizing system suitable for clothing manufacture and retailing. These data are used for the statistical analysis of body measurements, the formulation of sizing systems, and body measurement tables. The factor analysis technique is used to filter the control body dimensions from a large number of variables. Decision tree-based data mining is used to cluster the data. A standard, structured sizing system can facilitate pattern grading and garment production; moreover, it can improve buying ratios and upgrade size allocations to retail segments.
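
A minimal sketch of decision-tree-based grouping on synthetic anthropometric measurements, where the leaves of a shallow tree act as candidate size groups. The control dimensions, the generated data, and the tree depth are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(4)
n = 3000
height = rng.normal(158, 7, n)                      # cm
bust = rng.normal(88, 8, n) + 0.2 * (height - 158)  # cm, loosely correlated with height
hip = bust + rng.normal(8, 4, n)                    # cm
X = np.column_stack([height, hip])

# grow a shallow tree on the control dimensions; each leaf is a candidate size group
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=150).fit(X, bust)
print(export_text(tree, feature_names=["height_cm", "hip_cm"]))
```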

Keywords: anthropometric data, data mining, decision tree, garments manufacturing, sizing systems, ready-made garments

Procedia PDF Downloads 131
24658 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief operations. To be effective, however, they need to be demand-driven and supported by adequate data for prioritization; without such data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians in understanding the nature and extent of disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of these models needs improvement, which can be achieved using remote sensing data from UAVs (unmanned aerial vehicles) or satellite imagery, both of which come with certain limitations. This research addresses the need for a framework that combines data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit through semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to have immediate access to accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 374
24657 Damping Function and Dynamic Simulation of GUPFC Using IC-HS Algorithm

Authors: Galu Papy Yuma

Abstract:

This paper presents a new dynamic simulation of a power system consisting of four machines equipped with a Generalized Unified Power Flow Controller (GUPFC) to improve power system stability. The dynamic model of the GUPFC consists of one shunt converter and two series converters based on voltage source converters, together with a DC-link capacitor, installed in the power system. MATLAB/Simulink is used to set up the dynamic simulation of the GUPFC, and the power system is simulated in order to investigate the impact of the controller on power system oscillation damping and to show the reliability of the simulation program. The Improved Chaotic Harmony Search (IC-HS) algorithm is used to tune the controller parameters of the lead-lag compensation design. The results obtained by simulation show that the four-machine power system is suitable for stability analysis. The use of the GUPFC with the IC-HS algorithm provides excellent capability for fast damping of power system oscillations and greatly improves the dynamic stability of the power system.

Keywords: GUPFC, IC-HS algorithm, Matlab/Simulink, damping oscillation

Procedia PDF Downloads 445
24656 Facility Data Model as Integration and Interoperability Platform

Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes

Abstract:

Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems. In particular, this involves increased usage of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach for providing such an integrative facility data model, based on the ontology modelling concept, is presented. The complete ontology development process is described, from input data acquisition through ontology concept definition to ontology concept population. First, a core facility ontology was developed, representing the generic facility infrastructure comprised of the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was first extended and then populated. For the development of the full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system is analyzed as well.
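
A minimal sketch of the core-ontology, extension, and population steps using rdflib. The namespace, class names, and the locatedIn property are hypothetical; the actual airport ontologies are not reproduced from the paper.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

FAC = Namespace("http://example.org/facility#")   # hypothetical namespace
g = Graph()
g.bind("fac", FAC)

# core facility ontology: generic classes and one relation
for cls in ("Facility", "Zone", "EnergyMeter", "HVACUnit"):
    g.add((FAC[cls], RDF.type, OWL.Class))
g.add((FAC.locatedIn, RDF.type, OWL.ObjectProperty))

# extension for a specific infrastructure: a Terminal is a kind of Zone
g.add((FAC.Terminal, RDF.type, OWL.Class))
g.add((FAC.Terminal, RDFS.subClassOf, FAC.Zone))

# population: one terminal and one meter installed in it
g.add((FAC.Terminal1, RDF.type, FAC.Terminal))
g.add((FAC.Terminal1, RDFS.label, Literal("Terminal 1")))
g.add((FAC.T1_Meter_01, RDF.type, FAC.EnergyMeter))
g.add((FAC.T1_Meter_01, FAC.locatedIn, FAC.Terminal1))

print(g.serialize(format="turtle"))
```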

Keywords: airport ontology, energy management, facility data model, ontology modeling

Procedia PDF Downloads 445
24655 Merchants’ Attitudes towards Tourism Development in Mahane Yehuda Market: A Case Study

Authors: Rotem Mashkov, Noam Shoval

Abstract:

In an age when the tourist’s gaze is increasingly focused on the daily lives of locals, local food markets are being rediscovered. Traditional urban markets have succeeded in reinventing themselves as spaces for consumption, recreation, and culture, enabling authentic experiences and interpersonal interactions with the local culture. Alongside this, the pressure of tourism development may result in commercialization and retail gentrification to the point of losing the sense of local identity. The issue of finding a balance between tourism development and the preservation of unique local features is at the heart of this study and is tested using the case of the Mahane Yehuda market in Jerusalem. The research question, how merchants respond to tourism development in the Mahane Yehuda food market, focuses on local traders, a group of players who are usually absent from research, although they influence tourism development as well as being influenced by it. Three main research methods were integrated into this study. The first two, a survey of articles and comparative mapping of the business mix, were used to characterize the changes in the Mahane Yehuda market, both in its image and physically. The third method, in-depth interviews with merchants, was used to examine the traders’ attitudes and responses to tourism development. The findings indicate that there has been a turnaround in the market’s image over the past decade and a half. Additionally, there has been a significant physical change in the business mix, reflected in a decline of 15% in the number of stalls selling food products and delicacies. The interview data on the traders’ attitudes towards tourism development were inconclusive; there were disagreements among the traders about the economic contribution of tourism development in relation to their dependence on the tourism industry. However, there was a consensus on the need for authentic elements in the marketplace. The findings of the study also indicate a strong link between the merchants’ response to tourism development and their stall ownership status, as merchants could exercise their position in various ways depending on the type of ownership.

Keywords: business mix, Jerusalem, local food markets, Mahane Yehuda market, merchants’ attitude, ownership status, retail gentrification, tourism development, traditional urban markets

Procedia PDF Downloads 133
24654 Annotation Ontology for Semantic Web Development

Authors: Hadeel Al Obaidy, Amani Al Heela

Abstract:

The main purpose of this paper is to examine the concept of the Semantic Web and the role that ontology and semantic annotation play in the development of Semantic Web services. The paper focuses on the Semantic Web infrastructure, illustrating how ontology and annotation work to provide the learning capabilities for building content semantically. To improve software productivity and quality, the paper applies approaches, notations and techniques offered by software engineering. It proposes a conceptual model for developing Semantic Web services for the infrastructure of a web information retrieval system for digital libraries. The developed system uses ontology and annotation to build a knowledge-based system that defines and links the meaning of web content in order to retrieve information for users’ queries. The results are more relevant through keyword and ontology rule expansion, which more accurately satisfies the requested information. The level of result accuracy is enhanced since the semantically analyzed query works with the conceptual architecture of the proposed system.

Keywords: semantic web services, software engineering, semantic library, knowledge representation, ontology

Procedia PDF Downloads 169
24653 Video-On-Demand QoE Evaluation across Different Age-Groups and Its Significance for Network Capacity

Authors: Mujtaba Roshan, John A. Schormans

Abstract:

Quality of Experience (QoE) drives churn in the broadband networks industry, and good QoE plays a large part in the retention of customers. QoE is known to be affected by the Quality of Service (QoS) factors packet loss probability (PLP), delay, and delay jitter caused by the network. Earlier results have shown that the relationship between these QoS factors and QoE is non-linear and may vary from application to application. We use the network emulator Netem as the basis for experimentation and evaluate how QoE varies as we change the emulated QoS metrics. Focusing on Video-on-Demand, we discovered that the reported QoE may differ widely for users of different age groups, and that the most demanding age group (the youngest) can require an order of magnitude lower PLP to achieve the same QoE as the most widely studied age group of users. We then used a bottleneck TCP model to evaluate the capacity cost of achieving an order of magnitude decrease in PLP, and found that (almost always) a 3-fold increase in link capacity was required.
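
The abstract does not state which bottleneck TCP model was used; the sketch below uses the well-known Mathis et al. steady-state approximation, under which the required capacity scales with 1/sqrt(PLP), which reproduces the roughly 3-fold capacity increase reported for a 10x PLP reduction. The MSS and RTT values are arbitrary assumptions.

```python
from math import sqrt

MSS_BYTES = 1460          # assumed maximum segment size
RTT_S = 0.05              # assumed round-trip time in seconds
C = sqrt(3.0 / 2.0)       # constant in the Mathis et al. steady-state TCP model

def tcp_throughput_bps(plp):
    """Per-flow TCP throughput ~ (MSS / RTT) * C / sqrt(PLP), in bits per second."""
    return 8 * MSS_BYTES / RTT_S * C / sqrt(plp)

# if long-lived TCP flows fill the bottleneck, the capacity needed per flow scales as 1/sqrt(PLP)
for plp in (1e-2, 1e-3):
    print(f"PLP={plp:g}  capacity per flow ~ {tcp_throughput_bps(plp) / 1e6:.1f} Mbit/s")

# a 10x reduction in PLP therefore implies a sqrt(10) ~ 3.16x capacity increase,
# consistent with the roughly 3-fold figure reported in the abstract
print(tcp_throughput_bps(1e-3) / tcp_throughput_bps(1e-2))
```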

Keywords: network capacity, packet loss probability, quality of experience, quality of service

Procedia PDF Downloads 272
24652 The Impact of Artificial Intelligence on Construction Engineering

Authors: Mina Fawzy Ishak Gad Elsaid

Abstract:

There is a strong link between technology and development. Architecture as a profession is a call to service and to society; perhaps next to soldiers, engineers are patriots. However, unlike soldiers, they always remain employees of society under all circumstances. Despite the construction profession's role in society, there appears to be a lack of respect for it when some projects fail. This paper focuses on the need to improve development engineering performance in developing countries, using engineering education in Nigerian universities as a tool for discussion. A purposeful survey, interviews, and focus group discussions were conducted with one hundred and twenty (120) prominent companies in Nigeria. The subject is approached through a large number of projects that the companies have been involved in from the planning stage, some of which have been completed and have even reached the maintenance and monitoring stage. It was found that certain factors beyond the control of engineers are hindering the full development and success of the construction sector in developing countries. The main culprit is corruption, and its eradication would put the country on a stable path to developing construction and combating poverty.

Keywords: decision analysis, industrial engineering, direct vs. indirect values, engineering management

Procedia PDF Downloads 33
24651 The Impact of Artificial Intelligence on Construction Engineering

Authors: Haneen Joseph Habib Yeldoka

Abstract:

There is a strong link between technology and development. Architecture as a profession is a call to service and to society; perhaps next to soldiers, engineers are patriots. However, unlike soldiers, they always remain employees of society under all circumstances. Despite the construction profession's role in society, there appears to be a lack of respect for it when some projects fail. This paper focuses on the need to improve development engineering performance in developing countries, using engineering education in Nigerian universities as a tool for discussion. A purposeful survey, interviews, and focus group discussions were conducted with one hundred and twenty (120) prominent companies in Nigeria. The subject is approached through a large number of projects that the companies have been involved in from the planning stage, some of which have been completed and have even reached the maintenance and monitoring stage. It was found that certain factors beyond the control of engineers are hindering the full development and success of the construction sector in developing countries. The main culprit is corruption, and its eradication would put the country on a stable path to developing construction and combating poverty.

Keywords: decision analysis, industrial engineering, direct vs. indirect values, engineering management

Procedia PDF Downloads 31
24650 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of chronic kidney disease (CKD). This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic values for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory-based data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
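
A minimal sketch of the non-laboratory-only model, training a random forest on synthetic age, sex, BMI, and waist-circumference data. The data generation, feature effects, and hyperparameters are assumptions; the real models are trained on the 19,429-adult cohort at different follow-up points.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X_nonlab = np.column_stack([                 # non-laboratory predictors only
    rng.integers(20, 85, n),                 # age (years)
    rng.integers(0, 2, n),                   # sex (0/1)
    rng.normal(25, 4, n),                    # body mass index
    rng.normal(85, 12, n),                   # waist circumference (cm)
])
risk = 0.04 * (X_nonlab[:, 0] - 20) + 0.3 * (X_nonlab[:, 2] - 25) + rng.normal(0, 3, n)
y = (risk > np.quantile(risk, 0.85)).astype(int)     # synthetic CKD outcome label

X_tr, X_te, y_tr, y_te = train_test_split(X_nonlab, y, test_size=0.3, random_state=3, stratify=y)
rf = RandomForestClassifier(n_estimators=300, random_state=3).fit(X_tr, y_tr)
print("AUC, non-laboratory model:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))
```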

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 103