Search results for: missing data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25447


24577 The Face Sync-Smart Attendance

Authors: Bekkem Chakradhar Reddy, Y. Soni Priya, Mathivanan G., L. K. Joshila Grace, N. Srinivasan, Asha P.

Abstract:

Currently, there are many problems related to marking attendance in schools, offices, and other places, and organizations tasked with collecting daily attendance data have numerous concerns. There are different ways to mark attendance; the most commonly used method is to collect the data manually by calling out each student, which is slow and problematic. Many new technologies now help to mark attendance automatically, reducing manual work and recording the data. We propose to implement attendance marking using such technologies. We have implemented a system based on face identification and face analysis. The project gathers face images and analyzes the data, using deep learning algorithms to recognize faces effectively. The attendance record is then forwarded to the host by e-mail. The project was implemented in Python; the Python libraries used are CV2, Face Recognition, and Smtplib.
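A minimal sketch of the kind of pipeline described above, assuming a webcam, a folder of reference photos, and hypothetical SMTP credentials; it illustrates the CV2 / face_recognition / smtplib combination, not the authors' exact implementation.

```python
# Illustrative sketch only: recognize known faces in a webcam frame and e-mail
# the attendance list. Paths, names, and SMTP details are placeholders.
import cv2
import face_recognition
import smtplib
from email.message import EmailMessage

# Load reference images and compute one encoding per enrolled student.
known = {"alice": "students/alice.jpg", "bob": "students/bob.jpg"}
names, encodings = [], []
for name, path in known.items():
    encs = face_recognition.face_encodings(face_recognition.load_image_file(path))
    if encs:
        names.append(name)
        encodings.append(encs[0])

# Grab a single frame from the default camera and find faces in it.
ok, frame = cv2.VideoCapture(0).read()
present = set()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for enc in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(encodings, enc)
        if True in matches:
            present.add(names[matches.index(True)])

# Forward the attendance record to the host by mail (placeholder credentials).
msg = EmailMessage()
msg["Subject"], msg["From"], msg["To"] = "Attendance", "bot@example.com", "host@example.com"
msg.set_content("Present: " + ", ".join(sorted(present)))
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("bot@example.com", "password")  # hypothetical credentials
    server.send_message(msg)
```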

Keywords: python, deep learning, face recognition, CV2, smtplib, dlib

Procedia PDF Downloads 58
24576 Geographical Data Visualization Using Video Games Technologies

Authors: Nizar Karim Uribe-Orihuela, Fernando Brambila-Paz, Ivette Caldelas, Rodrigo Montufar-Chaveznava

Abstract:

In this paper, we present the advances corresponding to the implementation of a strategy to visualize geographical data using a Software Development Kit (SDK) for video games. We use multispectral images from the Landsat 7 platform and Laser Imaging Detection and Ranging (LIDAR) data from the National Institute of Statistics and Geography of Mexico (INEGI). We select a place of interest from the Landsat imagery and apply some processing to the image (rotation, atmospheric correction, and enhancement). The resulting image serves as a grayscale color map to be fused with the LIDAR data, which were selected using the same coordinates as the Landsat scene. The LIDAR data are translated to 8-bit raw data. Both images are fused in software developed with Unity (an SDK employed for video games). The resulting model is then displayed and can be explored by moving around. The idea is that the software could be used by students of geology and geophysics at the Engineering School of the National University of Mexico: they would download the software and the images corresponding to a geological place of interest to a smartphone and could virtually visit and explore the site with a virtual reality visor such as Google Cardboard.
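As an illustration of the fusion step described above, a small sketch that pairs a grayscale Landsat-derived texture with co-registered LIDAR elevations rescaled to 8-bit raw values, assuming both rasters are already cropped to the same grid; the array shapes and file names are hypothetical.

```python
# Minimal sketch: rescale co-registered LIDAR elevations to 8-bit and pair them
# with a grayscale Landsat texture, as a height map + texture for a game engine.
import numpy as np

def to_uint8(array):
    """Linearly rescale any raster to the 0-255 range."""
    lo, hi = float(array.min()), float(array.max())
    return ((array - lo) / (hi - lo + 1e-12) * 255.0).astype(np.uint8)

# Hypothetical inputs: both rasters already cropped to the same coordinates.
landsat_gray = np.random.rand(512, 512)      # stands in for the processed Landsat scene
lidar_elev = np.random.rand(512, 512) * 300  # stands in for LIDAR elevations in metres

texture = to_uint8(landsat_gray)   # grayscale color map
heightmap = to_uint8(lidar_elev)   # 8-bit raw elevation, as described above

# A headerless binary file of this kind can be imported as a raw heightmap.
heightmap.tofile("heightmap.raw")
```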

Keywords: virtual reality, interactive technologies, geographical data visualization, video games technologies, educational material

Procedia PDF Downloads 247
24575 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks

Authors: Chad Brown

Abstract:

This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with the sample size.
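A minimal PyTorch sketch of the kind of architecture described above (a fully connected feedforward ReLU network whose width and depth grow with the sample size), fitted by least squares as a sieve estimator; the specific growth rates below are placeholders, not the rates derived in the paper.

```python
# Sketch of a DNN sieve estimator for nonparametric regression: the network's
# width and depth are functions of the sample size n. Growth rates are illustrative.
import math
import torch
import torch.nn as nn

def sieve_mlp(n, d_in):
    """Fully connected ReLU network whose size grows (slowly) with n."""
    width = max(8, int(n ** 0.4))          # placeholder growth rate
    depth = max(2, int(math.log(n)))       # placeholder growth rate
    layers, prev = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(prev, width), nn.ReLU()]
        prev = width
    layers.append(nn.Linear(prev, 1))
    return nn.Sequential(*layers)

# Toy regression data (kept i.i.d. here for brevity; the paper's conditions cover dependence).
n, d = 2000, 1
x = torch.randn(n, d)
y = torch.sin(x).sum(dim=1, keepdim=True) + 0.1 * torch.randn(n, 1)

model = sieve_mlp(n, d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                        # least-squares sieve fit
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```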

Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes

Procedia PDF Downloads 44
24574 Development of Risk Management System for Urban Railroad Underground Structures and Surrounding Ground

Authors: Y. K. Park, B. K. Kim, J. W. Lee, S. J. Lee

Abstract:

To assess the risk of urban railroad underground structures and the surrounding ground, including station inflow, we collect basic data through engineering measurement, exploration, and surveys, and derive the risk through appropriate analysis and individual assessments. Basic data are obtained from fiber-optic sensors, MEMS sensors, water quantity/quality sensors, a tunnel scanner, ground-penetrating radar, and a lightweight deflectometer, and each reading is evaluated against its proper (threshold) value. Based on these data, we analyze the risk level of urban railroad underground structures and the surrounding ground, and we develop a risk management system to manage these data efficiently and to provide a convenient interface for data input and output.
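A small sketch of the threshold check described above, where each sensor reading is compared with a proper value and flagged for the risk analysis; the sensor names and limits are hypothetical.

```python
# Illustrative threshold screening for monitoring data; names and limits are hypothetical.
THRESHOLDS = {
    "fiber_optic_strain_ue": 500.0,      # microstrain
    "mems_tilt_deg": 0.5,
    "station_inflow_m3_per_h": 20.0,
    "gpr_cavity_depth_m": 0.3,
}

def screen(readings):
    """Return the sensors whose readings exceed their proper values."""
    return {k: v for k, v in readings.items() if v > THRESHOLDS.get(k, float("inf"))}

exceeded = screen({"mems_tilt_deg": 0.7, "station_inflow_m3_per_h": 12.0})
print(exceeded)  # {'mems_tilt_deg': 0.7}
```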

Keywords: urban railroad, underground structures, ground subsidence, station inflow, risk

Procedia PDF Downloads 336
24573 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Intelligent transportation systems are essential for building smarter cities. Machine-learning-based transportation prediction is a highly promising approach because it makes invisible aspects of the system visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions, so bus delays occur frequently. To overcome this problem, a prediction model is presented that finds patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built on real-time bus data gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations from traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model was designed with the machine learning tool RapidMiner Studio, and tests for bus delay prediction were conducted. This research presents experiments to increase the prediction accuracy for bus headway by analyzing urban big data. Big data analysis is important for predicting the future and for finding correlations by processing huge amounts of data. Therefore, based on this analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.
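A hedged sketch of the kind of supervised delay model described above; the authors built their prototype in RapidMiner Studio, so scikit-learn stands in here, and the traffic, weather, and bus-status feature names are placeholders.

```python
# Illustrative bus-delay regression on traffic / weather / bus-status features.
# scikit-learn stands in for the RapidMiner Studio prototype described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(5, 60, n),     # road speed (km/h)
    rng.integers(0, 2, n),     # rain indicator
    rng.uniform(0, 1, n),      # station crowding index
    rng.uniform(0, 30, n),     # scheduled headway (min)
])
# Synthetic target: slower roads and rain increase delay (illustrative only).
y = 10 - 0.1 * X[:, 0] + 3 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (min):", mean_absolute_error(y_te, model.predict(X_te)))
```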

Keywords: big data, machine learning, smart city, social cost, transportation network

Procedia PDF Downloads 262
24572 The European Pharmacy Market: The Density and its Influencing Factors

Authors: Selina Schwaabe

Abstract:

Community pharmacies deliver high-quality health care and are responsible for medication safety. During the pandemic, accessibility to the nearest pharmacy became more essential for getting vaccinated against Covid-19 and for obtaining medical aid. The government's goal is to ensure nationwide, reachable, and affordable medical health care services through pharmacies. Therefore, the density of community pharmacies matters. Overall, the density of community pharmacies is fluctuating, with slightly decreasing tendencies in some countries. So far, the literature has shown that changes in the system affect prices and density. However, a European overview of the development of the density of community pharmacies and its triggers is still missing. This research is essential to counteract decreasing density, which results in a lack of professional health care through pharmacies. The analysis focuses on liberal versus regulated market structures, mail-order prescription drug regulation, and the consequences of third-party ownership. In a panel analysis, the relative influence of these measures is examined across 27 European countries over the last 21 years. In addition, the paper examines seven countries in depth, selected for the substantial variance in their pharmacy systems: Germany, Austria, Portugal, Denmark, Sweden, Finland, and Poland. Overall, the results show that regulated pharmacy markets have over 10.75 pharmacies per 100,000 inhabitants more than liberal markets. Further, allowing mail-order prescription drugs decreases the density by 17.98 pharmacies per 100,000 inhabitants. Countries allowing third-party ownership have 7.67 pharmacies per 100,000 inhabitants more. The results are statistically significant at the 0.001 level. Based on this analysis, the paper recommends regulated pharmacy markets with a ban on mail-order prescription drugs and with third-party ownership allowed, to support nationwide medical health care through community pharmacies.
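A minimal sketch, assuming a long-format country-year panel with the hypothetical variable names below, of the kind of fixed-effects regression such density estimates could come from; it is illustrative, not the authors' specification.

```python
# Illustrative two-way fixed-effects panel regression of pharmacy density on
# market regulation, mail-order prescription drugs, and third-party ownership.
# Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("pharmacy_panel.csv")  # columns: country, year, density,
                                           # regulated, mail_order_rx, third_party_owner

model = smf.ols(
    "density ~ regulated + mail_order_rx + third_party_owner + C(country) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})

print(model.summary())  # coefficients in pharmacies per 100,000 inhabitants
```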

Keywords: community pharmacy, market conditions, pharmacy, pharmacy market, pharmacy lobby, prescription, e-prescription, ownership structures

Procedia PDF Downloads 133
24571 Integrated Model for Enhancing Data Security Performance in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing has been an important and promising field in the recent decade. Cloud computing allows sharing resources, services, and information among people across the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used; the password is protected by SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
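A minimal sketch of the two building blocks named above, Blowfish for confidentiality and SHA-2 for integrity and password hashing, using PyCryptodome and hashlib; key handling and padding are simplified for illustration and are not the paper's protocol.

```python
# Illustrative use of Blowfish (confidentiality) and SHA-256 (integrity, password
# hashing). Key handling is simplified; not production-grade code.
import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                      # shared Blowfish key
data = b"file contents to upload"

# Encrypt with Blowfish in CBC mode (block size is 8 bytes).
cipher = Blowfish.new(key, Blowfish.MODE_CBC)
ciphertext = cipher.iv + cipher.encrypt(pad(data, Blowfish.block_size))

# Integrity check: SHA-256 digest of the plaintext travels alongside the ciphertext.
digest = hashlib.sha256(data).hexdigest()

# Decrypt and verify.
iv, body = ciphertext[:8], ciphertext[8:]
plain = unpad(Blowfish.new(key, Blowfish.MODE_CBC, iv).decrypt(body), Blowfish.block_size)
assert hashlib.sha256(plain).hexdigest() == digest

# Password storage: keep only a one-way SHA-256 hash (salted in practice).
stored_pw_hash = hashlib.sha256(b"user-password").hexdigest()
```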

Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish

Procedia PDF Downloads 478
24570 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks

Authors: Wang Yichen, Haruka Yamashita

Abstract:

In recent years, in the field of sports, decision making, such as selecting game members and game strategy based on the analysis of accumulated sports data, has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze data using various statistical techniques to win games. However, it is difficult to analyze the game data for each play, such as ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for "determining the optimal lineup composition" using real-time play data, which is considered to be difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are whether the lineup should be changed and whether a Small Ball lineup should be adopted; we therefore propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and the scoring data can be considered as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a score prediction model. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected the accumulated NBA data from the 2019-2020 season. We then apply the method to actual basketball play data to verify the reliability of the proposed model.
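A hedged PyTorch sketch of the combined model described above: a recurrent network reads a per-play scoring series while a feedforward network encodes the current game situation, and the two representations are joined to predict score; dimensions and features are placeholders, not the paper's configuration.

```python
# Illustrative RNN + NN score predictor: an LSTM summarizes a per-play scoring
# time series, an MLP encodes the current situation, and a head combines both.
import torch
import torch.nn as nn

class LineupScoreModel(nn.Module):
    def __init__(self, series_dim=1, situation_dim=8, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(series_dim, hidden, batch_first=True)
        self.situation = nn.Sequential(nn.Linear(situation_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, series, situation):
        _, (h, _) = self.rnn(series)              # last hidden state of the LSTM
        z = torch.cat([h[-1], self.situation(situation)], dim=1)
        return self.head(z)                       # predicted score contribution

# Hypothetical batch: 16 lineups, 40 plays of scoring data, 8 situation features.
series = torch.randn(16, 40, 1)
situation = torch.randn(16, 8)
target = torch.randn(16, 1)

model = LineupScoreModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(series, situation), target)
loss.backward()
opt.step()
```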

Keywords: recurrent neural network, players lineup, basketball data, decision making model

Procedia PDF Downloads 134
24569 Challenges in Multi-Cloud Storage Systems for Mobile Devices

Authors: Rajeev Kumar Bedi, Jaswinder Singh, Sunil Kumar Gupta

Abstract:

The demand for cloud storage is increasing because users want continuous access to their data. Cloud storage has revolutionized the way users access their data. Many cloud storage service providers are available, such as Dropbox and Google Drive, offering limited free storage; for extra storage, users have to pay, which becomes a burden on them. To avoid the issue of limited free storage, the concept of multi-cloud storage was introduced. In this paper, we discuss the limitations of existing multi-cloud storage systems for mobile devices.

Keywords: cloud storage, data privacy, data security, multi cloud storage, mobile devices

Procedia PDF Downloads 700
24568 Talent Management through Integration of Talent Value Chain and Human Capital Analytics Approaches

Authors: Wuttigrai Ngamsirijit

Abstract:

Talent management in today’s modern organizations has become data-driven due to the demand for objective human resource decision making and the development of analytics technologies. HR managers have faced several obstacles in exploiting data and information to make effective talent management decisions. These include process-based data and records; insufficient human-capital measures and metrics; a lack of capability for strategic data modeling; and the time consumed in adding up numbers and making decisions. This paper proposes a framework for talent management through the integration of talent value chain and human capital analytics approaches. It encompasses key data, measures, and metrics regarding strategic talent management decisions along the organizational and talent value chain. Moreover, specific predictive and prescriptive models incorporating these data and information are recommended to help managers understand the state of talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies.

Keywords: decision making, human capital analytics, talent management, talent value chain

Procedia PDF Downloads 188
24567 Real-Time Neuroimaging for Rehabilitation of Stroke Patients

Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge

Abstract:

Rehabilitation of stroke patients is dominated by classical physiotherapy. An active field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option for treating the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the neighboring parts of the injured cortex. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the missing natural feedback from a movement of the affected limb may be replaced by a synthetic feedback based on the motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set are not allowed to be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded 3D activity map. This approach mitigates the influence of eyelid artifacts on the localization performance. The first results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match the findings from visual EEG analysis. Frequent eyelid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. The first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
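A simplified sketch of the channel selection and detection logic described above: alpha-band power is computed per channel, the strongest motor-set channel is compared with a threshold, and a detection is rejected if channels outside the motor set show comparable alpha power; the sampling rate, thresholds, and non-motor channel layout are placeholders.

```python
# Simplified mu-rhythm detector: alpha-band power per channel, pick the strongest
# motor-set channel, apply a threshold, and reject posterior-alpha look-alikes.
import numpy as np
from scipy.signal import welch

FS = 250                         # sampling rate in Hz (placeholder)
CHANNELS = ["C3", "CZ", "C4", "CP1", "CP2", "CP3", "CP4", "CP5", "CP6", "O1", "O2", "PZ"]
MOTOR_SET = {"C3", "CZ", "C4", "CP1", "CP2", "CP3", "CP4", "CP5", "CP6"}

def alpha_power(eeg):
    """Mean 8-13 Hz power per channel for a (channels x samples) window."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 13)
    return psd[:, band].mean(axis=1)

def detect_mu(eeg, threshold=2.0, margin=1.5):
    power = alpha_power(eeg)
    motor_idx = [i for i, ch in enumerate(CHANNELS) if ch in MOTOR_SET]
    other_idx = [i for i, ch in enumerate(CHANNELS) if ch not in MOTOR_SET]
    best = motor_idx[int(np.argmax(power[motor_idx]))]
    # Detect only if the motor channel exceeds the threshold and clearly
    # dominates the non-motor channels (guards against posterior alpha).
    if power[best] > threshold and power[best] > margin * power[other_idx].max():
        return CHANNELS[best]
    return None

window = np.random.randn(len(CHANNELS), 2 * FS)   # 2-second hypothetical window
print(detect_mu(window))
```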

Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation

Procedia PDF Downloads 388
24566 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract the data structure, grouping similar data objects in the same cluster and dissimilar objects in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (FCM) is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership value that lies in the interval [0, 1]. In FCM clustering, the membership degrees are constrained by the condition that the sum of a data object’s memberships over all clusters must equal one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization has been applied to the fuzzy c-means clustering technique; it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
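A compact numpy sketch of an entropy-regularized fuzzy clustering iteration in the spirit described above: memberships minimize the within-cluster distances plus an entropy penalty and sum to one per object. It illustrates the idea rather than reproducing the paper's exact relative-entropy formulation.

```python
# Entropy-regularized fuzzy clustering sketch (maximum-entropy style memberships).
# lam controls the fuzziness; the exact relative-entropy term in the paper may differ.
import numpy as np

def entropy_fcm(X, k=3, lam=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Squared distances from every object to every center: shape (n, k).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Memberships: softmax of -d2 / lam, so each row sums to one.
        u = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / lam)
        u /= u.sum(axis=1, keepdims=True)
        # Centers: membership-weighted means of the data objects.
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return u, centers

X = np.vstack([np.random.randn(100, 2) + m for m in ([0, 0], [5, 5], [0, 5])])
u, centers = entropy_fcm(X, k=3)
print(np.round(centers, 2))
```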

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 259
24565 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis

Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci

Abstract:

Concerns about energy security, energy prices, and climate change have led scientific research towards sustainable alternatives to fossil fuels, namely renewable energy sources coupled with hydrogen as an energy vector and carbon capture and conversion technologies. Among the technologies investigated in the last decades, biomass gasification has acquired great interest owing to the possibility of obtaining low-cost, CO₂-negative hydrogen production from a large variety of widely available organic wastes. Upstream and downstream treatments have been studied in order to maximize the hydrogen yield, reduce the content of organic and inorganic contaminants below the admissible levels for the coupled technologies, and capture and convert carbon dioxide. However, studies that analyse the whole process made up of all those technologies are still missing. In order to fill this gap, the present paper investigated the coexistence of hydrothermal carbonization (HTC), sorption enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion by a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste by means of Aspen Plus software. The proposed model aimed to identify and optimise the performance of the plant by varying operating parameters (such as temperature, CaO/biomass ratio, and separation efficiency). The carbon footprint of the overall plant is 2.3 kg CO₂/kg H₂, lower than the latest limit value imposed by the European Commission for hydrogen to be considered “clean”, which was set at 3 kg CO₂/kg H₂. The hydrogen yield referred to the whole plant is 250 g H₂/kg biomass.

Keywords: biomass gasification, hydrogen, aspen plus, sorption enhance gasification

Procedia PDF Downloads 81
24564 Sampled-Data Model Predictive Tracking Control for Mobile Robot

Authors: Wookyong Kwon, Sangmoon Lee

Abstract:

In this paper, a sampled-data model predictive tracking control method is presented for mobile robots, which are modeled as constrained continuous-time linear parameter varying (LPV) systems. The presented sampled-data predictive controller is designed by a linear matrix inequality (LMI) approach. Based on the input delay approach, a controller design condition is derived by constructing a new Lyapunov function. Finally, a numerical example is given to demonstrate the effectiveness of the presented method.
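As a minimal illustration of the LMI machinery referred to above (not the paper's sampled-data tracking condition), the sketch below checks continuous-time stability of a given system matrix by searching for a Lyapunov matrix with CVXPY.

```python
# Minimal LMI feasibility check with CVXPY: find P > 0 with A^T P + P A < 0.
# This basic Lyapunov LMI only illustrates the LMI design approach; the
# sampled-data MPC condition in the paper is more involved.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example stable system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print(prob.status)               # 'optimal' means a Lyapunov certificate was found
print(np.round(P.value, 3))
```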

Keywords: model predictive control, sampled-data control, linear parameter varying systems, LPV

Procedia PDF Downloads 310
24563 Development of Typical Meteorological Year for Passive Cooling Applications Using World Weather Data

Authors: Nasser A. Al-Azri

Abstract:

The effectiveness of passive cooling techniques is assessed based on bioclimatic charts, which require the typical meteorological year (TMY) of a specified location for their development. However, TMYs are not always available, mainly due to the scarcity of solar radiation records, an essential component in developing common TMYs intended for general use. Since solar radiation is not required in the development of the bioclimatic chart, this work suggests developing TMYs based solely on the relevant parameters. This approach improves the accuracy of the developed TMY, since only the relevant parameters are considered, and it also makes the development of the TMY more accessible, since solar radiation data are not used. The paper also discusses the development of the TMY from the raw data available in the NOAA-NCDC archive of world weather data and the construction of the bioclimatic charts for some randomly selected locations around the world.
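A small sketch of how a typical month can be selected from long-term records using only the parameters relevant to the bioclimatic chart (here dry-bulb temperature and relative humidity), following the common Finkelstein-Schafer idea of comparing each candidate month's empirical CDF with the long-term CDF; the column names, weights, and data file are placeholders, not the paper's procedure.

```python
# Sketch: pick the most 'typical' January from multi-year hourly records using a
# Finkelstein-Schafer-style CDF comparison on temperature and relative humidity.
# Column names, weights, and the file are hypothetical.
import numpy as np
import pandas as pd

def fs_statistic(candidate, long_term):
    """Mean absolute difference between the two empirical CDFs."""
    grid = np.sort(long_term)
    cdf = lambda sample: np.searchsorted(np.sort(sample), grid, side="right") / len(sample)
    return np.mean(np.abs(cdf(candidate) - cdf(long_term)))

df = pd.read_csv("station_hourly.csv", parse_dates=["time"])  # columns: time, temp, rh
df["year"], df["month"] = df["time"].dt.year, df["time"].dt.month
weights = {"temp": 0.5, "rh": 0.5}

month = 1
pool = df[df["month"] == month]
scores = {
    year: sum(w * fs_statistic(grp[var].to_numpy(), pool[var].to_numpy())
              for var, w in weights.items())
    for year, grp in pool.groupby("year")
}
typical_year = min(scores, key=scores.get)
print("Typical January taken from", typical_year)
```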

Keywords: bioclimatic charts, passive cooling, TMY, weather data

Procedia PDF Downloads 241
24562 Development of Management System of the Experience of Defensive Modeling and Simulation by Data Mining Approach

Authors: D. Nam Kim, D. Jin Kim, Jeonghwan Jeon

Abstract:

Defensive Modeling and Simulation (M&S) is a system which enables training that would otherwise be impracticable by reducing the constraints of time, space, and financial resources. The necessity of defensive M&S has been increasing not only for education and training but also for virtual combat. Soldiers who use defensive M&S for education and training obtain empirical knowledge and know-how. However, the knowledge obtained by individual soldiers has not yet been managed and utilized, owing to the nature of military organizations: confidentiality and the frequent change of members. Therefore, this study aims to develop a management system for the experience of defensive M&S based on a data mining approach. Since the individual empirical knowledge gained through using defensive M&S comprises both quantitative and qualitative data, a data mining approach is appropriate for dealing with it. This research is expected to be helpful for soldiers and military policy makers.

Keywords: data mining, defensive M&S, management system, knowledge management

Procedia PDF Downloads 255
24561 Timely Detection and Identification of Abnormalities for Process Monitoring

Authors: Hyun-Woo Cho

Abstract:

The detection and identification of abnormalities in multivariate manufacturing processes are quite important in order to maintain good product quality. Unusual behaviors or events encountered during operation can have a serious impact on the process and on product quality; thus, they should be detected and identified as soon as possible. This paper focuses on the efficient representation of process measurement data for detecting and identifying abnormalities. The method is effective in representing the fault patterns of process data and is robust to measurement noise, so that reliable outcomes can be obtained. To evaluate its performance, a simulation process was utilized, and the effect of adopting linear and nonlinear methods in the detection and identification was tested with different simulation data. It was shown that the use of a nonlinear technique produced more satisfactory and more robust results for the simulation data sets. This monitoring framework can help operating personnel detect the occurrence of process abnormalities and identify their assignable causes on an on-line or real-time basis.
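The abstract does not name the specific techniques; as one common linear baseline for this kind of multivariate monitoring, the sketch below fits a PCA model on normal operating data and flags new samples whose Hotelling T² exceeds a simple control limit. It is an assumed example, not necessarily the authors' method.

```python
# Hedged example of multivariate process monitoring with PCA and a Hotelling T^2
# statistic; an assumed linear baseline, not necessarily the method of the paper.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import chi2

rng = np.random.default_rng(1)
normal = rng.normal(size=(500, 10))                           # in-control training data (toy)
faulty = normal[-50:] + np.r_[np.zeros(5), 3 * np.ones(5)]    # last 5 variables shifted

mean, std = normal.mean(axis=0), normal.std(axis=0)
pca = PCA(n_components=4).fit((normal - mean) / std)

def t2(x):
    scores = pca.transform((x - mean) / std)
    return ((scores ** 2) / pca.explained_variance_).sum(axis=1)

limit = chi2.ppf(0.99, df=4)                                  # simple 99% control limit
alarms = t2(faulty) > limit
print(f"{alarms.mean():.0%} of faulty samples flagged")
```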

Keywords: detection, monitoring, identification, measurement data, multivariate techniques

Procedia PDF Downloads 237
24560 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and two-fold virtual property rights system. Based on the “bundle of right” theory, this paper establishes specific three-level data rights. This paper analyzes the cases: Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz. 
This paper concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial in establishing the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 41
24559 The Influence of Housing Choice Vouchers on the Private Rental Market

Authors: Randy D. Colon

Abstract:

Through a freedom of information request, data pertaining to Housing Choice Voucher (HCV) households have been obtained from the Chicago Housing Authority, including rent price and number of bedrooms per HCV household, community area, and zip code from 2013 to the first quarter of 2018. Similar data pertaining to the private rental market will be obtained through public records found through the United States Department of Housing and Urban Development. The datasets will be analyzed through statistical and mapping software to investigate the potential link between HCV households and distorted rent prices. Quantitative data will be supplemented by qualitative data to investigate the lived experience of Chicago residents. Qualitative data will be collected in the Chicago Englewood neighborhood through participation in neighborhood meetings and informal interviews with residents and community leaders. The qualitative data will be used to gain insight into the lived experience of community leaders and residents of the Englewood neighborhood in relation to housing, the rental market, and HCV. While there is an abundance of quantitative data on this subject, this qualitative data is necessary to capture the lived experience of local residents affected by a changing rental market. This topic reflects concerns voiced by members of the Englewood community, and this study aims to keep the community relevant in its findings.

Keywords: Chicago, housing, housing choice voucher program, housing subsidies, rental market

Procedia PDF Downloads 119
24558 Exploring Individual Decision Making Processes and the Role of Information Structure in Promoting Uptake of Energy Efficient Technologies

Authors: Rebecca J. Hafner, Daniel Read, David Elmes

Abstract:

The current research applies decision making theory in order to address the problem of increasing uptake of energy-efficient technologies in the marketplace, where uptake is currently slower than one might predict following rational choice models. Specifically, in two studies we apply the alignable/non-alignable features effect and explore the impact of varying information structure on consumers’ preference for standard versus energy-efficient technologies. As researchers in the Interdisciplinary centre for Storage, Transformation and Upgrading of Thermal Energy (i-STUTE) are currently developing energy-efficient heating systems for homes and businesses, we focus on the context of home heating choice and compare preference for a standard condensing boiler versus an energy-efficient heat pump, according to experimental manipulations in the structure of prior information. In Study 1, we find that people prefer stronger alignable features when options are similar, an effect which is mediated by an increased tendency to infer that missing information is the same. Yet, in contrast to previous research, we find no effect of alignability on option preference when options differ. The advanced methodological approach used here, the first study of its kind to randomly allocate features as either alignable or non-alignable, highlights potential design effects in previous work. Study 2 is designed to explore the interaction between alignability and construal level as an explanation for the shift in attentional focus when options differ. Theoretical and applied implications for promoting energy-efficient technologies are discussed.

Keywords: energy-efficient technologies, decision-making, alignability effects, construal level theory, CO2 reduction

Procedia PDF Downloads 331
24557 The Dynamic Metadata Schema in Neutron and Photon Communities: A Case Study of X-Ray Photon Correlation Spectroscopy

Authors: Amir Tosson, Mohammad Reza, Christian Gutt

Abstract:

Metadata stands at the forefront of advancing data management practices within research communities, with particular significance in the realms of neutron and photon scattering. This paper introduces a groundbreaking approach—dynamic metadata schema—within the context of X-ray Photon Correlation Spectroscopy (XPCS). XPCS, a potent technique unravelling nanoscale dynamic processes, serves as an illustrative use case to demonstrate how dynamic metadata can revolutionize data acquisition, sharing, and analysis workflows. This paper explores the challenges encountered by the neutron and photon communities in navigating intricate data landscapes and highlights the prowess of dynamic metadata in addressing these hurdles. Our proposed approach empowers researchers to tailor metadata definitions to the evolving demands of experiments, thereby facilitating streamlined data integration, traceability, and collaborative exploration. Through tangible examples from the XPCS domain, we showcase how embracing dynamic metadata standards bestows advantages, enhancing data reproducibility, interoperability, and the diffusion of knowledge. Ultimately, this paper underscores the transformative potential of dynamic metadata, heralding a paradigm shift in data management within the neutron and photon research communities.

Keywords: metadata, FAIR, data analysis, XPCS, IoT

Procedia PDF Downloads 64
24556 Influence of Magnetic Field on the Antibacterial Properties of Pine Oil

Authors: Dawid Sołoducha, Tomasz Borowski, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy

Abstract:

Many studies report varied effects of magnetic fields in medicine, but applications are still missing. Essential oils (EOs) have historically been used in healing therapies, food preservation, and the cosmetic industry due to their wound-healing and antioxidant properties and antimicrobial activity. Unfortunately, owing to their chemical characteristics, EOs exert their antibacterial action only at fairly high concentrations. They can cause skin reactions, e.g., irritation (irritant contact dermatitis) or allergic contact dermatitis; therefore, they should always be used with caution. Consequently, administering EOs at low concentration while achieving the desired antimicrobial activity and stability for long-term medical usage is challenging. The aim of this work was to investigate the antimicrobial activity of commercial Pinus sylvestris L. essential oil from the Polish company Avicenna-Oil® under a Rotating Magnetic Field (RMF) at f = 1 – 50 Hz. A novel, self-constructed magnetically assisted reactor (MAP) was applied for this study. The chemical composition of the essential pine oil was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Different concentrations of pine oil were prepared: 100%, 50%, 25%, 12.5%, and 6.25%. Disc diffusion and MIC tests were performed. To examine the effect of the essential pine oil and the rotating magnetic field (RMF) on antibacterial performance, the agar plate method was used. The pine oil consists of α-pinene (28.58%), β-pinene (17.79%), δ-3-carene (14.17%), and limonene (11.58%). The present study indicates that exposure to the RMF, as compared to unexposed controls, increases the antibacterial efficacy of the pine oil. We have shown that the rotating magnetic field at frequencies f between 25 Hz and 50 Hz increases the antimicrobial efficiency of the oil at concentrations below 50%. The new method can be applied in many fields, e.g., aromatherapy, medicine as a component of dressings, or as a food preservative.

Keywords: rotating magnetic field, pine oil, antimicrobial activity, Escherichia coli

Procedia PDF Downloads 220
24555 Exploring SSD Suitable Allocation Schemes Incompliance with Workload Patterns

Authors: Jae Young Park, Hwansu Jung, Jong Tae Kim

Abstract:

Whether the data have been well parallelized is an important factor in Solid-State Drive (SSD) performance. SSD parallelization is affected by the allocation scheme, and it is directly connected to SSD performance. Representative allocation schemes include dynamic allocation and static allocation. Dynamic allocation is more adaptive in exploiting write operation parallelism, while static allocation is better for read operation parallelism. Therefore, it is hard to select the appropriate allocation scheme when the workload mixes read and write operations. We simulated conditions with a few mixed data patterns and analyzed the results to help make the right choice for better performance. The results show that if the data arrival interval is long enough for prior operations to finish and the environment is continuously read-intensive, static allocation is more suitable. Dynamic allocation performs best for write performance and random data patterns.
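A toy sketch contrasting the two schemes discussed above: static allocation fixes each logical page's channel by address, while dynamic allocation sends each incoming write to the least busy channel; the timing model is deliberately crude and not the simulator used in the paper.

```python
# Toy comparison of static vs. dynamic channel allocation in an SSD-like model.
# The timing model is deliberately crude and only illustrates the trade-off.
import random

CHANNELS = 4
WRITE_TIME = 5   # arbitrary time units per page write

def static_busy(pages):
    busy = [0] * CHANNELS
    for lpn in pages:
        busy[lpn % CHANNELS] += WRITE_TIME         # channel fixed by address
    return max(busy)

def dynamic_busy(pages):
    busy = [0] * CHANNELS
    for _ in pages:
        busy[busy.index(min(busy))] += WRITE_TIME  # least-busy channel wins
    return max(busy)

random.seed(0)
workload = [random.randrange(0, 64) for _ in range(1000)]   # random write pattern
print("static finish time :", static_busy(workload))
print("dynamic finish time:", dynamic_busy(workload))
```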

Keywords: dynamic allocation, NAND flash based SSD, SSD parallelism, static allocation

Procedia PDF Downloads 340
24554 Social Data Aggregator and Locator of Knowledge (STALK)

Authors: Rashmi Raghunandan, Sanjana Shankar, Rakshitha K. Bhat

Abstract:

Social media contributes a vast amount of data and information about individuals to the internet. This project will greatly reduce the need for unnecessary manual analysis of large and diverse social media profiles by filtering out and combining the useful information from various social media profiles and eliminating irrelevant data. It differs from existing social media aggregators in that it does not provide a consolidated view of various profiles. Instead, it provides consolidated information derived from the subject’s posts and other activities. It also allows analysis over multiple profiles and analytics based on several profiles. We strive to build a query system that gives a natural language answer to questions when a user does not wish to go through the entire profile. The information provided can be filtered according to the different use cases it is used for.

Keywords: social network, analysis, Facebook, Linkedin, git, big data

Procedia PDF Downloads 444
24553 Data Integrity between Ministry of Education and Private Schools in the United Arab Emirates

Authors: Rima Shishakly, Mervyn Misajon

Abstract:

Education is similar to other businesses and industries. Achieving data integrity is essential in order to attain significant support for all the stakeholders in the educational sector. Efficient data collection, flow, processing, storage, and retrieval are vital in order to deliver successful solutions to the different stakeholders. The Ministry of Education (MOE) in the United Arab Emirates (UAE) has adopted ‘Education 2020’, a series of five-year plans designed to introduce advanced education management information systems. As part of this program, in 2010 the MOE implemented Student Information Systems (SIS) to manage and monitor the students’ data and the information flow between the MOE and international private schools in the UAE. This paper discusses data integrity concerns between the MOE and private schools. The paper clarifies the data integrity issues and indicates the challenges that face private schools in the UAE.

Keywords: education management information systems (EMIS), student information system (SIS), United Arab Emirates (UAE), ministry of education (MOE), (KHDA) the knowledge and human development authority, Abu Dhabi educational counsel (ADEC)

Procedia PDF Downloads 222
24552 Towards a Balancing Medical Database by Using the Least Mean Square Algorithm

Authors: Kamel Belammi, Houria Fatrim

Abstract:

Imbalanced data sets, a problem often found in real-world applications, can have a seriously negative effect on the classification performance of machine learning algorithms. There have been many attempts at dealing with the classification of imbalanced data sets. In medical diagnosis classification, we often face an imbalanced number of data samples between the classes, in which there are not enough samples in the rare classes. In this paper, we propose a learning method based on a cost-sensitive extension of the Least Mean Square (LMS) algorithm that penalizes errors of different samples with different weights, together with some rules of thumb to determine those weights. After the balancing phase, we apply different classifiers (support vector machine (SVM), k-nearest neighbor (KNN), and multilayer neural networks (MNN)) to the balanced data set. We have also compared the results obtained before and after the balancing method.
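A small numpy sketch of the core idea described above: an LMS-style update whose per-sample step is scaled by a class-dependent cost, so errors on the rare class carry more weight. The rule of thumb used here for the weights (inverse class frequency) is one simple choice, not necessarily the authors'.

```python
# Cost-sensitive LMS sketch: the update for each sample is scaled by a class
# weight, so mistakes on the rare class are penalized more heavily.
import numpy as np

def cost_sensitive_lms(X, y, lr=0.01, epochs=20):
    n, d = X.shape
    # Rule of thumb (one simple choice): weight classes by inverse frequency.
    classes, counts = np.unique(y, return_counts=True)
    cost = {c: n / (len(classes) * cnt) for c, cnt in zip(classes, counts)}
    w = np.zeros(d)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            error = y[i] - X[i] @ w              # LMS prediction error
            w += lr * cost[y[i]] * error * X[i]  # weighted LMS update
    return w

# Toy imbalanced data: 95% class 0, 5% class 1 (labels used as regression targets).
rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, size=(950, 3))
X1 = rng.normal(1.5, 1, size=(50, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 950 + [1] * 50, dtype=float)
w = cost_sensitive_lms(np.hstack([X, np.ones((1000, 1))]), y)
print(np.round(w, 3))
```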

Keywords: multilayer neural networks, k-nearest neighbor, support vector machine, imbalanced medical data, least mean square algorithm, diabetes

Procedia PDF Downloads 533
24551 Adverse Curing Conditions and Performance of Concrete: Bangladesh Perspective

Authors: T. Manzur

Abstract:

Concrete is the predominant construction material in Bangladesh. In large projects, stringent quality control procedures are usually followed under the supervision of experienced engineers and skilled labor. However, in the case of small projects, and particularly at locations distant from major cities, proper quality control is often an issue. It has been found from experience that such quality-related issues mainly arise from inappropriate proportioning of concrete mixes and improper curing conditions. In most cases, an external curing method is followed, which requires the supply of an adequate quantity of water along with proper protection against evaporation. Often these conditions are missing at general construction sites, eventually leading to the production of weaker concrete in terms of both strength and durability. In this study, an attempt has been made to investigate the performance of general concreting works in the country when subjected to several adverse curing conditions that are quite common at various small to medium construction sites. A total of six different adverse curing conditions were simulated in the laboratory, and samples were kept under those conditions for several days. A set of samples was also submerged under normal curing conditions with a proper supply of curing water. The performance of the concrete was evaluated in terms of compressive strength, tensile strength, chloride permeability, and drying shrinkage. Reductions of about 37% and 25% in 28-day compressive and tensile strength, respectively, were observed for samples subjected to the most adverse curing condition, compared to samples under normal curing conditions. Normal-curing concrete exhibited moderate permeability (close to low permeability), whereas concrete under adverse curing conditions showed very high permeability values. Similar results were also obtained for the shrinkage tests. This study will thus assist concerned engineers and supervisors in understanding the importance of quality assurance during the curing period of concrete.

Keywords: adverse, concrete, curing, compressive strength, drying shrinkage, permeability, tensile strength

Procedia PDF Downloads 202
24550 Data Protection, Data Privacy, Research Ethics in Policy Process Towards Effective Urban Planning Practice for Smart Cities

Authors: Eugenio Ferrer Santiago

Abstract:

The growing complexities of the modern world, with high-end gadgets, software applications, scams, identity theft, and Artificial Intelligence (AI), make the “uninformed” weak and vulnerable to becoming victims of cybercrime. Artificial Intelligence is not a new thing in our daily lives; the principles of database management, logical programming, and garbage in, garbage out are all connected to AI. The Philippines has legal safeguards in place against the abuse of cyberspace, but self-regulation by key industry players and self-protection by individuals are primordial to the success of these initiatives. Data protection, data privacy, and research ethics must work hand in hand during the policy process in the course of urban planning practice in different environments. This paper focuses on the interconnection of data protection, data privacy, and research ethics in coming up with clear-cut policies against perpetrators in urban planning professional practice, relevant to sustainable communities and smart cities. The paper uses an expository methodology under qualitative research, drawing on secondary data from related literature, interviews/blogs, and World Wide Web resources. The claims and recommendations of this paper will help policymakers and implementers in the policy cycle. It will contribute to the body of knowledge as a simple treatise and communication channel to the reading community and future researchers, to validate the claims and start an intellectual discourse for better knowledge generation for the good of all in the near future.

Keywords: data privacy, data protection, urban planning, research ethics

Procedia PDF Downloads 60
24549 Review of the Road Crash Data Availability in Iraq

Authors: Abeer K. Jameel, Harry Evdorides

Abstract:

Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control the road risk issue, the Iraqi Ministry of Planning, General Statistical Organization, started to organise a system for collecting traffic accident data with details related to their causes and severity. These data are published as an annual report. In this paper, a review of the available crash data in Iraq is presented. The available data represent the rate of accidents at an aggregated level, classified according to accident type, road user details, crash severity, vehicle type, causes, and number of casualties. The review is organised according to the types of models used in road safety studies and research, and according to the road safety data required in road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, aggregated-level comparison analysis, and evaluation and monitoring of the overall traffic safety performance. However, important traffic safety studies require disaggregated data and details related to the factors affecting the likelihood of traffic crashes. Some studies require spatial geographic details, such as the locations of accidents, which are essential for ranking roads according to their level of safety and naming the most dangerous roads in Iraq, which require a tactical plan to control this issue. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies that are based on road attribute data only. Therefore, in this research, it is recommended to use one of these methodologies.

Keywords: road safety, Iraq, crash data, road risk assessment, The International Road Assessment Program (iRAP)

Procedia PDF Downloads 256
24548 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting - The Wicked Method

Authors: Sinead Impey, Damon Berry, Selma Furtado, Miriam Galvin, Loretto Grogan, Orla Hardiman, Lucy Hederman, Mark Heverin, Vincent Wade, Linda Douris, Declan O'Sullivan, Gaye Stephens

Abstract:

Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process - the WICKED method. WICKED is an anagram of the words: eliciting and confirming data, information, knowledge, wisdom. But it is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provided a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations surrounded the time required and how the data set produced only represents DIKW known during the research period. Future work is underway to address these limitations.

Keywords: healthcare, knowledge acquisition, maximal data sets, action design science

Procedia PDF Downloads 367