Search results for: data exchange
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26374

24694 Regression for Doubly Inflated Multivariate Poisson Distributions

Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta

Abstract:

Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, the number of observations in some cells is much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher than those of the other cells, and develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. To illustrate the proposed methodologies, we present a real dataset containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation of the unknown parameters of the models.
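As a rough illustration of the construction described above, the sketch below draws from a doubly inflated bivariate Poisson: with fixed probabilities the two inflated cells are returned directly, and otherwise a Gaussian copula correlates two Poisson marginals. All parameter values and function names are hypothetical, chosen only to show the mechanism, not the authors' estimation procedure.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_inv_cdf(u, lam):
    """Smallest k with P(X <= k) >= u for X ~ Poisson(lam)."""
    k, pmf, cdf = 0, math.exp(-lam), math.exp(-lam)
    while cdf < u and k < 10_000:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

def sample_doubly_inflated_bipoisson(lam1, lam2, rho, p1, p2,
                                     cell1, cell2, rng):
    """With probability p1 (resp. p2) return the first (second) inflated
    cell; otherwise draw a bivariate count from a Gaussian copula with
    correlation rho and Poisson marginals lam1, lam2."""
    u = rng.random()
    if u < p1:
        return cell1
    if u < p1 + p2:
        return cell2
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return (poisson_inv_cdf(norm_cdf(z1), lam1),
            poisson_inv_cdf(norm_cdf(z2), lam2))
```

Sampling many draws with, say, cells (0, 0) and (5, 5) inflated shows those two cells occurring far more often than the copula part alone would produce.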

Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions

Procedia PDF Downloads 161
24693 An Exploratory Research of Human Character Analysis Based on Smart Watch Data: Distinguish the Drinking State from Normal State

Authors: Lu Zhao, Yanrong Kang, Lili Guo, Yuan Long, Guidong Xing

Abstract:

Smart watches, as handy devices with rich functionality, have become one of the most popular wearable devices all over the world. Among their various functions, the most basic is health monitoring. The monitoring data can provide effective evidence or clues for the detection of crime cases. For instance, step-counting data can help to determine whether the watch wearer was still or moving during a given time period. There is, however, still little research on the analysis of human character based on these data. The purpose of this research is to analyze health monitoring data to distinguish the drinking state from the normal state. The analysis results may play a role in cases involving drinking, such as drunk driving. The experiment mainly focused on finding which figures in smart watch health monitoring data change with drinking and on quantifying the scope of the change. The chosen subjects are mostly in their 20s, each of whom wore the same smart watch for a week. Each subject drank several times during the week and noted down the start and end times of each drinking session. The researchers then extracted and analyzed the health monitoring data from the watch. According to the descriptive statistical analysis, heart rate changes when drinking: the average heart rate is about 10% higher than normal, and the coefficient of variation is less than about 30% of that in the normal state. Though more research needs to be carried out, this experiment and analysis suggest an approach to applying the data from smart watches.
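The two statistics the abstract relies on, mean heart rate and coefficient of variation, are straightforward to compute. The readings below are entirely hypothetical, invented only to illustrate the comparison between the two states:

```python
import statistics

def hr_summary(samples):
    """Mean and coefficient of variation (std/mean) of a heart-rate series."""
    mean = statistics.mean(samples)
    cv = statistics.pstdev(samples) / mean
    return mean, cv

# hypothetical minute-level heart-rate readings (bpm), for illustration only
normal_hr = [68, 72, 70, 75, 66, 80, 71, 69, 74, 73]
drinking_hr = [80, 83, 79, 85, 81, 84, 82, 80, 83, 82]

mean_n, cv_n = hr_summary(normal_hr)
mean_d, cv_d = hr_summary(drinking_hr)
```

With data like this, the drinking series shows a higher mean and a smaller coefficient of variation, the pattern the study reports.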

Keywords: character analysis, descriptive statistics analysis, drink state, heart rate, smart watch

Procedia PDF Downloads 169
24692 Software User Experience Enhancement through Collaborative Design

Authors: Shan Wang, Fahad Alhathal, Daniel Hobson

Abstract:

User-centered design skills play an important role in crafting a positive and intuitive user experience for software applications. Embracing a user-centric design approach involves understanding the needs, preferences, and behaviors of the end-users throughout the design process. This mindset not only enhances the usability of the software but also fosters a deeper connection between the digital product and its users. This paper covers a 6-month knowledge exchange collaboration project between an academic institution and an industry partner in 2023, which aimed to improve the user experience of a digital platform used as a knowledge management tool: to understand users' preferences for features, identify sources of frustration, and pinpoint areas for enhancement. This research applied one of the most effective methods of implementing user-centered design, co-design workshops for testing user onboarding experiences, which involve the active participation of users in the design process. More specifically, in January 2023, we organized eight workshops with a diverse group of 11 individuals. Throughout these sessions, we accumulated a total of 11 hours of qualitative data in both video and audio formats. Subsequently, we conducted an analysis of user journeys, identifying common issues and potential areas for improvement. This analysis was pivotal in guiding the knowledge management software in prioritizing feature enhancements and design improvements. Employing a user-centered design thinking process, we developed a series of graphic design solutions in collaboration with the software management tool company. These solutions were targeted at refining onboarding user experiences, workplace interfaces, and interactive design. Some of these design solutions were translated into tangible interfaces for the knowledge management tool.
By actively involving users in the design process and valuing their input, developers can create products that are not only functional but also resonate with the end-users, ultimately leading to greater success in the competitive software landscape. In conclusion, this paper not only contributes insights into designing onboarding user experiences for software within a co-design approach but also presents key theories on leveraging the user-centered design process in software design to enhance overall user experiences.

Keywords: user experiences, co-design, design process, knowledge management tool, user-centered design

Procedia PDF Downloads 71
24691 Importance of Hospitality In Tourism Industry

Authors: S M Abdus Sattar

Abstract:

Introduction: The tourism industry is a vital component of economies, providing opportunities for economic growth and cultural exchange. At the heart of this industry lies the concept of hospitality. Tourism refers to the activity of traveling for leisure or business, while hospitality refers to welcoming guests and providing amenities and services to them in the travel and accommodation industries. Tourism is one of the fastest growing industries in the world today. Objectives: The most important objectives of this study of tourism and hospitality are: to assess their different aspects; to identify the reasons for their growth; to analyze their contribution to the GDP of Bangladesh; to identify the importance of hospitality; to identify challenges; to examine the development of leadership, communication, teamwork, customer service and problem-solving skills; to describe the welcoming treatment given to guests, offering accommodation, food, transportation and entertainment services so that guests feel safe and comfortable away from home; to explore future prospects in Bangladesh; and to suggest some recommendations for the development of this sector. Methodology: Statistical methods have been adopted in this study, and common characteristics of the people of particular areas are identified. Tourism data is collected through various methods, such as surveys, interviews, visitor registration, travel agency records, hotel bookings, transport ticketing systems, online platforms, social media, the Bangladesh Tourism Corporation and the World Travel and Tourism Council. Quantitative and qualitative research methods are used in collecting and analyzing the data. Findings: Tourism and hospitality focus on marketing, management, attractions, recreation events, travel-related services, lodging, and the operation of restaurants and food services. Tourism offers great opportunities for emerging economies and developing countries.
It creates jobs, strengthens the local economy, contributes to local infrastructure development, and can help to conserve the natural environment, cultural assets and traditions and to reduce poverty and inequality. The hospitality industry contributes to the economy of a country by employing its human resources. It generates new employment, contributing to the Gross Domestic Product (GDP) of a country. Around 330 million people are employed in the tourism and hospitality sector globally. The tourism and hospitality industry also generates high tax revenues. Tourism is a rising industry in Bangladesh. Studying hospitality can also help develop a range of essential skills that are valuable in any industry. Conclusion: In conclusion, the tourism industry is focused on providing quality attractions and events in order to entice tourists to come, while the hospitality industry provides good service for clients. Hospitality and tourism are closely related: hospitality builds up the relationship between host and guest, and the importance of hospitality in the tourism industry is immense. The tourism and hospitality industry is an important contributor to Bangladesh's economy. It is necessary to develop tourism infrastructure, maintain tourist destinations, railway stations, airports, rest houses and hotels, and improve the quality of services.

Keywords: tourism, hospitality, GDP, employment, economy

Procedia PDF Downloads 33
24690 Role of the Tourism and Hospitality Industry in Economic Development

Authors: S. M. Abdus Sattar

Abstract:

Introduction: The objectives of the study are to assess different aspects of the tourism and hospitality industry, analyze its contributions to the Gross Domestic Product of Bangladesh, identify the importance of the tourism and hospitality industry, explore future prospects in the sectors, identify challenges and provide recommendations for the development of these industries. The study explores the significance of the tourism and hospitality industry in economic growth and defines its role. Tourism is one of the fastest-growing industries in the world today. Methodology: The study adopts statistical methods and utilizes both quantitative and qualitative research techniques. Data is collected through surveys, interviews, visitor registration, online platforms and analysis of various tourism-related records. The study focuses on marketing, management, attractions and services in the tourism and hospitality sectors. Result: The tourism and hospitality industry offers great opportunities for emerging economies and developing countries. The industry creates jobs, develops infrastructure, conserves cultural assets and the environment, builds essential skills, generates revenue, earns foreign exchange, drives economic growth and reduces poverty and inequality. Discussion: The study focuses on improving infrastructure and service quality in the tourism and hospitality industry to attract tourists. The industry significantly contributes to the Gross Domestic Product of Bangladesh. It highlights how the tourism and hospitality sectors can drive economic development, reduce poverty and promote cultural and environmental conservation. It also explores the challenges and future prospects in the tourism and hospitality sectors. Conclusion and Future Scope: The opportunities for tourism in Bangladesh include agricultural tourism, religious tourism, sports tourism, eco-tourism, educational tourism, rural tourism and cultural tourism.
However, there is a lack of research and plans exploring the development of the industry. The tourism and hospitality industry offers numerous opportunities for growth and development. There will be job opportunities for travel consultants, tour operators, event planners, hotel managers, travel writers, tourism development officers and airline executives in the future. The study recommends developing tourism infrastructure, maintaining tourist destinations, railway stations, airports, rest houses and hotels, and improving the quality of services.

Keywords: tourism, hospitality, employment, economic, development

Procedia PDF Downloads 35
24689 Geochemical Characteristics and Chemical Toxicity: Appraisal of Groundwater Uranium With Other Geogenic Contaminants in Various Districts of Punjab, India

Authors: Tanu Sharma, Bikramjit Singh Bajwa, Inderpreet Kaur

Abstract:

Monitoring of groundwater in the Tarn-Taran, Bathinda, Faridkot and Mansa districts of Punjab state, India is essential where this freshwater resource is being over-exploited, causing quality deterioration and groundwater depletion and posing serious threats to residents. The present integrated study was done to appraise the quality and suitability of groundwater for drinking/irrigation purposes, its hydro-geochemical characteristics, source identification and the associated health risks. In the present study, the groundwater of various districts of Punjab state was found to be heavily contaminated with As followed by U, thus posing high cancer risks to local residents via ingestion, along with minor contamination by Fe, Mn, Pb and F−. Most health concerns in the study region were due to the elevated concentrations of arsenic in groundwater, with average values of 130 µg L-1, 176 µg L-1, 272 µg L-1 and 651 µg L-1 in the Tarn-Taran, Bathinda, Faridkot and Mansa districts, respectively, which is far above the safe limit of 10 µg L-1 recommended by the BIS. In the Tarn-Taran, Bathinda, Faridkot and Mansa districts, average uranium contents were found to be 37 µg L-1, 88 µg L-1, 61 µg L-1 and 104 µg L-1, with 51%, 74%, 61% and 71% of samples, respectively, being above the WHO limit of 30 µg L-1 for groundwater. Further, the quality indices showed that the groundwater of the study region is suitable for irrigation but not appropriate for drinking purposes. Hydro-geochemical studies revealed that most of the collected groundwater samples belonged to the Ca2+ - Mg2+ - HCO3- type, with dominance of the MgCO3 type, which indicates the presence of temporary hardness in the groundwater. Rock-water reactions and reverse ion exchange were the predominant factors controlling hydro-geochemistry in the study region. Dissolution of silicate minerals caused the dominance of Na+ ions in the aquifers of the study region.
Multivariate statistics revealed that, along with geogenic sources, the contribution of anthropogenic activities such as the injudicious application of agrochemicals and domestic waste discharge was also very significant. The results obtained dispel the myth that uranium is the only root cause of the large number of cancer patients in the study region, as arsenic and mercury were also present in groundwater at levels that are of health concern.
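The ingestion-route risk screening mentioned above is conventionally done with a non-carcinogenic hazard quotient. The sketch below uses the standard HQ formula; the intake rate, body weight, and the reference dose value are illustrative assumptions, not figures taken from this study:

```python
def hazard_quotient(conc_ug_per_l, rfd_ug_per_kg_day,
                    intake_l_per_day=2.0, body_weight_kg=60.0):
    """Chronic non-carcinogenic hazard quotient for drinking-water
    ingestion: HQ = (C * IR) / (BW * RfD). HQ > 1 flags potential risk.
    Intake rate and body weight are illustrative defaults."""
    return ((conc_ug_per_l * intake_l_per_day)
            / (body_weight_kg * rfd_ug_per_kg_day))
```

For example, an arsenic concentration of 130 µg L-1 with an assumed oral reference dose of 0.3 µg per kg per day yields an HQ well above 1, consistent with the health concern the study reports.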

Keywords: uranium, trace elements, multivariate data analysis, risk assessment

Procedia PDF Downloads 75
24688 An Approach to Practical Determination of Fair Premium Rates in Crop Hail Insurance Using Short-Term Insurance Data

Authors: Necati Içer

Abstract:

Crop-hail insurance plays a vital role in managing risks and reducing the financial consequences of hail damage to crop production. Predicting insurance premium rates from short-term data is a major difficulty in numerous nations because of the unique characteristics of hailstorms. This study aims to suggest a feasible approach to establishing equitable premium rates in crop-hail insurance for nations with short-term insurance data. The primary goal of the rate-making process is to determine premium rates for villages with high or zero loss costs and to enhance their credibility. To do this, a technique was created using the author's practical knowledge of crop-hail insurance. With this approach, the rate-making method was developed using a range of temporal and spatial factor combinations with both hypothetical and real data, including extreme cases. This article aims to show how to incorporate the temporal and spatial elements into determining fair premium rates using short-term insurance data. The article ends with a suggestion on the ultimate premium rates for insurance contracts.
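The abstract does not give the author's exact formula, but the stated goal of enhancing the credibility of villages with high or zero loss costs is commonly met with a credibility-weighted blend of village and regional experience. The sketch below uses the Bühlmann-style weight Z = n / (n + k); the constant k and all numbers are illustrative assumptions:

```python
def credibility_rate(village_loss_cost, n_years, regional_loss_cost, k=5.0):
    """Blend a village's own observed loss cost with the regional average.
    Credibility Z = n / (n + k) grows with the years of data, so villages
    with zero or extreme short-term loss costs are pulled toward the
    regional rate (k is an illustrative credibility constant)."""
    z = n_years / (n_years + k)
    return z * village_loss_cost + (1.0 - z) * regional_loss_cost
```

With only three years of data, a village with zero observed losses still receives a positive rate, and a village with an extreme loss cost is pulled toward the regional average.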

Keywords: crop-hail insurance, premium rate, short-term insurance data, spatial and temporal parameters

Procedia PDF Downloads 59
24687 Cultural Traditions Petik Laut and Onjem in Gili Island, Indonesia That Potential as Ecotourism to Bring Indonesia's Culture to the World

Authors: Dwi Yulian Fahruddin Shah, Mochammad Luthfy Rizaldy Dwi Putra, Tommy Adi Rachmawan, Mona Annisa Matondang, Nadya Sylvia, Hilmy Ramzy Rinaldy

Abstract:

Gili island is one of the island in Indonesia which is located in Probolinggo city, East Java. Gili Island has some potential culture as local wisdom that can be used as tourism commodity because it can be used as attractive ecotourism. With the ecotourism that utilize local wisdom of Indonesian’s culture that located in Gili can introduce the richness of Indonesian culture in the world that will increase foreign exchange. One of the cultural potential as local wisdom in Gili island are Petik Laut and Onjem. It are a culture in Gili island that can’t be found in other island in Indonesia. Not just that but also it are a cultural identity that is owned by Gili island which has fill the criteria to be used as local wisdom that can be used as ecotourism that can bring Indonesian culture to the world so that the tourists of the world will visit to Indonesia, especially to Gili island to see Petik Laut and Onjem culture directly.

Keywords: Gili island, petik laut and onjem culture, ecotourism, indonesia’s culture

Procedia PDF Downloads 564
24686 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

For measurements of solar radiation, satellite data have been routinely utilized to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa over the ten-year period from 2011 to 2020. To investigate seasonal change in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: the monthly mean values of the data indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE) and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R2) varies from 0.93 to 0.99 over the study period. The objective of this research is to provide a reliable representation of solar radiation to aid in the use of solar energy in all sectors.
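The validation statistics named above (MBE, RMSE, MAE, and the squared correlation) are standard and can be computed as in the following sketch, which is illustrative rather than the authors' actual code:

```python
import math

def error_metrics(obs, est):
    """Mean bias error, RMSE, MAE, and the squared Pearson correlation
    between an observed series and a reanalysis estimate."""
    n = len(obs)
    diffs = [e - o for o, e in zip(obs, est)]
    mbe = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mae = sum(abs(d) for d in diffs) / n
    mean_o = sum(obs) / n
    mean_e = sum(est) / n
    cov = sum((o - mean_o) * (e - mean_e) for o, e in zip(obs, est))
    var_o = sum((o - mean_o) ** 2 for o in obs)
    var_e = sum((e - mean_e) ** 2 for e in est)
    r2 = cov * cov / (var_o * var_e)
    return mbe, rmse, mae, r2
```

Feeding in ground measurements as `obs` and the ERA-5 series as `est` reproduces the kind of error summary the abstract reports.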

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

Procedia PDF Downloads 105
24685 Algorithm Optimization to Sort in Parallel by Decreasing the Number of the Processors in SIMD (Single Instruction Multiple Data) Systems

Authors: Ali Hosseini

Abstract:

Parallelization is a mechanism for decreasing the time necessary to execute programs. Sorting is one of the important operations used in different systems, in that the proper functioning of many algorithms and operations depends on sorted data. The CRCW_SORT algorithm sorts 'N' elements in O(1) time on SIMD (Single Instruction Multiple Data) computers with n^2/2-n/2 processors. This article presents a mechanism that divides the input sequence around a hinge (pivot) element into two smaller sequences, decreasing the number of processors needed to sort 'N' elements in O(1) time to n^2/8-n/4 in the best case; with this mechanism, the best case occurs when the hinge element is the middle element and the worst case when it is the minimum. The findings from assessing the proposed algorithm against other methods, on both the data collection and the number of processors, indicate that the proposed algorithm uses fewer processors during execution than the other methods.
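One way to read the stated processor counts: if the hinge splits the input into two halves of n/2 elements and both halves can be sorted with the same reused processor pool, the requirement is that of the baseline scheme applied to n/2 elements. This small sketch (an interpretation, not the paper's derivation) compares the counts:

```python
def processors_crcw(n):
    """Processors used by the baseline CRCW O(1) sort of n elements:
    n^2/2 - n/2."""
    return n * n // 2 - n // 2

def processors_hinged(n):
    """Best case of the hinge-based scheme under the assumption that the
    two n/2-element halves reuse one processor pool: n^2/8 - n/4."""
    return processors_crcw(n // 2)
```

For n = 8 the baseline needs 28 processors while the hinged scheme needs 6, matching the abstract's n^2/8 - n/4 best case.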

Keywords: CRCW, SIMD (Single Instruction Multiple Data) computers, parallel computers, number of the processors

Procedia PDF Downloads 314
24684 Increasing the System Availability of Data Centers by Using Virtualization Technologies

Authors: Chris Ewe, Naoum Jamous, Holger Schrödl

Abstract:

Like most entrepreneurs, data center operators pursue goals such as profit maximization, improvement of the company's reputation, or simply continued existence on the market. Part of those aims is to guarantee a given quality of service. Quality characteristics are specified in a contract called the service level agreement. A central part of this agreement is the non-functional properties of an IT service. System availability is one of the most important of these properties, as will be shown in this paper. To comply with availability requirements, data center operators can use virtualization technologies. A clear model to assess the effect of virtualization functions on the parts of a data center in relation to system availability is still missing. This paper aims to introduce a basic model that shows these connections and to consider whether the identified effects are positive or negative. Thus, this work also points out possible disadvantages of the technology. In consequence, the paper shows opportunities as well as risks of data center virtualization in relation to system availability.
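The connection between virtualization-enabled redundancy and system availability can be sketched with the standard steady-state availability and series/parallel composition formulas (a generic model, not the paper's own):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*components):
    """All components must be up (e.g. host, hypervisor, VM)."""
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(*replicas):
    """At least one redundant replica must be up (e.g. a VM that
    virtualization can restart on another host)."""
    down = 1.0
    for a in replicas:
        down *= (1.0 - a)
    return 1.0 - down
```

Stacking layers in series lowers availability, while replicating a VM across hosts in parallel raises it, which is exactly the trade-off a virtualized data center model has to capture.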

Keywords: availability, cloud computing IT service, quality of service, service level agreement, virtualization

Procedia PDF Downloads 542
24683 Using Crowd-Sourced Data to Assess Safety in Developing Countries: The Case Study of Eastern Cairo, Egypt

Authors: Mahmoud Ahmed Farrag, Ali Zain Elabdeen Heikal, Mohamed Shawky Ahmed, Ahmed Osama Amer

Abstract:

Crowd-sourced data refers to data that is collected and shared by a large number of individuals or organizations, often through the use of digital technologies such as mobile devices and social media. The shortage of crash data collection in developing countries makes it difficult to fully understand and address road safety issues in these regions. In developing countries, crowd-sourced data can be a valuable tool for improving road safety, particularly in urban areas where the majority of road crashes occur. This study is, to our best knowledge, the first to develop safety performance functions using crowd-sourced data by adopting a negative binomial structure model and the Full Bayes model to investigate traffic safety for urban road networks and provide insights into the impact of roadway characteristics. Furthermore, as part of the safety management process, network screening was carried out by applying two different methods to rank the most hazardous road segments: the PCR method (adopted in the Highway Capacity Manual, HCM) and a graphical method using GIS tools, to compare and validate the results. Lastly, recommendations were suggested for policymakers to ensure safer roads.
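A safety performance function of the negative binomial family typically takes the form mu = exp(b0) * AADT^b1 * L^b2, and network screening can rank segments by observed-minus-predicted crashes. The sketch below illustrates that pipeline with entirely hypothetical coefficients and segment data; it is not the authors' fitted model:

```python
import math

def spf_expected_crashes(aadt, length_km, b0=-7.0, b1=0.8, b2=1.0):
    """Common negative-binomial SPF mean function:
    mu = exp(b0) * AADT^b1 * L^b2 (coefficients are illustrative)."""
    return math.exp(b0) * aadt ** b1 * length_km ** b2

def rank_segments(segments):
    """Rank (id, AADT, length_km, observed) tuples by excess crashes,
    i.e. observed minus SPF-predicted, a simple screening criterion."""
    scored = [(obs - spf_expected_crashes(aadt, length), seg_id)
              for seg_id, aadt, length, obs in segments]
    return [seg_id for _, seg_id in sorted(scored, reverse=True)]
```

Segments whose observed crash counts far exceed the SPF prediction rise to the top of the screening list.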

Keywords: crowdsourced data, road crashes, safety performance functions, Full Bayes models, network screening

Procedia PDF Downloads 62
24682 Review of Different Machine Learning Algorithms

Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui

Abstract:

Classification is a data mining technique based on machine learning (ML) algorithms. It is used to classify individual items in a known body of information into a set of predefined modules or groups. Web mining is also a part of this family of data mining methods. The main purpose of this paper is to analyze and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN) and Support Vector Machine (SVM). This paper covers these different ML algorithms with their advantages and disadvantages, and also outlines open research issues.
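To give a concrete flavour of one of the compared algorithms, a minimal K-Nearest Neighbor classifier fits in a few lines (a toy sketch on made-up points, not an implementation from the paper):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k training points
    nearest in Euclidean distance. `train` is a list of
    ((features...), label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

On two well-separated clusters, the vote of the three nearest neighbours recovers the correct class.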

Keywords: data mining, web mining, classification, ML algorithms

Procedia PDF Downloads 305
24681 Using Genetic Algorithms and Rough Set Based Fuzzy K-Modes to Improve Centroid Model Clustering Performance on Categorical Data

Authors: Rishabh Srivastav, Divyam Sharma

Abstract:

We propose an algorithm to cluster categorical data, named 'genetic algorithm initialized rough set based fuzzy K-modes for categorical data'. We propose an amalgamation of the simple K-modes algorithm, the rough and fuzzy set based K-modes, and the genetic algorithm to form a new algorithm, which, we hypothesize, will provide better centroid model clustering results than existing standard algorithms. In the proposed algorithm, the initialization and updating of modes is done by the use of genetic algorithms, while the membership values are calculated using rough sets and fuzzy logic.
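The K-modes building blocks the proposal combines, the matching dissimilarity, the mode update, and cluster assignment, can be sketched as follows (plain crisp K-modes pieces; the genetic initialization and rough/fuzzy memberships of the proposed method are not shown):

```python
from collections import Counter

def dissimilarity(a, b):
    """Simple matching distance: number of attributes where two
    categorical records differ."""
    return sum(x != y for x, y in zip(a, b))

def update_mode(cluster):
    """New mode of a cluster: the most frequent category per attribute."""
    return tuple(Counter(col).most_common(1)[0][0]
                 for col in zip(*cluster))

def assign(records, modes):
    """Index of the nearest mode for each record."""
    return [min(range(len(modes)),
                key=lambda j: dissimilarity(r, modes[j]))
            for r in records]
```

Alternating `assign` and `update_mode` is the standard K-modes iteration; the paper's contribution replaces the random mode initialization with a genetic search and the hard assignments with rough/fuzzy memberships.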

Keywords: categorical data, fuzzy logic, genetic algorithm, K modes clustering, rough sets

Procedia PDF Downloads 254
24680 Forecasting Amman Stock Market Data Using a Hybrid Method

Authors: Ahmad Awajan, Sadam Al Wadi

Abstract:

In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the forecast values are aggregated together to get the forecast value of the stock market data. Empirical results showed that EMD-HW outperforms individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need to use any transformation method. Moreover, EMD-HW has relatively high accuracy compared with eight existing forecasting methods based on five forecast error measures.
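The decompose-forecast-aggregate pipeline can be sketched compactly. Here simple exponential smoothing stands in for the full Holt-Winters step, and the components are assumed to come from an EMD step that is not shown; the structure, forecasting each component and summing, is the point:

```python
def ses_forecast(series, alpha=0.5):
    """One-step-ahead simple exponential smoothing forecast
    (a stand-in for the Holt-Winters step of the EMD-HW pipeline)."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1.0 - alpha) * level
    return level

def emd_hw_forecast(components, alpha=0.5):
    """Forecast each component (IMFs plus residual) separately,
    then aggregate the component forecasts by summation."""
    return sum(ses_forecast(c, alpha) for c in components)
```

Because EMD is an additive decomposition, summing the per-component forecasts reconstructs a forecast of the original series.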

Keywords: Holt-Winters method, empirical mode decomposition, forecasting, time series

Procedia PDF Downloads 134
24679 Microbial Fuel Cells in Waste Water Treatment and Electricity Generation

Authors: Rajalaxmi N., Padma Bhat, Pooja Garag, Pooja N. M., V. S. Hombalimath

Abstract:

The microbial fuel cell (MFC) is an advance of science that aims at utilizing the oxidizing potential of bacteria for wastewater treatment and the production of bio-hydrogen and bio-electricity. A salt bridge is an economical alternative to the highly priced proton-exchange membrane in the construction of a microbial fuel cell. This paper studies the electricity generating capacity of E. coli and Clostridium sporogenes in microbial fuel cells (MFCs). Unlike most MFC research, this targets the long-term goals of renewable energy production and wastewater treatment. In the present study, the feasibility and potential of bioelectricity production from different wastewaters was observed. The different wastewaters received primary treatment, which was confirmed by COD tests showing a reduction of COD. We observe that the electricity production of the MFCs decreases almost linearly after 120 hrs. The sewage wastewater containing Clostridium sporogenes showed bioelectricity production up to 188 mV with COD removal of 60.52%. Sewage wastewater efficiently produces bioelectricity, and this is also helpful in reducing the wastewater pollution load.

Keywords: microbial fuel cell, bioelectricity, wastewater, salt bridge, COD

Procedia PDF Downloads 540
24678 Solvent Extraction in Ionic Liquids: Structuration and Aggregation Effects on Extraction Mechanisms

Authors: Sandrine Dourdain, Cesar Lopez, Tamir Sukhbaatar, Guilhem Arrachart, Stephane Pellet-Rostaing

Abstract:

A promising challenge in solvent extraction is to replace the conventional organic solvents with ionic liquids (ILs). Depending on the extraction system, these new solvents show better efficiency than the conventional ones. Although some assumptions based on ion exchange have been proposed in the literature, these properties are not predictable because the mechanisms involved are still poorly understood. It is well established that the mechanisms underlying solvent extraction processes are based not only on the molecular chelation of the extractant molecules but also on their ability to form supra-molecular aggregates due to their amphiphilic nature. It is therefore essential to evaluate how ILs affect the aggregation properties of the extractant molecules. Our aim is to evaluate the influence of IL structure and polarity on solvent extraction mechanisms by looking at the aggregation of the extractant molecules in ILs. We compare extractant systems that are well characterized in common solvents and show, thanks to SAXS and SANS measurements, that in the absence of IL ion exchange mechanisms, extraction properties are related to aggregation.

Keywords: solvent extraction in Ionic liquid, aggregation, Ionic liquids structure, SAXS, SANS

Procedia PDF Downloads 161
24677 Data Security and Privacy Challenges in Cloud Computing

Authors: Amir Rashid

Abstract:

Cloud computing frameworks empower organizations to cut expenses by outsourcing computation resources on demand. As of now, customers of cloud service providers have no means of verifying the privacy and ownership of their information and data. To address this issue, we propose the platform of a trusted cloud computing program (TCCP). TCCP enables Infrastructure as a Service (IaaS) providers, for example, Amazon EC2, to provide a closed-box execution environment that ensures confidential execution of guest virtual machines. Also, it permits clients to attest to the IaaS provider and determine whether the service is secure before they launch their virtual machines. This paper proposes a Trusted Cloud Computing Platform (TCCP) for guaranteeing the privacy and trustworthiness of computed data that are outsourced to IaaS service providers. The TCCP provides the abstraction of a closed-box execution environment for a client's VM, ensuring that no authorized administrator at the cloud provider can examine or tamper with its data. Furthermore, before launching the VM, the TCCP permits a client to reliably and remotely verify that the provider at the backend is running a trusted TCCP. This capability extends verification to the whole service and hence permits a client to confirm that its data operations run in a secure mode.

Keywords: cloud security, IaaS, cloud data privacy and integrity, hybrid cloud

Procedia PDF Downloads 301
24676 Optimization of Adsorption Performance of Lignocellulosic Waste Pretreatment and Chemical Modification

Authors: Bendjelloul Meriem, Elandaloussi El Hadj

Abstract:

In this work, we studied the effectiveness of a lignocellulosic waste (wood sawdust) for the removal of cadmium, Cd(II), from aqueous solution. The adsorbent material, SBO-CH2-CO2Na, was prepared by alkaline pretreatment of wood sawdust followed by chemical modification with the sodium salt of chloroacetic acid. Characterization of the as-prepared material by FTIR proved that the grafting of the acetate spacer actually took place on the lignocellulosic backbone, as shown by the appearance of the characteristic band of carboxylic groups in the IR spectrum. The removal of Cd2+ by the SBO-CH2-CO2Na material at the solid-liquid interface was studied through kinetics and sorption isotherms, and the effect of temperature and the thermodynamic parameters were evaluated. The last part of this work was dedicated to assessing the regenerability of the adsorbent material over three reuse cycles. The results indicate that the SBO-CH2-CO2Na matrix is highly effective in removing Cd(II), with an adsorption capacity of 222.22 mg/g, a value better than those of many low-cost adsorbents so far reported in the literature. The results found in the course of this study suggest that ion exchange is the most likely mechanism involved in the removal of cadmium ions.
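Maximum adsorption capacities like the 222.22 mg/g figure above are commonly obtained by fitting a sorption isotherm; the Langmuir model is a frequent choice, though the abstract does not name the model used. The sketch below evaluates the Langmuir equation with the abstract's q_max and an illustrative affinity constant:

```python
def langmuir_q(ce_mg_per_l, q_max=222.22, k_l=0.05):
    """Langmuir isotherm: adsorbed amount q (mg/g) at equilibrium
    concentration Ce (mg/L), q = q_max * K_L * Ce / (1 + K_L * Ce).
    q_max is taken from the abstract; K_L is an illustrative value."""
    return q_max * k_l * ce_mg_per_l / (1.0 + k_l * ce_mg_per_l)
```

The curve rises monotonically with concentration and saturates at q_max, the monolayer capacity that such studies report.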

Keywords: adsorption, cadmium, isotherms, lignocellulosic, regenerability

Procedia PDF Downloads 335
24675 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). In most systems, these data, in the form of clear text and images, are stored or processed in a relational format. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, designing and developing realistic mappings and deep connections as real-time objects offers unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions are predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it is voluminous and closely represents real-world data. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to retrieve nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate node embeddings and finally perform node classification on those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.
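The core idea behind GNN node embeddings can be shown with a single mean-aggregation message-passing step over a toy patient graph. This is only a dependency-free sketch: the paper's actual pipeline uses pyTigerGraph, PyTorch, and PyG, and the node features and edges below are invented.

```python
def propagate(features, edges):
    """One round of message passing: each node's new feature vector is
    the element-wise mean of its own vector and its neighbours' vectors."""
    neighbours = {n: [] for n in features}
    for a, b in edges:           # treat edges as undirected
        neighbours[a].append(b)
        neighbours[b].append(a)
    out = {}
    for node, vec in features.items():
        stack = [vec] + [features[m] for m in neighbours[node]]
        out[node] = [sum(col) / len(stack) for col in zip(*stack)]
    return out

# Toy heterogeneous EHR graph: a patient linked to a diagnosis and a symptom.
features = {"patient": [1.0, 0.0], "diagnosis": [0.0, 1.0], "symptom": [0.5, 0.5]}
edges = [("patient", "diagnosis"), ("patient", "symptom")]
print(propagate(features, edges))
```

Stacking several such layers (with learned weights and nonlinearities, as in PyG's convolution modules) yields the embeddings used for node classification.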

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 90
24674 Enabling and Ageing-Friendly Neighbourhoods: An Eye-Tracking Study of Multi-Sensory Experience of Senior Citizens in Singapore

Authors: Zdravko Trivic, Kelvin E. Y. Low, Darko Radovic, Raymond Lucas

Abstract:

Our understanding and experience of the built environment are primarily shaped by multi-sensory, emotional, and symbolic modes of exchange with spaces. The sensory and cognitive declines that come with ageing substantially affect the overall quality of life of elderly citizens and the ways they perceive and use the urban environment. Reduced mobility and an increased risk of falls, problems with spatial orientation and communication, lower confidence and independence, decreased willingness to go out, and social withdrawal are some of the major consequences of sensory decline, challenging almost all segments of seniors' everyday living. Contemporary urban environments, however, are often either sensorially overwhelming or depleting, resulting in physical, mental, and emotional stress. Moreover, the design and planning of housing neighbourhoods hardly go beyond passive 'do-no-harm' and universal design principles and the limited provision of often non-integrated eldercare and inter-generational facilities. This paper explores and discusses the largely neglected relationships between the 'hard' and 'soft' aspects of housing neighbourhoods and urban experience, focusing on seniors' perception and multi-sensory experience as vehicles for the design and planning of high-density housing neighbourhoods that are inclusive and empathetic yet build senior residents' physical and mental abilities at different stages of ageing. The paper outlines methods and key findings from research conducted in two high-density housing neighbourhoods in Singapore, which aimed to capture and evaluate the multi-sensorial qualities of the two neighbourhoods from the perspective of senior residents.
The research methods employed included: on-site recordings of 'objective' quantitative sensory data (air temperature and humidity, sound level, and luminance) using a multi-function environment meter; spatial mapping of the patterns of elderly users' transient and stationary activity; and socio-sensory perception surveys and sensorial journeys with local residents using eye-tracking glasses, supplemented by walk-along or post-walk interviews. The paper develops a multi-sensory framework to synthesize, cross-reference, and visualise the activity and spatio-sensory rhythms and patterns, and to distill the key issues pertinent to ageing-friendly and health-supportive neighbourhood design. Key findings show senior residents' concerns with walkability, safety and wayfinding, overall aesthetic qualities, cleanliness, smell, noise, and crowdedness in their neighbourhoods, as well as a lack of design support for all-day use in Singapore's tropical climate and for inter-generational social interaction. The (ongoing) analysis of the eye-tracking data reveals the spatial elements that senior residents look at and interact with most frequently, with the visual range often directed towards the ground. With its capacity to meaningfully combine quantitative and qualitative, measured and experienced sensory data, the multi-sensory framework proves fruitful for distilling key design opportunities from often ignored aspects of subjective and taken-for-granted interactions with the familiar outdoor environment. It offers an alternative way of leveraging the potential of housing neighbourhoods to take a more active role in enabling healthful living at all stages of ageing.

Keywords: ageing-friendly neighbourhoods, eye-tracking, high-density environment, multi-sensory approach, perception

Procedia PDF Downloads 160
24673 Vibrotactility: Exploring and Prototyping the Aesthetics and Technology of Vibrotactility

Authors: Elsa Kosmack Vaara, Cheryl Akner Koler, Yusuf Mulla, Parivash Ranjbar, Anneli Nöu

Abstract:

This transdisciplinary research weaves together an aesthetic perspective with a technical one to develop human sensitivity for vibration and construct flexible, wearable devices that are miniature, lightweight, and energy efficient. By applying methods from artistic research, performative arts, audio science, nanotechnology, and interaction design, we created working prototypes with actuators that were specifically positioned in various places on the body. The vibrotactile prototypes were tested by our research team, design students, and people with deafblindness and blindness, each with different intentions. Some tests supported connoisseurship for vibrotactile musical expression. Others aimed for precise navigational instructions. Our results and discussion concern problems in establishing standards for vibrotactility because standards minimize diversity and narrow possible ways vibration can be experienced. Human bodies vary significantly in ‘where’ vibrotactile signals can be sensed and ‘how’ they awaken emotions. We encourage others to embrace the dynamic exchange between new haptic technology and aesthetic complexity.

Keywords: aesthetics, vibration, music, interaction design, deafblindness

Procedia PDF Downloads 89
24672 Student Feedback and Its Impact on Fostering the Quality of Teaching at the Academia

Authors: S. Vanker, A. Aaver, A. Roio, L. Nuut

Abstract:

To distinguish effective from less effective or ineffective approaches to course instruction, we hold that faculty members need regular feedback from their students in order to be aware of how well their teaching styles have worked when instructing courses. It can be confirmed without the slightest hesitation that undergraduate students' motivation can be sustained by continually improving the quality of teaching and properly sequencing the academic courses, both in the curricula and in the timetables. At the Estonian Aviation Academy, four different forms of feedback are used: lecture monitoring, questionnaires for all students, subject monitoring through the study information system, and direct feedback received by the lecturer. Questionnaires for all students are administered once per study year, separately for first-year and senior students. The results are discussed in the academic departments together with student representatives and analyzed with the teaching staff, and improvements are suggested where needed. In addition, a monitoring system is planned in which a lecturer acts in both roles, as observer and as lecturer. This will foster a better exchange of experience and thereby help make the whole study process more interesting.

Keywords: learner motivation, feedback, student support, undergraduate education

Procedia PDF Downloads 322
24671 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector

Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar

Abstract:

Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency of distributed systems serve as an invaluable resource for the healthcare sector. However, as the use of these distributed systems increases, many issues arise; the main focus of this paper is security. Many security issues, particularly in information security, stem from distributed systems in the healthcare industry. People's data are especially sensitive in healthcare. If important information is leaked (e.g., identity card number, credit card number, address), a person's identity, financial status, and safety might be compromised. The responsible organization then loses a great deal of money compensating these people, and even more resources are expended trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system is proposed. In this framework, a blockchain network is used to store the encryption key of the patient's data. The data themselves are encrypted, and the resulting ciphertext is stored on a cloud storage platform. Due to the nature of blockchain technology, the data are tamper-proof, and read access is restricted to authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient's data. Some issues remain to be emphasized and tackled in future work, such as proposing a multi-user scheme, addressing authentication, and migrating the backend processes into the blockchain network.
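The tamper-evidence property the framework relies on can be sketched with a minimal hash-linked chain. Field names and values below are illustrative only, and a real deployment would involve a full blockchain network with consensus; here the chain stores only a key fingerprint, while the ciphertext itself would live in cloud storage.

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Append-only block: each block's hash covers the previous block's
    hash, so any later modification to the chain is detectable."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Check that every block correctly references its predecessor."""
    return all(cur["prev"] == prev["hash"]
               for prev, cur in zip(chain, chain[1:]))

genesis = make_block("0" * 64, {"patient": "p-001", "key_sha256": "ab12..."})
block2 = make_block(genesis["hash"], {"patient": "p-002", "key_sha256": "cd34..."})

print(verify([genesis, block2]))  # intact chain verifies
block2["prev"] = "f" * 64         # simulate tampering
print(verify([genesis, block2]))  # broken link is detected
```

The same linkage, replicated across many nodes, is what makes the stored encryption keys effectively immutable.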

Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability

Procedia PDF Downloads 189
24670 Design and Implementation of a Geodatabase and WebGIS

Authors: Sajid Ali, Dietrich Schröder

Abstract:

The merging of the Internet and the Web has created many disciplines, and Web GIS is one of the disciplines that deals with geospatial data effectively and proficiently. Web GIS technologies provide easy access to and sharing of geospatial data over the Internet. However, the European Caribbean Association (Europäische Karibische Gesellschaft, EKG) lacks a single platform for easy, multi-user access to its data to assist its members and the wider research community. The technique presented in this paper covers the design of a geodatabase using PostgreSQL/PostGIS as an object-relational database management system (ORDBMS) for competent dissemination and management of spatial data, and a Web GIS built with OpenGeo Suite for fast sharing and distribution of the data over the Internet. The characteristics required of the geodatabase design have been studied, and a specific methodology is given for designing the Web GIS. Finally, the Web-based geodatabase has been validated with two desktop GIS packages and a web map application, and it is shown that the contribution includes all the modules desired to expedite further research in the area, as per the requirements.
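A minimal sketch of the kind of PostGIS table such a geodatabase design implies is shown below. The table and column names are illustrative, not taken from the EKG schema; the generated DDL could be executed against a PostgreSQL/PostGIS instance with any client library.

```python
def spatial_table_ddl(table, srid=4326):
    """Compose DDL for a simple point-feature table with a spatial
    (GIST) index on its geometry column."""
    return (
        f"CREATE TABLE {table} (\n"
        f"    id SERIAL PRIMARY KEY,\n"
        f"    name TEXT,\n"
        f"    geom GEOMETRY(Point, {srid})\n"
        f");\n"
        f"CREATE INDEX {table}_geom_idx ON {table} USING GIST (geom);"
    )

print(spatial_table_ddl("research_sites"))
```

The GIST index is what makes spatial queries (bounding-box filters, nearest-neighbour searches) efficient enough to back a responsive web map.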

Keywords: desktop GIS software, European Caribbean Association, geodatabase, OpenGeo Suite, PostgreSQL/PostGIS, WebGIS, web map application

Procedia PDF Downloads 343
24669 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis

Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi

Abstract:

The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population levels, limitations in data and research still exist in LMICs, posing a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence on the availability of longitudinal datasets and the degree of consistency among the longitudinal studies conducted. Utilizing prompts proved instrumental in streamlining the analysis process, facilitating access, the crafting of code snippets, and the categorization and analysis of extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and 2 duplicates, and underscored the prevalence of longitudinal MH research in South Africa, with a focus on depression. The analysis shows that evaluating the adherence of data and metadata to the FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed; ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the findability and accessibility of data, and current efforts should also focus on creating integrated data resources and tools to improve the interoperability and reusability of MH data.
Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to the FAIR principles in MH research.
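The screening stage described above can be caricatured as a keyword filter over paper metadata. The records and term lists below are invented for illustration; the study itself used AI-assisted filtering rather than a fixed term list.

```python
# Hypothetical inclusion screen: a paper is a candidate if it mentions
# both a target disorder and a longitudinal study design.
TOPIC_TERMS = {"depression", "anxiety", "psychosis"}
DESIGN_TERMS = {"longitudinal", "cohort", "follow-up"}

def is_candidate(abstract: str) -> bool:
    words = set(abstract.lower().split())
    return bool(words & TOPIC_TERMS) and bool(words & DESIGN_TERMS)

papers = [
    "a longitudinal cohort study of depression in south africa",
    "cross-sectional survey of hypertension prevalence",
    "anxiety trajectories in a ten-year follow-up sample",
]
print([p for p in papers if is_candidate(p)])
```

A first pass like this narrows tens of thousands of records to a shortlist, after which quality assurance (as in the study) catches false positives and duplicates.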

Keywords: longitudinal mental health research, data sharing, FAIR data principles, Africa, landscape analysis

Procedia PDF Downloads 101
24668 Effect of Ti+ Irradiation on the Photoluminescence of TiO2 Nanofibers

Authors: L. Chetibi, D. Hamana, T. O. Busko, M. P. Kulish, S. Achour

Abstract:

TiO2 nanostructures have attracted much attention due to their optical, dielectric, and photocatalytic properties, as well as applications including optical coatings, photocatalysis, and photoelectrochemical solar cells. This work aims to prepare TiO2 nanofibers (NFs) on a titanium substrate (Ti) by in situ oxidation of Ti foils in a mixed solution of concentrated H2O2 and NaOH, followed by proton exchange and calcination. Scanning electron microscopy (SEM) revealed an obvious network of TiO2 nanofibers. The photoluminescence (PL) spectra of these nanostructures showed a broad, intense band in the visible range with a reduced near-band-edge emission. The PL bands in the visible region result mainly from surface oxygen vacancies and other defects. After irradiation with Ti+ ions (irradiation energy E = 140 keV, dose 10¹³ ions/cm²), the intensity of the PL spectrum decreased as a consequence of the radiation treatment. The Ti+ irradiation leads to a reduction of defects and the generation of non-radiative defects near the conduction band, as evidenced by the PL results. On the other hand, reducing the surface defects of TiO2 nanostructures may improve the photocatalytic and optoelectronic properties of this nanostructure.

Keywords: TiO2, nanofibers, photoluminescence, irradiation

Procedia PDF Downloads 248
24667 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem.

The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures.

The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows.

Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance.

Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
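The workload-placement idea mentioned above can be sketched as a toy cost model: for each workload, choose the provider with the lowest combined compute and egress cost. The provider names and prices below are invented; real multi-cloud pricing is far richer (tiers, reservations, inter-region egress), which is why the paper considers predictive cost modeling.

```python
# Hypothetical price sheet: provider -> (compute $/CPU-hour, egress $/GB)
PRICES = {
    "cloud_a": (0.040, 0.09),
    "cloud_b": (0.035, 0.12),
    "cloud_c": (0.050, 0.05),
}

def place(workload):
    """workload = (cpu_hours, egress_gb); return the cheapest provider
    under a simple linear cost model."""
    cpu, gb = workload
    return min(PRICES, key=lambda p: PRICES[p][0] * cpu + PRICES[p][1] * gb)

print(place((100, 10)))   # compute-heavy job favours cheap CPU
print(place((10, 500)))   # transfer-heavy job favours cheap egress
```

Even this linear model shows the central tension: the optimal placement shifts with the workload's compute-to-transfer ratio.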

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 72
24666 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study

Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos

Abstract:

This article presents a method to analyze the use of indoor spaces based on data analytics obtained from built-in digital devices. The study uses data generated by in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights into space occupancy, user behaviour, and comfort. These devices, originally installed to facilitate remote operations, report data through the internet, which the research uses to analyze the real-time human use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable way to analyze a building's interior spaces without installing external data collection systems such as dedicated sensors. The methodology is applied to a real coliving case study: a residential building of 3000 m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify the IoT devices and to assess, clean, and analyze the collected data within the analysis framework. The information is collected remotely through the devices' different platforms. The first step is to curate the data and understand what insights each device can provide according to the objectives of the study; this generates an analysis framework that can be scaled for future building assessment, even beyond the residential sector. The method adjusts the parameters to be analyzed to the dataset available in each building's IoT. The research demonstrates how human-centred data analytics can improve the future spatial design of indoor spaces.
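Occupancy derivation from device events of the kind the study collects can be sketched as follows. The event log is fabricated for illustration, and real smart-lock data would need the cleaning step the paper describes (missed exits, shared entries, clock skew).

```python
def occupancy_timeline(events):
    """events: list of (hour, delta) where delta is +1 for an entry and
    -1 for an exit; returns hour -> occupants present after that hour."""
    count, timeline = 0, {}
    for hour, delta in sorted(events):
        count += delta
        timeline[hour] = count
    return timeline

# Hypothetical day of smart-lock events for one shared space.
events = [(8, 1), (9, 1), (12, -1), (18, 1), (22, -1)]
print(occupancy_timeline(events))  # {8: 1, 9: 2, 12: 1, 18: 2, 22: 1}
```

Aggregating such timelines per room and per day is what turns raw device logs into the occupancy and behaviour insights used for spatial design.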

Keywords: in-place devices, IoT, human-centred data-analytics, spatial design

Procedia PDF Downloads 201
24665 A Unique Multi-Class Support Vector Machine Algorithm Using MapReduce

Authors: Aditi Viswanathan, Shree Ranjani, Aruna Govada

Abstract:

With data sizes constantly expanding, and with classical machine learning algorithms requiring ever larger amounts of computation time and storage space to analyze such data, the need to distribute computation and memory requirements among several computers has become apparent. Although substantial work has been done on distributed binary SVM algorithms and on multi-class SVM algorithms individually, the field of distributed multi-class SVMs remains largely unexplored. This research seeks to develop an algorithm that implements the Support Vector Machine over a multi-class data set and is efficient in a distributed environment. To this end, we recursively choose the best binary split of a set of classes using a greedy technique, much like the divide-and-conquer approach. Our algorithm has shown better computation time during the testing phase than the traditional sequential SVM methods (One vs. One, One vs. Rest) and outperforms them as the size of the data set grows. This approach also classifies the data with higher accuracy than the traditional multi-class algorithms.
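The recursive greedy class partitioning can be sketched as building a tree of binary splits. This is only an illustration of the control flow: the `separability` score below is a stand-in that favours balanced splits, whereas the actual algorithm would train a binary SVM (e.g., as a MapReduce job) for each candidate split and score it by validation accuracy.

```python
from itertools import combinations

def separability(group_a, group_b):
    # Stand-in score; a real implementation would train a distributed
    # binary SVM on group_a vs. group_b and return its accuracy here.
    # Balanced splits score higher under this placeholder.
    return -abs(len(group_a) - len(group_b))

def build_tree(classes):
    """Greedily split the class set in two, recursing on each side, to
    form a decision tree of binary classifiers."""
    if len(classes) == 1:
        return classes[0]
    best = None
    for r in range(1, len(classes) // 2 + 1):
        for left in combinations(classes, r):
            right = tuple(c for c in classes if c not in left)
            score = separability(left, right)
            if best is None or score > best[0]:
                best = (score, left, right)
    _, left, right = best
    return (build_tree(list(left)), build_tree(list(right)))

print(build_tree(["A", "B", "C", "D"]))  # a nested tuple of class splits
```

At prediction time, a sample descends this tree, so classifying among k classes takes about log2(k) binary evaluations instead of the O(k²) of One vs. One.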

Keywords: distributed algorithm, MapReduce, multi-class, support vector machine

Procedia PDF Downloads 405