Search results for: linked data
25397 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated by multiple data sources (DS) and transmitting them to the control units (CU) for further processing and analysis. These ever-increasing data demands not only require more and more transmission-channel capacity but also lead to resource over-provisioning to meet resilience requirements, and hence to unavoidable waste as data volumes fluctuate throughout the day. In addition, the high energy consumption (EC) and carbon emissions (CE) of these data transmissions pose serious environmental concerns. Therefore, to overcome the intensive EC and CE of massive data dissemination in smart cities, we propose an energy-efficient, carbon-reducing approach that uses the daily mobility of existing vehicles as an alternative communication channel for data dissemination. To illustrate the effectiveness and efficiency of our approach, we take Auckland, New Zealand as an example, assuming massive data generated by sources geographically scattered throughout the Auckland region and destined for control centres located in the city centre. The numerical results show that the proposed approach can deliver large volumes of data with up to 5 times lower delay than the conventional transmission network by exploiting existing daily vehicle mobility. Moreover, it offers about 30% lower EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
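The delay/energy comparison summarized above can be sketched as a back-of-the-envelope model. Every number below (throughputs, per-gigabyte energy costs, trip rates) is an illustrative assumption, not a figure from the Auckland case study:

```python
# Illustrative comparison of disseminating a data batch over a fixed network
# versus riding on existing daily vehicle trips. All inputs are assumed.

def network_transfer(volume_gb, throughput_gbph, energy_kwh_per_gb):
    """Delay (h) and energy (kWh) for conventional network transmission."""
    delay = volume_gb / throughput_gbph
    energy = volume_gb * energy_kwh_per_gb
    return delay, energy

def vehicle_transfer(volume_gb, trips_per_hour, capacity_gb_per_trip,
                     energy_kwh_per_gb):
    """Delay (h) and energy (kWh) when data rides on existing daily trips."""
    delay = volume_gb / (trips_per_hour * capacity_gb_per_trip)
    energy = volume_gb * energy_kwh_per_gb
    return delay, energy

net_delay, net_energy = network_transfer(1000, 50, 0.10)
veh_delay, veh_energy = vehicle_transfer(1000, 20, 5, 0.07)
print(f"network:  {net_delay:.1f} h, {net_energy:.0f} kWh")
print(f"vehicles: {veh_delay:.1f} h, {veh_energy:.0f} kWh")
```

With these toy inputs the vehicle channel halves the delay and cuts energy by 30%; the paper's measured Auckland inputs yield the up-to-5x delay figure.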
Procedia PDF Downloads 146
25396 Analysis of the Feasibility of Using a Solar Spiral Type Water Heater for Swimming Pool Application in Physiotherapy and Sports Centers
Authors: G. B. M. Carvalho, V. A. C. Vale, E. T. L. Cöuras Ford
Abstract:
A heated pool can be used at all hours of the day and in every season, which is especially valuable in physiotherapy and sports centers. However, the costs of installation, operation, and maintenance often hinder deployment. In addition, current global policies on the use of natural energy resources run counter to the most common means of heating swimming pools, such as gas (natural gas and liquefied petroleum gas), firewood or oil, and electricity (heat pumps and electrical resistance heaters). This work therefore focuses on solar water heaters for swimming pools in physiotherapy centers, analyzing their viability in view of medium- and long-term heating costs. Low-cost, lightweight materials that are commercially available and easy to manufacture were used. Parameters such as flow rate, temperature distribution, efficiency, and technical-economic feasibility were evaluated.
Keywords: heating, water, pool, solar energy, solar collectors, temperature, efficiency
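The efficiency parameter evaluated above is conventionally the useful heat picked up by the water divided by the solar power incident on the collector. A minimal sketch, with operating values that are illustrative assumptions rather than the experiment's measurements:

```python
# Instantaneous collector efficiency: useful heat gain over incident power.
# Flow, temperatures, irradiance, and area below are invented for illustration.

CP_WATER = 4186.0   # specific heat of water, J/(kg*K)

def collector_efficiency(flow_kg_s, t_in, t_out, irradiance_w_m2, area_m2):
    """Useful heat carried away by the water divided by incident solar power."""
    useful_w = flow_kg_s * CP_WATER * (t_out - t_in)
    incident_w = irradiance_w_m2 * area_m2
    return useful_w / incident_w

eta = collector_efficiency(flow_kg_s=0.02, t_in=26.0, t_out=31.0,
                           irradiance_w_m2=800.0, area_m2=1.5)
print(f"efficiency: {eta:.1%}")
```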
Procedia PDF Downloads 169
25395 Directivity in the Dramatherapeutic Process for People with Addictive Behaviour
Authors: Jakub Vávra, Milan Valenta, Petr Kosek
Abstract:
This article presents a perspective on conducting the dramatherapy process with persons with addictive behaviours, with regard to the directivity of the process. Although dramatherapy, as one of the creative arts approaches, is rather non-directive in nature, depending on the clientele there may be a need to structure the process more and, depending on clients' needs, to guide it more directively. The specificity of working with people with addictive behaviours is discussed through the prism of the dramatherapeutic perspective, which includes both a psychotherapeutic component and a component touching on expression and art, the latter rather non-directive in nature. This theme has repeatedly emerged in practice with clients, and dramatherapists themselves have sought ways of coping with clients' demands and needs for structure and guidance within the dramatherapy process. Some outcomes of supervision work also guided the research. Based on this insight, two research questions were formulated. The first asks: in what ways does directivity manifest itself in the dramatherapy process? The second complements the first and asks: to which phenomena is directivity in dramatherapy linked? In relation to these questions, data were collected using focus groups and field notes. A qualitative approach combining content analysis and relational analysis was chosen as the methodology, with an inductive coding scheme: open coding, axial coding, pattern matching, member checking, and creating a coding scheme. The partial results presented here show recurrent schemes related to directivity in dramatherapy.
Directive leadership emerges as an important element in connection with safety for the client group, with the clients' commission and the department of the facility, and, last but not least, with the personality of the dramatherapist. Careful analysis and pattern-seeking in the results reveal connections that cannot yet be interpreted at this stage but that already provide clues to understanding the topic and open further avenues for research in this area.
Keywords: dramatherapy, directivity, personal approach, aims of dramatherapy process, safety
Procedia PDF Downloads 71
25394 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve a variety of consumer problems, from home automation to aiding autonomous driving, through the deployment of multiple devices. In an autonomous-vehicle environment, for example, sensors along the road monitor weather and road conditions and interact with each other to help the vehicle reach its destination safely and on time. IoT systems are predominantly dependent on the cloud for storage and computing, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they may also hold sensitive information required by certain applications. Data management in fog nodes is strenuous because fog networks are dynamic in terms of availability and hardware capability; it becomes even more challenging when nodes are short-lived, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary, since the conventional static way of managing data does not work in fog networks. The proposed solution is a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. The proposed model implements stewardship of the data stored in a fog node by applying reinforcement learning, so that access to the data is determined dynamically based on the requests.
Keywords: IoT, fog networks, data stewardship, dynamic access policy
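The two ideas combined above (sensitivity levels plus learned access decisions) can be sketched in a few lines. The sensitivity labels, clearance scale, and reward scheme below are assumptions for illustration; the paper's protocol and reinforcement-learning formulation are more involved:

```python
# Toy sketch: data items carry a sensitivity level, requesters a clearance,
# and a small Q-table learns whether past grants were rewarded or penalized.
# Labels, levels, and rewards are invented for illustration.

SENSITIVITY = {"telemetry": 0, "location": 1, "identity": 2}

def static_check(item, clearance):
    """Baseline static rule: clearance must cover the item's sensitivity."""
    return clearance >= SENSITIVITY[item]

class AccessPolicy:
    """Minimal Q-learning over (sensitivity, clearance) -> grant/deny."""

    def __init__(self, alpha=0.5):
        self.q = {}          # (level, clearance, action) -> value
        self.alpha = alpha   # learning rate

    def decide(self, item, clearance):
        level = SENSITIVITY[item]
        grant = self.q.get((level, clearance, True), 0.0)
        deny = self.q.get((level, clearance, False), 0.0)
        return grant >= deny

    def update(self, item, clearance, action, reward):
        key = (SENSITIVITY[item], clearance, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward - old)

policy = AccessPolicy()
# Penalize granting identity data to a low-clearance requester ...
policy.update("identity", 0, True, -1.0)
# ... and reward granting non-sensitive telemetry to anyone.
policy.update("telemetry", 0, True, +1.0)
print(policy.decide("identity", 0), policy.decide("telemetry", 0))
```

Unlike `static_check`, the learned policy adapts as nodes join and leave and as request patterns change, which is the motivation the abstract gives for a dynamic access policy.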
Procedia PDF Downloads 62
25393 An Automated Approach to Consolidate Galileo System Availability
Authors: Marie Bieber, Fabrice Cosson, Olivier Schmitt
Abstract:
Europe's Global Navigation Satellite System, Galileo, provides worldwide positioning and navigation services. The satellites in space are only one part of the Galileo system: an extensive ground infrastructure is essential to oversee the satellites and ensure accurate navigation signals. High reliability and availability of the entire Galileo system are crucial to continuously provide positioning information of high quality to users. Outages are tracked, and operational availability is regularly assessed. A highly flexible and adaptive tool has been developed to automate the Galileo system availability analysis. Not only does it enable quick availability consolidation, but it also provides first steps towards improving the data quality of the maintenance tickets used for the analysis. This includes data import and data preparation, with a focus on processing the strings used for classification and identifying faulty records. Furthermore, the tool can cope with small amounts of data, a major constraint when the aim is to provide accurate statistics.
Keywords: availability, data quality, system performance, Galileo, aerospace
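The consolidation step described above (clean free-text classification strings, then sum outage durations into an availability figure) can be sketched as follows. The ticket fields and class labels are assumptions; the actual tool works on the Galileo operations ticket format:

```python
from datetime import datetime

# Sketch: normalize ticket classification strings, then consolidate outage
# durations into an availability fraction per class. Fields are invented.

def clean_label(raw):
    """Normalize free-text classification strings before grouping."""
    return raw.strip().lower().replace("_", " ")

def availability(tickets, period_hours):
    """(availability fraction, downtime hours per outage class)."""
    downtime = {}
    for t in tickets:
        label = clean_label(t["class"])
        hours = (t["end"] - t["start"]).total_seconds() / 3600.0
        downtime[label] = downtime.get(label, 0.0) + hours
    return 1.0 - sum(downtime.values()) / period_hours, downtime

tickets = [
    {"class": " Ground_Segment ", "start": datetime(2023, 1, 2, 0),
     "end": datetime(2023, 1, 2, 6)},
    {"class": "ground segment", "start": datetime(2023, 1, 10, 0),
     "end": datetime(2023, 1, 10, 12)},
]
avail, by_class = availability(tickets, period_hours=31 * 24)
print(f"availability: {avail:.4f}")
```

Note how the two differently-spelled labels merge into one class after cleaning, which is exactly the data-quality problem the tool's string processing targets.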
Procedia PDF Downloads 170
25392 Role of Physical Appearance in Associating People with a Group Identity
Authors: Gurleen Kaur
Abstract:
Being tall or short, fat or thin, black or white, and so on is an inevitable part of how people perceive you. This association of people with their external appearance carves out an identity for them. This paper examines why people assign a person to a particular category on the basis of physical appearance, delving into the reasons for this grouping: subconscious grouping, personal gain, ease of relating to the group, and social acceptance. The development of certain unique physical features also leads a person to relate to a collective identity. The paper thus supports the claim that physical appearance plays a crucial role in the categorization of people into groups and hence in forming a group identity for them. It is divided into three parts. The first discusses what physical appearance is and how it is linked to our daily lives. The second considers why it works, i.e., why external appearance is important in the formation of identity. The last part discusses the factors that lead to the categorization of identity on the basis of physical appearance.
Keywords: group identity, physical appearance, subconscious grouping, collective identity
Procedia PDF Downloads 420
25391 Exposure to Tactile Cues Does Not Influence Spatial Navigation in 129 S1/SvLm Mice
Authors: Rubaiyea Uddin, Rebecca Taylor, Emily Levesque
Abstract:
The hippocampus, located in the limbic system, is best known for its role in memory and spatial navigation (as cited in Brain Reward and Pathways), and it plays an especially important role in episodic and declarative memory. The hippocampus has also recently been linked to dopamine, the reward pathway's primary neurotransmitter. Since research has found that dopamine also contributes to memory consolidation and hippocampal plasticity, this neurotransmitter may be partly responsible for the hippocampus's role in memory formation. In this experiment, we tested the effect of tactile cues on spatial navigation in eight mice using a radial arm maze with one designated reward arm containing sucrose. The presence or absence of bedding served as the tactile cue, and we examined whether memory of that cue would enhance the mice's memory of having received the reward in that arm. The results showed no significant effect of tactile cues on spatial navigation in our 129 mice. Tactile cues therefore do not appear to influence spatial navigation in this strain.
Keywords: mice, radial arm maze, memory, spatial navigation, tactile cues, hippocampus, reward, sensory skills, Alzheimer's, neuro-degenerative diseases
Procedia PDF Downloads 691
25390 Use of In-line Data Analytics and Empirical Model for Early Fault Detection
Authors: Hyun-Woo Cho
Abstract:
Automatic process-monitoring schemes are designed to give early warnings of unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in industrial processes, including multivariate statistical methods, representations in reduced spaces, and kernel-based nonlinear techniques. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to obtain, fault data occur infrequently and are thus difficult to collect. In this work, noise-filtering steps are added in order to enhance monitoring performance by eliminating irrelevant information from the data. The performance of the monitoring scheme was demonstrated using batch process data. The results showed that monitoring performance improved significantly in terms of the detection success rate for process faults.
Keywords: batch process, monitoring, measurement, kernel method
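The two ingredients named above (a noise-filtering step, then a kernel-based statistic computed against normal operating data) can be sketched minimally. The Gaussian kernel and the moving-average filter are standard choices assumed here, not the paper's exact formulation:

```python
import math

# Sketch: filter a noisy normal-operation series, then score new samples by
# their kernel similarity to it. A score near 0 means "typical", near 1
# means "far from normal operation". Parameters are illustrative.

def moving_average(series, window=3):
    """Simple noise filter: centered moving average with edge shrink."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window // 2)
        hi = min(len(series), i + window // 2 + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def kernel_statistic(sample, normal_data, gamma=0.5):
    """1 minus the mean Gaussian-kernel similarity to the normal set."""
    sims = [math.exp(-gamma * (sample - x) ** 2) for x in normal_data]
    return 1.0 - sum(sims) / len(sims)

normal = moving_average([1.0, 1.2, 0.9, 1.1, 1.0, 1.05])
print(kernel_statistic(1.0, normal), kernel_statistic(4.0, normal))
```

A monitoring threshold on this statistic, calibrated from normal data only, gives the early-warning rule; the out-of-range sample scores far higher than the typical one.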
Procedia PDF Downloads 324
25389 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method
Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez
Abstract:
Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. 
These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics
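The lumped-model-plus-inverse-method idea above can be illustrated on a drastically reduced example: two coupled energy balances closed by one heat-transfer coefficient, calibrated by a least-squares search against data. All values, and the reduction to a single closure parameter, are illustrative assumptions, not the paper's formulation:

```python
# Sketch: two control volumes (ullage gas, liquid) exchange heat across the
# interface at rate h*(Tg - Tl). The closure parameter h is recovered by an
# inverse (least-squares) search against "measured" ullage temperatures.
# Heat capacities, initial temperatures, and candidate values are invented.

def simulate(h, t_gas0=30.0, t_liq0=20.0, steps=100, dt=1.0,
             c_gas=1.0, c_liq=10.0):
    """Explicit-Euler integration of the two coupled energy balances."""
    tg, tl = t_gas0, t_liq0
    history = []
    for _ in range(steps):
        q = h * (tg - tl)          # interface heat flow, gas -> liquid
        tg -= q / c_gas * dt
        tl += q / c_liq * dt
        history.append((tg, tl))
    return history

def calibrate(measured_tg, candidates):
    """Inverse step: pick the h that best reproduces the measured ullage T."""
    def cost(h):
        sim = [tg for tg, _ in simulate(h)]
        return sum((a - b) ** 2 for a, b in zip(sim, measured_tg))
    return min(candidates, key=cost)

truth = [tg for tg, _ in simulate(0.05)]          # synthetic "measurements"
best = calibrate(truth, candidates=[0.01, 0.03, 0.05, 0.08])
print(best)
```

The grid search stands in for the nonlinear optimization step of the paper; with real data one would fit empirical relations per operating regime rather than a single constant.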
Procedia PDF Downloads 96
25388 Rollet vs Rocket: A New in-Space Propulsion Concept
Authors: Arthur Baraov
Abstract:
Nearly all rocket and spacecraft propulsion concepts in existence today can be linked, one way or another, to one of two ancient warfare devices: the gun and the sling. Chemical, thermoelectric, ion, nuclear thermal, and electromagnetic rocket engines all fall into the first group, which, for obvious reasons, can be categorized as "hot" space propulsion concepts. The space elevator, orbital tower, rolling satellite, orbital skyhook, tether propulsion, and gravitational assist are examples of the second category, which lends itself to the title of "cold" space propulsion concepts. The "hot" space propulsion concepts skyrocketed, literally and figuratively, from the naïve ideas of Jules Verne to the manned missions to the Moon. On the other hand, with the notable exception of gravitational assist, hardly any of the "cold" space propulsion concepts has made progress in terms of practical application. Why is that? This article aims to show that the right answer to this question has a potential comparable, in its implications and practical consequences, to that of the transition from Jules Verne's stillborn and impractical conceptions of space flight to the cogent and highly fertile ideas of Konstantin Tsiolkovsky and Yuri Kondratyuk.
Keywords: propulsion, rocket, rollet, spacecraft
Procedia PDF Downloads 540
25387 The Impact of the General Data Protection Regulation on Human Resources Management in Schools
Authors: Alexandra Aslanidou
Abstract:
The General Data Protection Regulation (GDPR), concerning the protection of natural persons within the European Union with regard to the processing of personal data and the free movement of such data, became applicable in the European Union (EU) on 25 May 2018 and transformed the way personal data were treated under the Data Protection Directive (DPD) regime, generating sweeping organizational changes in the public sector and business alike. A social practice whose day-to-day operations are considerably influenced is Human Resource (HR) management, for which the importance of the GDPR cannot be underestimated, because HR processes personal data coming in all shapes and sizes from many different systems and sources. The proper functioning of an HR department is decisive in human-centered, service-oriented environments such as education: HR operations in schools, conducted effectively, determine the quality of the services provided and consequently have a considerable impact on the success of the educational system. The purpose of this paper is to analyze the decisive role that the GDPR plays in HR departments operating in schools. In order to practically evaluate the aftermath of the Regulation during the first months of its applicability, a comparative use-case analysis was carried out in five highly dynamic schools across three EU Member States.
Keywords: general data protection regulation, human resource management, educational system
Procedia PDF Downloads 102
25386 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, such as location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving, generating a tremendous amount of real-time spatial data every day. The growth of data volume seems to outpace the advance of our computing infrastructure. In real-time spatial Big Data, for instance, users expect to receive the results of each query within a short time period regardless of the system load; but with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase system performance and simplify data management, but they remain insufficient for real-time spatial Big Data, as they cannot deal efficiently with real-time and stream queries. In this paper, we therefore propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning: we first find the optimal attribute sequence using the Matching algorithm, and then propose a new cost model for database partitioning that keeps the data volume of each partition balanced and provides parallel-execution guarantees for the most frequent queries. VPA-RTSBD obtains a real-time partitioning scheme and deals with stream data. It improves query-execution performance by maximizing the degree of parallel execution, which in turn improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms.
Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
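The vertical-partitioning ingredient named in the keywords (ordering attributes by the Hamming distance between their query-usage vectors, so that co-accessed attributes land in the same fragment) can be sketched minimally. The query matrix, the greedy chain, and the two-way split are illustrative, not VPA-RTSBD itself:

```python
# Sketch: attributes are compared by the Hamming distance between their
# query-usage vectors, ordered greedily so similarly-used attributes end up
# adjacent, then split into fragments. The usage matrix is invented.

def hamming(u, v):
    """Number of queries that use exactly one of the two attributes."""
    return sum(a != b for a, b in zip(u, v))

def order_attributes(usage):
    """Greedy chain: start anywhere, repeatedly append the nearest attribute."""
    names = list(usage)
    order = [names.pop(0)]
    while names:
        nxt = min(names, key=lambda n: hamming(usage[order[-1]], usage[n]))
        names.remove(nxt)
        order.append(nxt)
    return order

# Rows: attributes; columns: 1 if the query uses the attribute.
usage = {
    "lat":   [1, 1, 0, 0],
    "lon":   [1, 1, 0, 0],
    "speed": [0, 0, 1, 1],
    "owner": [0, 0, 1, 0],
}
order = order_attributes(usage)
half = len(order) // 2
print(order[:half], order[half:])   # co-accessed attributes stay together
```

Here `lat` and `lon`, always queried together, end up in the same fragment, so the two frequent query groups can run against different fragments in parallel.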
Procedia PDF Downloads 159
25385 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment
Authors: P. Venu, Joeju M. Issac
Abstract:
Quality Function Deployment (QFD) is a systematic technique that creates a platform where customer responses can be positively converted to design attributes. The accuracy of a QFD process heavily depends on the data it handles, which is captured from customers or QFD team members. Customized computer programs that perform Quality Function Deployment within a stipulated time have been used by companies across the globe. These programs rely heavily on the storage and retrieval of data in a common database, and this database must act as a reliable source with minimal missing or erroneous values in order to perform meaningful prioritization. This paper introduces a missing/error data handler module that uses a genetic algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between deployment based on the proposed data-handler module and manual deployment.
Keywords: hybrid data handler, QFD, prioritization, module-based deployment
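One piece of the data-handler idea can be sketched with triangular fuzzy numbers: a missing customer rating is imputed from the fuzzy mean of the remaining responses and then defuzzified for prioritization. The ratings below are invented, and the paper's genetic-algorithm search is not shown:

```python
# Sketch: customer ratings as triangular fuzzy numbers (low, mode, high);
# a missing entry is imputed from the fuzzy mean of the known responses and
# defuzzified by the centroid. Values are illustrative.

def fuzzy_mean(tfns):
    """Component-wise mean of triangular fuzzy numbers."""
    n = len(tfns)
    return tuple(sum(t[i] for t in tfns) / n for i in range(3))

def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

responses = [(2, 3, 4), (3, 4, 5), None, (4, 5, 6)]   # None = missing
known = [r for r in responses if r is not None]
imputed = fuzzy_mean(known)
filled = [imputed if r is None else r for r in responses]
print(imputed, centroid(imputed))
```

In the full module, a genetic algorithm would search over candidate fills to minimize an error criterion instead of taking the plain fuzzy mean.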
Procedia PDF Downloads 298
25384 Predicting Groundwater Areas Using Data Mining Techniques: Groundwater in Jordan as Case Study
Authors: Faisal Aburub, Wael Hadi
Abstract:
Data mining is the process of extracting useful or hidden information from a large database. The extracted information can be used to discover relationships among features, where data objects are grouped according to logical relationships, or to assign unseen objects to one of the predefined groups. In this paper, we investigate four well-known data mining algorithms for predicting groundwater areas in Jordan: Support Vector Machines (SVMs), Naïve Bayes (NB), K-Nearest Neighbor (kNN), and Classification Based on Association Rule (CBA). The experimental results indicate that the SVMs algorithm outperformed the other algorithms in terms of classification accuracy, precision, and the F1 evaluation measure on datasets of groundwater areas collected from the Jordanian Ministry of Water and Irrigation.
Keywords: classification, data mining, evaluation measures, groundwater
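The evaluation side of such a comparison can be sketched with one of the four classifiers (kNN, the simplest) and the three measures named in the abstract. The toy features and labels below are invented; the study uses Ministry of Water and Irrigation datasets:

```python
from collections import Counter

# Sketch: a tiny k-nearest-neighbour classifier scored with the accuracy,
# precision, and F1 measures used in the study. Data are invented.

def knn_predict(train, x, k=3):
    """Majority vote among the k closest training points (squared Euclidean)."""
    ranked = sorted(train, key=lambda p: sum((a - b) ** 2
                                             for a, b in zip(p[0], x)))
    return Counter(label for _, label in ranked[:k]).most_common(1)[0][0]

def scores(y_true, y_pred, positive=1):
    """(accuracy, precision, F1) for the given positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, f1

train = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
tests = [((0.15, 0.15), 0), ((0.85, 0.85), 1)]
preds = [knn_predict(train, x) for x, _ in tests]
print(scores([y for _, y in tests], preds))
```

Running each of the four algorithms through the same `scores` function is what makes the head-to-head comparison in the paper well-defined.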
Procedia PDF Downloads 282
25383 Jurisdictional Issues between Competition Law and Data Protection Law in Protection of Privacy of Online Consumers
Authors: Pankhudi Khandelwal
Abstract:
The revenue models of digital giants such as Facebook and Google rely on targeted advertising, which requires huge amounts of consumer data. While data protection law deals with the protection of personal data, this data is acquired by companies on the basis of consent, performance of a contract, or legitimate interests. This paper analyses the role that competition law can play in closing these loopholes for the protection of the data and privacy of online consumers. Digital markets have certain distinctive features, such as network effects and feedback loops, which give incumbents a first-mover advantage. This creates a winner-takes-all situation, erecting entry barriers and concentrating the market. It has also been seen that this dominant position is then used by undertakings to leverage into other markets. This can harm consumers in the form of less privacy, less choice, and stifled innovation, as seen in the Facebook Cambridge Analytica, Google Shopping, and Google Android cases. The article therefore aims to provide a legal framework wherein data protection law and competition law come together to regulate digital markets in a balanced way. The issue has become more relevant in light of the Facebook decision by the German competition authority, which held that Facebook had abused its dominant position by not complying with data protection rules, constituting an exploitative practice. The paper looks into the jurisdictional boundaries that data protection and competition authorities can work from and suggests ex ante regulation through data protection law and ex post regulation through competition law. It further suggests a change in the consumer-welfare standard whereby harm to privacy should be considered an indicator of low quality.
Keywords: data protection, dominance, ex ante regulation, ex post regulation
Procedia PDF Downloads 186
25382 Application of Knowledge Discovery in Database Techniques in Cost Overruns of Construction Projects
Authors: Mai Ghazal, Ahmed Hammad
Abstract:
Cost overruns in construction projects are a worldwide challenge, since cost performance is one of the main measures of success along with schedule performance. To address this problem, studies have investigated the factors behind cost overruns, and projects' historical data have been analyzed to extract new and useful knowledge. This research studies and analyzes the effect of some factors causing cost overruns using historical data from completed construction projects, and then uses these factors to estimate the probability of a cost overrun occurring and to predict its percentage for future projects. First, an intensive literature review was conducted on the factors that cause cost overruns in construction projects, followed by a review of previous research on data mining approaches to cost overruns. Second, a data warehouse was designed that organizations can use to store their future data in a well-organized way so that it can be easily analyzed later. Third, twelve quantitative factors whose data are frequently available in construction projects were selected as the analyzed factors and suggested predictors for the proposed model.
Keywords: construction management, construction projects, cost overrun, cost performance, data mining, data warehousing, knowledge discovery, knowledge management
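The prediction step described above can be sketched with a logistic model trained on a few historical projects. The two mocked factors (planned duration and design-change count, scaled) and all the data are assumptions standing in for the paper's twelve quantitative factors:

```python
import math

# Sketch: gradient-descent logistic regression mapping project factors to
# the probability of a cost overrun. Factors and data are invented.

def sigmoid(z):
    if z < -60.0:            # guard against overflow for extreme inputs
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Plain stochastic gradient descent with a bias term w[0]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, x):
    """Overrun probability for a future project with factor vector x."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# (duration_months / 10, design_changes / 10) -> 1 if the project overran
X = [(0.6, 0.8), (1.2, 1.5), (0.4, 0.2), (2.0, 1.8), (0.5, 0.3)]
y = [0, 1, 0, 1, 0]
w = train(X, y)
print(round(predict(w, (1.8, 1.6)), 2))   # large project: high probability
```

With the proposed data warehouse in place, the same fit would run over all twelve factors and many more historical projects.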
Procedia PDF Downloads 375
25381 Sampling Error and Its Implication for Capture Fisheries Management in Ghana
Authors: Temiloluwa J. Akinyemi, Denis W. Aheto, Wisdom Akpalu
Abstract:
Capture fisheries in developing countries provide significant animal protein and directly support the livelihoods of many communities. However, misperception of biophysical dynamics, owing to a lack of adequate scientific data, has contributed to suboptimal management of marine capture fisheries, because yield and catch potentials are sensitive to the quality of catch and effort data. Yet studies on fisheries data collection practices in developing countries are hard to find. This study investigates the data collection methods used by fisheries technical officers within the four fishing regions of Ghana. We found that the officers employed data collection and sampling procedures that were not consistent with the technical guidelines curated by the FAO: for example, 50 instead of 166 landing sites were sampled, and 290 instead of 372 canoes. We argue that such sampling errors could result in the over-capitalization of capture fish stocks and significant losses in resource rents.
Keywords: fisheries data quality, fisheries management, Ghana, sustainable fisheries
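The coverage gap reported above is easy to quantify. The guideline and sampled counts come from the abstract; the helper itself is ours:

```python
# Sampling coverage as a fraction of the guideline sample size, using the
# counts reported in the abstract (50 of 166 sites, 290 of 372 canoes).

def coverage(sampled, required):
    return sampled / required

sites = coverage(50, 166)
canoes = coverage(290, 372)
print(f"landing sites: {sites:.0%}, canoes: {canoes:.0%}")
```

Under-covering landing sites far more severely than canoes skews the catch-and-effort expansion, which is the mechanism behind the misperceived stock dynamics the study warns about.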
Procedia PDF Downloads 97
25380 Improvement of Data Transfer over Simple Object Access Protocol (SOAP)
Authors: Khaled Ahmed Kadouh, Kamal Ali Albashiri
Abstract:
This paper presents an algorithm designed to improve the transfer of data over the Simple Object Access Protocol (SOAP). The aim of this work is to establish whether using SOAP to exchange XML messages has any added advantages. The results showed that XML messages without SOAP take longer and consume more memory, especially with binary data.
Keywords: JAX-WS, SMTP, SOAP, web service, XML
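The message format under study can be illustrated with the standard library: a payload is wrapped in a SOAP 1.1 envelope and parsed back out. The `order` payload and its fields are invented for illustration; the namespace URI is the standard SOAP 1.1 one:

```python
import xml.etree.ElementTree as ET

# Sketch: wrap an XML fragment in a SOAP 1.1 Envelope/Body, then recover it
# with a namespace-qualified parse. The payload is invented.

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_wrap(payload_xml):
    """Wrap an XML fragment in a SOAP 1.1 Envelope/Body."""
    return ('<soap:Envelope xmlns:soap="%s"><soap:Body>%s'
            "</soap:Body></soap:Envelope>") % (SOAP_NS, payload_xml)

def soap_unwrap(message):
    """Return the first element inside the Body of a SOAP message."""
    root = ET.fromstring(message)
    body = root.find("{%s}Body" % SOAP_NS)
    return body[0]

msg = soap_wrap("<order><id>42</id><qty>3</qty></order>")
payload = soap_unwrap(msg)
print(payload.tag, payload.find("id").text)
```

A benchmark like the paper's would time round-trips of such messages with and without the envelope, which is where the binary-data handling (e.g. attachment mechanisms versus inline encoding) starts to matter.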
Procedia PDF Downloads 497
25379 Enhancing Healthcare Data Protection and Security
Authors: Joseph Udofia, Isaac Olufadewa
Abstract:
Every day, the volume of Electronic Health Record data keeps increasing as new patients visit health practitioners and returning patients fulfil their appointments. As these data grow, so does their susceptibility to cyber-attacks from criminals waiting to exploit them. In the US, the damages from cyberattacks were estimated at $8 billion (2018), $11.5 billion (2019), and $20 billion (2021). These attacks usually involve the exposure of PII; health data is considered PII, and its exposure carries significant impact. To this end, an enhancement of health policy and standards in relation to data security, especially among patients and their clinical providers, is critical to ensure ethical practices, confidentiality, and trust in the healthcare system. As clinical accelerators and applications that contain user data are used, it is expedient to review and revamp policies like the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Fast Healthcare Interoperability Resources (FHIR) standard, all aimed at ensuring data protection and security in healthcare. FHIR caters to healthcare data interoperability, as data is shared across different systems, from customers to health insurers and care providers. The astronomical cost of implementation has deterred players in the space from ensuring compliance, leading to susceptibility to data exfiltration and data loss. Though HIPAA hones in on the security of protected health information (PHI) and PCI DSS on the security of payment card data, they intersect in the shared goal of protecting sensitive information in line with industry standards.
With advancements in technology and the emergence of new tools, it is necessary to revamp these policies to address their complexity and ambiguity, the cost barrier, and the ever-increasing threats in cyberspace. Healthcare data in the wrong hands is a recipe for disaster, and we must enhance its protection and security to protect the health information of current and future generations.
Keywords: cloud security, healthcare, cybersecurity, policy and standard
Procedia PDF Downloads 9425378 Channels Splitting Strategy for Optical Local Area Networks of Passive Star Topology
Authors: Peristera Baziana
Abstract:
In this paper, we present a network configuration for WDM LANs of passive star topology that assumes the set of data WDM channels is split into two separate sets of channels with different access rights over them. In particular, a synchronous-transmission WDMA access algorithm is adopted in order to increase the probability of successful transmission over the data channels and, consequently, to reduce the probability of data packet transmission cancellation, so that data channel collisions are avoided. To this end, a pre-transmission control access scheme is followed over a separate control channel. An analytical Markovian model is studied, and the average throughput is mathematically derived. The performance is studied for several numbers of data channels and various values of control phase duration.Keywords: access algorithm, channels division, collisions avoidance, wavelength division multiplexing
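The average throughput in the abstract comes from the paper's Markovian model, which is not reproduced here. As a hedged, much simpler illustration of the underlying idea (a data channel carries a packet successfully only when exactly one station transmits on it in a slot), the sketch below computes per-slot success probability for a generic slotted random-access system; the parameters `n_stations`, `p_tx`, and `v_channels` are illustrative assumptions, not the paper's model:

```python
from math import comb

def per_channel_success(n_stations, p_tx, v_channels):
    """Probability that exactly one station transmits on a given data
    channel in a slot, when each of n_stations transmits with probability
    p_tx and picks one of v_channels uniformly at random."""
    q = p_tx / v_channels  # probability a given station hits this channel
    return comb(n_stations, 1) * q * (1 - q) ** (n_stations - 1)

def expected_throughput(n_stations, p_tx, v_channels):
    """Expected number of collision-free packets per slot, summed over
    all data channels."""
    return v_channels * per_channel_success(n_stations, p_tx, v_channels)
```

The pre-transmission control phase in the paper exists precisely to push this success probability higher by cancelling colliding requests before they reach a data channel.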
Procedia PDF Downloads 29925377 Comparative study of the technical efficiency of the cotton farms in the towns of Banikoara and Savalou
Authors: Boukari Abdou Wakilou
Abstract:
Benin is one of West Africa's major cotton-producing countries. Cotton is the country's main source of foreign currency and employment, but it is also one of the sources of soil degradation. The search for good agricultural practices is therefore a constant preoccupation. The aim of this study is to measure the technical efficiency of cotton growers by comparing those who constantly grow cotton on the same land with those who practice crop rotation. The one-step estimation approach of the stochastic production frontier, including determinants of technical inefficiency, was applied to a stratified random sample of 261 cotton producers. Overall, the growers had a high average technical efficiency level of 90%. However, there was no significant difference in the level of technical efficiency between the two groups of growers studied. All the factors linked to compliance with the technical production itinerary had a positive influence on the growers' level of efficiency. It is therefore important to continue raising awareness of the importance of respecting the technical production itinerary and of integrated soil fertility management techniques.Keywords: technical efficiency, soil fertility, cotton, crop rotation, Benin
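Technical efficiency in a stochastic frontier setting is estimated by maximum likelihood; as a minimal deterministic illustration (not the one-step estimator applied in the study), the sketch below computes a grower's technical efficiency as the ratio of observed output to an assumed Cobb-Douglas frontier. All names and parameter values are hypothetical:

```python
def technical_efficiency(output, inputs, betas, frontier_scale=1.0):
    """Technical efficiency as the ratio of observed output to the
    frontier (best-practice) output under an assumed Cobb-Douglas
    frontier f(x) = frontier_scale * prod(x_k ** beta_k), capped at 1."""
    frontier = frontier_scale
    for x, b in zip(inputs, betas):
        frontier *= x ** b
    return min(output / frontier, 1.0)
```

A grower producing 90% of the frontier output, like the sample average reported above, would score 0.9 on this scale.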
Procedia PDF Downloads 7225376 Analyzing Tools and Techniques for Classification In Educational Data Mining: A Survey
Authors: D. I. George Amalarethinam, A. Emima
Abstract:
Educational Data Mining (EDM) is one of the newest topics to emerge in recent years, and it is concerned with developing methods for analyzing the various types of data gathered from the educational circle. EDM methods and techniques with machine learning algorithms are used to extract meaningful and usable information from huge databases. For scientists and researchers, realistic applications of machine learning in the EDM sector offer new frontiers and present new problems. One of the most important research areas in EDM is predicting student success. Prediction algorithms and techniques must be developed to forecast students' performance, which aids tutors and institutions in raising the level of student performance. This paper examines various classification techniques used in prediction methods and the data mining tools used in EDM.Keywords: classification technique, data mining, EDM methods, prediction methods
Procedia PDF Downloads 12025375 Physicochemical Studies and Screening of Aflatoxins and Pesticide Residues in Some 'Honey Pastes' Marketed in Jeddah, Saudi Arabia
Authors: Rashad Al-Hindi
Abstract:
The study aimed at investigating and screening for some contaminants in honey-based products. Sixty-nine 'honey paste' samples marketed in Jeddah, Saudi Arabia, were subjected to physicochemical studies and screening for aflatoxins and pesticide residues. The physicochemical parameters studied were mainly: moisture content, total sugars, total ash, total nitrogen, fibres, total acidity as citric acid, and pH. These parameters were investigated using standard methods of analysis. Mycotoxins (aflatoxins) and pesticide residues were screened by an enzyme-linked immunosorbent assay (ELISA) according to official methods. Results revealed that the mean values of the examined criteria were: 15.44±0.36%; 74±4.30%; 0.40±0.062%; 0.22±0.05%; 6.93±1.30%; 2.53±0.161 mmol/kg; and 4.10±0.158, respectively. Overall, the results showed that all tested honey paste samples were free from mycotoxins (aflatoxins) and pesticide residues. Therefore, we conclude that 'honey pastes' marketed in Jeddah city, Saudi Arabia, were safe for human consumption.Keywords: aflatoxins, honey mixtures, pesticide residues, physicochemical
Procedia PDF Downloads 18025374 Improving Security in Healthcare Applications Using Federated Learning System With Blockchain Technology
Authors: Aofan Liu, Qianqian Tan, Burra Venkata Durga Kumar
Abstract:
Data security is of the utmost importance in the healthcare area, as sensitive patient information is constantly sent around and analyzed by many different parties. The use of federated learning, which enables data to be evaluated locally on devices rather than being transferred to a central server, has emerged as a potential solution for protecting the privacy of user information. However, federated learning alone might not be adequate to protect against data breaches and unauthorized access. In this context, the application of blockchain technology could provide the system with extra protection. This study proposes a distributed federated learning system built on blockchain technology in order to enhance security in healthcare. This makes it possible for a wide variety of healthcare providers to work together on data analysis without raising concerns about the confidentiality of the data. The technical aspects of the system, including the design and implementation of distributed learning algorithms, consensus mechanisms, and smart contracts, are also investigated as part of this process. The proposed technique is a workable alternative that addresses concerns about the safety of healthcare data while also fostering collaborative research and the interchange of data.Keywords: data privacy, distributed system, federated learning, machine learning
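A minimal sketch of the federated averaging step that underlies such systems (the blockchain and smart-contract layers are omitted; the weighting by local dataset size follows the standard FedAvg rule, not necessarily the authors' exact aggregation):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: the server averages each client's locally
    trained parameter vector, weighted by that client's dataset size,
    so raw patient data never leaves the client."""
    total = sum(client_sizes)
    averaged = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * size / total
    return averaged
```

In a blockchain-backed variant, each client's update (or its hash) would additionally be committed to the ledger so that tampered contributions can be detected before aggregation.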
Procedia PDF Downloads 13825373 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes
Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert
Abstract:
The Germ Theory (one infectious determinant equals one disease) has unarguably advanced our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior have brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory today: will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? Vector-diseased patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools are exceedingly under-equipped relative to the research findings that would aid in diagnosing patients for polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients' quality of life due to inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allows simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples from Denmark under relevant ethical approvals. The SpA group represented the chronic LB stage, because reactive arthritis (an SpA subtype) in the form of Lyme arthritis is linked to LB. It was hypothesized that patients from both groups would produce multiple immune responses that, as a consequence, would evidently suggest HPI. 
It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group across both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to the 33% of LB patients and 30% of SpA patients that produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to the 30% of LB patients and 8% of SpA patients that produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% of the 18% of LB patients unresponsive with the IgG antibody were recorded producing multiple immune responses with the IgM antibody. Similarly, 12% of the 19% of SpA patients unresponsive with the IgG antibody were recorded producing multiple immune responses with the IgM antibody. Thus, the results not only supported the hypotheses but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in detecting early, persistent, late, polymicrobial, and immune-dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry that will drastically shorten patients' time to receive adequate treatment.Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG
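Whether the SpA proportions are significantly larger than the LB proportions can be checked with a standard pooled two-proportion z test; the sketch below is a generic illustration (the abstract does not state which test the authors used), with counts derived from the reported percentages of n = 54 per group:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2, where x1/n1
    and x2/n2 are the observed success counts and sample sizes."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

With the reported IgM figures (26% of 54 LB patients vs 57% of 54 SpA patients, roughly 14 vs 31 patients), |z| exceeds the 1.96 critical value at the 5% level, consistent with the hypothesis stated above.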
Procedia PDF Downloads 33625372 A Concept of Data Mining with XML Document
Authors: Akshay Agrawal, Anand K. Srivastava
Abstract:
The increasing amount of XML datasets available to casual users increases the necessity of investigating techniques to extract knowledge from these data. Data mining is widely applied in the database research area in order to extract frequent correlations of values from both structured and semi-structured datasets. The increasing availability of heterogeneous XML sources has raised a number of issues concerning how to represent and manage these semi-structured data. In recent years, owing to the importance of managing these resources and extracting knowledge from them, many methods have been proposed to represent and cluster them in different ways.Keywords: XML, similarity measure, clustering, cluster quality, semantic clustering
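One very simple way to define a similarity measure over heterogeneous XML documents, in the spirit of the structural clustering methods discussed, is the Jaccard similarity of their element tag sets; this is a hedged illustration, not a method proposed in the paper:

```python
import xml.etree.ElementTree as ET

def tag_set(xml_text):
    """Collect every element tag appearing anywhere in the document."""
    root = ET.fromstring(xml_text)
    return {element.tag for element in root.iter()}

def structural_similarity(xml_a, xml_b):
    """Jaccard similarity between the tag sets of two XML documents:
    1.0 for identical tag vocabularies, 0.0 for disjoint ones."""
    a, b = tag_set(xml_a), tag_set(xml_b)
    return len(a & b) / len(a | b)
```

Pairwise similarities like this can feed any standard clustering algorithm; richer measures additionally weight element nesting, attributes, and text content.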
Procedia PDF Downloads 38725371 A Gender-Based Assessment of Rural Livelihood Vulnerability: The Case of Ehiamenkyene in the Fanteakwa District of Eastern Ghana
Authors: Gideon Baffoe, Hirotaka Matsuda
Abstract:
Rural livelihood systems are known to be inherently vulnerable. Attempts to reduce vulnerability are linked to developing resilience to both internal and external shocks, thereby increasing the overall sustainability of livelihood systems. The shocks and stresses could be induced by natural processes such as the climate and/or by social dynamics such as institutional failure. In this sense, livelihood vulnerability is understood as a combined effect of biophysical, economic, and social processes. However, previous empirical studies on livelihood vulnerability in rural areas across the globe have tended to focus on climate-induced vulnerability, with only a few studies partially considering the multiple dimensions of livelihood vulnerability. This has left a gap in our understanding of the subject. Using the Livelihood Vulnerability Index (LVI), this study aims to comprehensively assess the livelihood vulnerability level of rural households, using Ehiamenkyene, a community in the forest zone of Eastern Ghana, as a case study. Though the present study adopts the LVI approach, it differs from the original framework in two respects: (1) it introduces institutional influence into the framework, and (2) it appreciates the gender differences in livelihood vulnerability. The study utilized empirical data collected from 110 households in the community. The overall results show a high livelihood vulnerability situation in the community, with male-headed households likely to be more vulnerable than their female counterparts. Out of the seven subcomponents assessed, only two (socio-demographic profile and livelihood strategies) recorded low vulnerability scores of less than 0.5, with the remaining five (health status, food security, water accessibility, institutional influence, and natural disasters and climate variability) recording scores above 0.5, and institutional influence was the component with the highest impact score. 
The results suggest that, to improve the livelihood conditions of the people, there is a need to prioritize issues related to the operations of both internal and external institutions, health status, food security, water, and climate variability in the community.Keywords: assessment, gender, livelihood, rural, vulnerability
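The composite LVI is commonly computed as a balanced weighted average of the major-component scores, each weighted by the number of indicators it contains (following Hahn et al.'s formulation; the exact weighting used in this study is not stated in the abstract):

```python
def lvi(component_scores, indicator_counts):
    """Composite Livelihood Vulnerability Index: weighted mean of the
    major-component scores (each on a 0-1 scale), with each component
    weighted by the number of indicators that make it up."""
    total = sum(indicator_counts)
    return sum(s * w for s, w in zip(component_scores, indicator_counts)) / total
```

On this 0-1 scale, a component score above 0.5 is read as relatively high vulnerability, which is how the five components listed above are interpreted.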
Procedia PDF Downloads 49325370 Speed-Up Data Transmission by Using Bluetooth Module on Gas Sensor Node of Arduino Board
Authors: Hiesik Kim, YongBeum Kim
Abstract:
Internet of Things (IoT) applications are widely deployed and spreading worldwide, so local wireless data transmission techniques must be developed to keep pace. Bluetooth is a wireless data communication technique developed by the Bluetooth Special Interest Group (SIG); it uses the 2.4 GHz frequency range and exploits frequency hopping to avoid collisions with other devices. For the experiment, equipment for transmitting measured data was built using an Arduino board as open-source hardware, a gas sensor, and a Bluetooth module, and an algorithm controlling the transmission speed is demonstrated. The transmission-speed experiment also proceeds through an Android application developed to receive the measured data, and the experimental results show that controlling this speed is feasible. In the future, the communication algorithm will need improvement, because a few errors occur when data is transmitted or received.Keywords: Arduino, Bluetooth, gas sensor, internet of things, transmission speed
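The abstract does not detail the transmission-speed control algorithm, so as a hedged illustration of one common approach, the sketch below adapts the send interval with an AIMD-style rule: back off multiplicatively when a packet errors, speed up additively when it goes through. All parameter values are assumptions, and the logic is shown host-side in Python rather than as Arduino firmware:

```python
def adapt_interval(interval_ms, had_error, min_ms=10, max_ms=1000):
    """Adjust the interval between sensor transmissions after each packet:
    double it (up to max_ms) when an error occurred, and shave 5 ms off
    (down to min_ms) when the packet was received cleanly."""
    if had_error:
        return min(interval_ms * 2, max_ms)
    return max(interval_ms - 5, min_ms)
```

Run after every acknowledged packet, this converges toward the fastest interval the Bluetooth link sustains without errors.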
Procedia PDF Downloads 48525369 Evaluating the Total Costs of a Ransomware-Resilient Architecture for Healthcare Systems
Authors: Sreejith Gopinath, Aspen Olmsted
Abstract:
This paper is based on our previous work that proposed a risk-transference-based architecture for healthcare systems to store sensitive data outside the system boundary, rendering the system unattractive to would-be bad actors. This architecture also allows a compromised system to be abandoned and a new system instance spun up in place to ensure business continuity without paying a ransom or engaging with a bad actor. This paper delves into the details of various attacks we simulated against the prototype system. In the paper, we discuss at length the time and computational costs associated with storing and retrieving data in the prototype system, abandoning a compromised system, and setting up a new instance with existing data. Lastly, we simulate some analytical workloads over the data stored in our specialized data storage system and discuss the time and computational costs associated with running analytics over data in a specialized storage system outside the system boundary. In summary, this paper discusses the total costs of data storage, access, and analytics incurred with the proposed architecture.Keywords: cybersecurity, healthcare, ransomware, resilience, risk transference
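A minimal sketch of the risk-transference idea described above, assuming an external key-value store and disposable application instances (the class names and structure are illustrative, not the prototype's actual design):

```python
class ExternalStore:
    """Key-value store living outside the application's system boundary;
    it survives any individual application instance."""
    def __init__(self):
        self._records = {}

    def put(self, key, value):
        self._records[key] = value

    def get(self, key):
        return self._records[key]


class AppInstance:
    """Application instance that holds no sensitive data of its own;
    all reads and writes go through the external store."""
    def __init__(self, store):
        self.store = store
        self.compromised = False


def abandon_and_respawn(old_instance):
    """Discard a compromised instance and attach a fresh one to the same
    external store, preserving continuity without paying a ransom."""
    return AppInstance(old_instance.store)
```

Because the instance itself stores nothing of value, the recovery cost reduces to spin-up time plus re-attachment to the store, which is what the paper's cost measurements quantify.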
Procedia PDF Downloads 13725368 Exploring the Capabilities of Sentinel-1A and Sentinel-2A Data for Landslide Mapping
Authors: Ismayanti Magfirah, Sartohadi Junun, Samodra Guruh
Abstract:
Landslides are one of the most frequent and devastating natural disasters in Indonesia. Many studies have been conducted regarding this phenomenon; however, there is a lack of attention to landslide inventory mapping. The natural conditions (dense forest areas) and the limited human and economic resources are among the major obstacles to building landslide inventories in Indonesia. Considering the importance of landslide inventory data in susceptibility, hazard, and risk analysis, it is essential to generate landslide inventories from the available resources. To achieve this, the first thing we have to do is identify the landslides' locations. The availability of Sentinel-1A and Sentinel-2A data gives new insights into land monitoring investigations. The free access, high spatial resolution, and short revisit time make these data among the most widely used open-source data in landslide mapping. Sentinel-1A and Sentinel-2A data have been used broadly for landslide detection and land-use/land-cover mapping. This study aims to generate a landslide map by integrating Sentinel-1A and Sentinel-2A data using a change detection method. The result will be validated by field investigation to produce a preliminary landslide inventory of the study area.Keywords: change detection method, landslide inventory mapping, Sentinel-1A, Sentinel-2A
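The change detection step can be illustrated with simple image differencing between two acquisition dates: a pixel is flagged as changed when the absolute difference exceeds a threshold. This is a generic sketch over plain nested lists, not the specific method applied to the Sentinel-1A/2A bands in the study:

```python
def change_mask(before, after, threshold):
    """Per-pixel change detection by image differencing: flag a pixel as
    changed when the absolute difference between the 'before' and
    'after' acquisitions exceeds the threshold."""
    return [
        [abs(a - b) > threshold for a, b in zip(row_after, row_before)]
        for row_after, row_before in zip(after, before)
    ]
```

Clusters of flagged pixels would then be candidate landslide scars, to be confirmed by the field validation described above.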
Procedia PDF Downloads 175