Search results for: Atomic data

24447 Algorithm Optimization to Sort in Parallel by Decreasing the Number of the Processors in SIMD (Single Instruction Multiple Data) Systems

Authors: Ali Hosseini

Abstract:

Parallelization is a mechanism for decreasing the time needed to execute programs. Sorting is one of the important operations used in different systems, in that the proper functioning of many algorithms and operations depends on sorted data. The CRCW_SORT algorithm sorts ‘N’ elements in O(1) time on SIMD (Single Instruction Multiple Data) computers with n^2/2-n/2 processors. In this article, by presenting a mechanism that divides the input string at a hinge element into two smaller strings, the number of processors needed to sort ‘N’ elements in O(1) time is reduced to n^2/8-n/4 in the best case; with this mechanism, the best case occurs when the hinge element is the middle (median) element and the worst case when it is the minimum. The findings from assessing the proposed algorithm against other methods, over the data collections and processor counts, indicate that the proposed algorithm uses fewer processors during execution than the other methods.
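
A minimal sequential sketch of the idea (not the authors' SIMD implementation; the data, hinge choices, and helper names below are illustrative only): the O(1) CRCW rank sort needs one processor per element pair, i.e. n^2/2-n/2 processors, and splitting the input at a hinge element so the two parts are rank-sorted with the same processor pool caps the peak demand at the larger part, which is n^2/8-n/4 for a median-like hinge.

```python
def processors_needed(n):
    """Pairwise-comparison processors for the O(1) CRCW rank sort of n elements."""
    return n * (n - 1) // 2                      # = n^2/2 - n/2

def rank_sort(block):
    """Emulate the rank sort: each element's final index = count of smaller keys
    (assumes distinct keys for brevity; one 'processor' per element pair)."""
    out = [None] * len(block)
    for x in block:
        out[sum(y < x for y in block)] = x
    return out

def hinge_split_sort(data, hinge):
    """Split at the hinge element, rank-sort each part with the same processor pool,
    and report the peak number of processors needed simultaneously."""
    low = [x for x in data if x < hinge]
    high = [x for x in data if x >= hinge]
    peak = max(processors_needed(len(low)), processors_needed(len(high)))
    return rank_sort(low) + rank_sort(high), peak

data = [7, 3, 9, 1, 5, 8, 2, 6]                  # n = 8
print(hinge_split_sort(data, hinge=6))           # median-like hinge: peak 6  = n^2/8 - n/4
print(hinge_split_sort(data, hinge=1))           # minimum as hinge:  peak 28 = n^2/2 - n/2
```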

Keywords: CRCW, SIMD (Single Instruction Multiple Data) computers, parallel computers, number of the processors

Procedia PDF Downloads 310
24446 Increasing the System Availability of Data Centers by Using Virtualization Technologies

Authors: Chris Ewe, Naoum Jamous, Holger Schrödl

Abstract:

Like most entrepreneurs, data center operators pursue goals such as profit maximization, improvement of the company’s reputation, or simply remaining on the market. Part of those aims is to guarantee a given quality of service. Quality characteristics are specified in a contract called the service level agreement. A central part of this agreement is the non-functional properties of an IT service. System availability is one of the most important of these properties, as will be shown in this paper. To comply with availability requirements, data center operators can use virtualization technologies. A clear model to assess the effect of virtualization functions on the parts of a data center in relation to system availability is still missing. This paper aims to introduce a basic model that shows these connections and to consider whether the identified effects are positive or negative. Thus, this work also points out possible disadvantages of the technology. In consequence, the paper shows opportunities as well as risks of data center virtualization in relation to system availability.

Keywords: availability, cloud computing, IT service, quality of service, service level agreement, virtualization

Procedia PDF Downloads 536
24445 Using Crowd-Sourced Data to Assess Safety in Developing Countries: The Case Study of Eastern Cairo, Egypt

Authors: Mahmoud Ahmed Farrag, Ali Zain Elabdeen Heikal, Mohamed Shawky Ahmed, Ahmed Osama Amer

Abstract:

Crowd-sourced data refers to data that is collected and shared by a large number of individuals or organizations, often through the use of digital technologies such as mobile devices and social media. The shortage of crash data collection in developing countries makes it difficult to fully understand and address road safety issues in these regions. In developing countries, crowd-sourced data can be a valuable tool for improving road safety, particularly in urban areas where the majority of road crashes occur. This study is, to the best of our knowledge, the first to develop safety performance functions using crowd-sourced data by adopting a negative binomial structure model and a Full Bayes model to investigate traffic safety for urban road networks and provide insights into the impact of roadway characteristics. Furthermore, as part of the safety management process, network screening was carried out by applying two different methods to rank the most hazardous road segments: the PCR method (adopted in the Highway Capacity Manual, HCM) and a graphical method using GIS tools, in order to compare and validate the results. Lastly, recommendations were suggested for policymakers to ensure safer roads.
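
As a rough illustration of what a negative binomial safety performance function looks like in code (a sketch only: the predictors, the toy segment table, and the use of statsmodels are assumptions, not the study's actual variables or tooling):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical segment-level table; column names and values are illustrative only.
segments = pd.DataFrame({
    "crashes":   [3, 0, 5, 2, 7, 1],                 # crowd-sourced crash counts per segment
    "ln_aadt":   np.log([12000, 8000, 20000, 9500, 25000, 7000]),
    "length_km": [1.2, 0.8, 2.0, 1.1, 2.5, 0.6],
})

X = sm.add_constant(segments[["ln_aadt", "length_km"]])
spf = sm.GLM(segments["crashes"], X,
             family=sm.families.NegativeBinomial()).fit()   # negative binomial SPF
print(spf.summary())

# Predicted crash frequencies can then feed network screening, e.g. ranking segments
# by the excess of observed over predicted crashes.
segments["predicted"] = spf.predict(X)
segments["excess"] = segments["crashes"] - segments["predicted"]
print(segments.sort_values("excess", ascending=False))
```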

Keywords: crowdsourced data, road crashes, safety performance functions, Full Bayes models, network screening

Procedia PDF Downloads 52
24444 Review of Different Machine Learning Algorithms

Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui

Abstract:

Classification is a data mining technique based on Machine Learning (ML) algorithms. It is used to classify individual items in a known body of information into a set of predefined modules or groups. Web mining is also a branch of this family of data mining methods. The main purpose of this paper is to analyse and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), and Support Vector Machine (SVM). This paper covers the different ML algorithms with their advantages and disadvantages, and also defines open research issues.
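
A minimal sketch of such a comparison (assuming scikit-learn and a stand-in public dataset, since the paper does not specify its data or tooling):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)      # stand-in dataset, not the paper's data

models = {
    "Naive Bayes":   GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN":           KNeighborsClassifier(n_neighbors=5),
    "ANN (MLP)":     MLPClassifier(max_iter=2000, random_state=0),
    "SVM":           SVC(kernel="rbf"),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)          # scaling helps KNN/ANN/SVM
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name:15s} mean CV accuracy = {scores.mean():.3f}")
```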

Keywords: data mining, web mining, classification, ML algorithms

Procedia PDF Downloads 303
24443 Using Genetic Algorithms and Rough Set Based Fuzzy K-Modes to Improve Centroid Model Clustering Performance on Categorical Data

Authors: Rishabh Srivastav, Divyam Sharma

Abstract:

We propose an algorithm to cluster categorical data, named ‘Genetic algorithm initialized rough set based fuzzy K-Modes for categorical data’. We propose an amalgamation of the simple K-modes algorithm, the rough and fuzzy set based K-modes, and the genetic algorithm to form a new algorithm which, we hypothesise, will provide better centroid model clustering results than existing standard algorithms. In the proposed algorithm, the initialization and updating of modes is done by the use of genetic algorithms, while the membership values are calculated using rough sets and fuzzy logic.
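
For orientation, a minimal sketch of the fuzzy K-modes core that the proposal builds on (the genetic-algorithm seeding and the rough-set lower/upper approximations of the paper are omitted here; the random seeding, fuzzifier value, and toy data are placeholders):

```python
import numpy as np

def dissim(a, b):
    """Simple matching dissimilarity between two categorical records."""
    return np.sum(a != b)

def fuzzy_kmodes(X, k, m=1.5, iters=20, rng=np.random.default_rng(0)):
    """Plain fuzzy K-modes loop; the paper additionally seeds the modes with a
    genetic algorithm and applies rough-set approximations (not reproduced here)."""
    n = len(X)
    modes = X[rng.choice(n, k, replace=False)]            # random seeding instead of GA
    for _ in range(iters):
        # membership update: closer modes get higher fuzzy membership
        d = np.array([[dissim(x, c) + 1e-9 for c in modes] for x in X])
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (1.0 / (m - 1)), axis=2)
        # mode update: per attribute, take the membership-weighted most frequent category
        for j in range(k):
            for a in range(X.shape[1]):
                cats, inv = np.unique(X[:, a], return_inverse=True)
                weights = np.bincount(inv, weights=u[:, j] ** m)
                modes[j, a] = cats[np.argmax(weights)]
    return modes, u

X = np.array([["red", "small", "metal"],
              ["red", "large", "metal"],
              ["blue", "small", "plastic"],
              ["blue", "large", "plastic"]])
modes, u = fuzzy_kmodes(X, k=2)
print(modes)    # two categorical cluster modes; u holds the fuzzy memberships
```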

Keywords: categorical data, fuzzy logic, genetic algorithm, K modes clustering, rough sets

Procedia PDF Downloads 246
24442 Forecasting Amman Stock Market Data Using a Hybrid Method

Authors: Ahmad Awajan, Sadam Al Wadi

Abstract:

In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the forecast values are aggregated to obtain the forecast of the stock market data. Empirical results showed that EMD-HW outperforms the individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW has relatively high accuracy compared with eight existing forecasting methods based on five forecast error measures.
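
A minimal sketch of the decompose-forecast-recombine pipeline (the PyEMD and statsmodels libraries and the synthetic series below are assumptions for illustration, not the study's implementation):

```python
import numpy as np
from PyEMD import EMD                                        # pip install EMD-signal
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def emd_hw_forecast(series, horizon=10):
    """Decompose with EMD, forecast each IMF (and the residue) with Holt-Winters,
    then sum the component forecasts back into one series-level forecast."""
    components = EMD().emd(np.asarray(series, dtype=float))  # rows: IMFs + residue
    forecast = np.zeros(horizon)
    for comp in components:
        model = ExponentialSmoothing(comp, trend="add", seasonal=None).fit()
        forecast += model.forecast(horizon)
    return forecast

# toy usage with a synthetic 'price' series (placeholder for the Amman stock data)
t = np.arange(300)
prices = 100 + 0.05 * t + 5 * np.sin(t / 12.0) + np.random.default_rng(0).normal(0, 1, 300)
print(emd_hw_forecast(prices, horizon=5))
```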

Keywords: Holt-Winter method, empirical mode decomposition, forecasting, time series

Procedia PDF Downloads 129
24441 Open Reading Frame Marker-Based Capacitive DNA Sensor for Ultrasensitive Detection of Escherichia coli O157:H7 in Potable Water

Authors: Rehan Deshmukh, Sunil Bhand, Utpal Roy

Abstract:

We report the label-free electrochemical detection of Escherichia coli O157:H7 (ATCC 43895) in potable water using a DNA probe as the sensing molecule, targeting the open reading frame marker. The indium tin oxide (ITO) surface was modified with organosilane, and glutaraldehyde was applied as a linker to fabricate the DNA sensor chip. Non-Faradaic electrochemical impedance spectroscopy (EIS) behavior was investigated at each step of sensor fabrication using cyclic voltammetry, impedance, phase, relative permittivity, capacitance, and admittance. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) revealed significant changes in surface topography during DNA sensor chip fabrication. The decrease in the percentage of pinholes from 2.05 (bare ITO) to 1.46 (after DNA hybridization) suggested the capacitive behavior of the DNA sensor chip. The results of non-Faradaic EIS studies of the DNA sensor chip showed a systematic declining trend of the capacitance as well as the relative permittivity upon DNA hybridization. The DNA sensor chip exhibited linearity over 0.5 to 25 pg/10 mL for E. coli O157:H7 (ATCC 43895). The limit of detection (LOD) at 95% confidence, estimated by logistic regression, was 0.1 pg DNA/10 mL of E. coli O157:H7 (equivalent to 13.67 CFU/10 mL) with a p-value of 0.0237. Moreover, the fabricated DNA sensor chip used for detection of E. coli O157:H7 showed no significant cross-reactivity with closely and distantly related bacteria such as Escherichia coli MTCC 3221, Escherichia coli O78:H11 MTCC 723, and Bacillus subtilis MTCC 736. Consequently, the results obtained in our study demonstrate the possible application of the developed DNA sensor chip for detecting E. coli O157:H7 ATCC 43895 in real water samples as well.

Keywords: capacitance, DNA sensor, Escherichia coli O157:H7, open reading frame marker

Procedia PDF Downloads 144
24440 Building Information Modeling-Based Information Exchange to Support Facilities Management Systems

Authors: Sandra T. Matarneh, Mark Danso-Amoako, Salam Al-Bizri, Mark Gaterell

Abstract:

Today’s facilities are ever more sophisticated, and the need for available and reliable information for operation and maintenance activities is vital. The key challenge for facilities managers is to have real-time, accurate, and complete information to perform their day-to-day activities and to provide their senior management with accurate information for the decision-making process. Currently, there are various technology platforms, data repositories, or database systems such as Computer-Aided Facility Management (CAFM) that are used for these purposes in different facilities. In most current practices, the data is extracted from paper construction documents and is re-entered manually into one of these computerized information systems. Construction Operations Building information exchange (COBie) is a non-proprietary data format that contains the non-geometric asset data captured and collected during the design and construction phases for owners' and facility managers' use. Recently, software vendors developed add-in applications to generate the COBie spreadsheet automatically. However, most of these add-in applications are capable of generating only a limited amount of COBie data, so considerable time is still required to enter the remaining data manually to complete the COBie spreadsheet. Some of the data which cannot be generated by these COBie add-ins is essential for facilities managers' day-to-day activities, such as job sheets, which include preventive maintenance schedules. To facilitate a seamless data transfer between BIM models and facilities management systems, we developed a framework that enables automated data generation using the data extracted directly from BIM models to an external web database, and then enables different stakeholders to access the external web database and enter the required asset data directly, generating a rich COBie spreadsheet that contains most of the asset data required for efficient facilities management operations. The proposed framework is part of ongoing research and will be demonstrated and validated on a typical university building. Moreover, the proposed framework supplements the existing body of knowledge in the facilities management domain by providing a novel framework that facilitates seamless data transfer between BIM models and facilities management systems.

Keywords: building information modeling, BIM, facilities management systems, interoperability, information management

Procedia PDF Downloads 115
24439 Monsoon Controlled Mercury Transportation in Ganga Alluvial Plain, Northern India and Its Implication on Global Mercury Cycle

Authors: Anjali Singh, Ashwani Raju, Vandana Devi, Mohmad Mohsin Atique, Satyendra Singh, Munendra Singh

Abstract:

India is the biggest consumer of mercury and, consequently, a major emitter too. The increasing mercury contamination in India’s water resources has gained widespread attention, and therefore atmospheric deposition is of critical concern. However, little emphasis has been placed on the role of precipitation in the aquatic mercury cycle of the Ganga Alluvial Plain, which provides drinking water to nearly 7% of the world’s human population. A majority of the precipitation here occurs in just 10% of the year, during the monsoon season. To evaluate the sources and transportation of mercury, water sample analysis has been conducted from two selected sites near Lucknow, which have a strong hydraulic gradient towards the river. 31 groundwater samples from Jehta village (26°55’15’’N; 80°50’21’’E; 119 m above mean sea level) and 31 river water samples from the Behta Nadi (a tributary of the Gomati River draining into the Ganga River) were collected during the monsoon season on every alternate day between 01 July and 30 August 2019. The total mercury analysis was performed using a Flow Injection Atomic Absorption Spectroscopy (AAS)-Mercury Hydride System, and daily rainfall data were collected from the India Meteorological Department, Amausi, Lucknow. The ambient groundwater and river-water concentrations were both 2-4 ng/L, as there is no known geogenic source of mercury in the area. Before the onset of the monsoon season, the groundwater and the river-water recorded mercury concentrations two orders of magnitude higher than the ambient concentrations, indicating the regional transportation of mercury from non-point sources into the aquatic environment. Maximum mercury concentrations in groundwater and river-water were three orders of magnitude higher than the ambient concentrations after the onset of the monsoon season, characterizing the considerable mobilization and redistribution of mercury by monsoonal precipitation. About 50% of both types of water samples reported mercury below the detection limit, which can mostly be linked to the low intensity of precipitation in August and also to dilution by precipitation. The highest concentration (> 1200 ng/L) of mercury in groundwater was reported after a 6-day lag from the first precipitation peak. Two high concentration peaks (>1000 ng/L) in river-water were separately correlated with the surface flow and groundwater outflow of mercury. We attribute the elevated mercury concentration in both types of water samples before the precipitation event to mercury originating from the extensive use of agrochemicals in mango farming in the plain. However, the elevated mercury concentration during the onset of the monsoon appears to arise from the increased area wetted with atmospherically deposited mercury, which migrated down from surface water to groundwater, as downslope migration is a fundamental mechanism seen in rivers of the alluvial plain. The present study underscores the significance of monsoonal precipitation in the transportation of mercury to drinking water resources of the Ganga Alluvial Plain. This study also suggests that future research must be pursued for a better understanding of the human health impact of mercury contamination and for quantification of the role of the Ganga Alluvial Plain in the global mercury cycle.

Keywords: drinking water resources, Ganga alluvial plain, India, mercury

Procedia PDF Downloads 145
24438 Optimal Sputtering Conditions for Nickel-Cermet Anodes in Intermediate Temperature Solid Oxide Fuel Cells

Authors: Waqas Hassan Tanveer, Yoon Ho Lee, Taehyun Park, Wonjong Yu, Yaegeun Lee, Yusung Kim, Suk Won Cha

Abstract:

Nickel-Gadolinium Doped Ceria (Ni-GDC) cermet anodic thin films were prepared on Scandia Stabilized Zirconia (ScSZ) electrolyte supports by radio frequency (RF) sputtering, with a range of different sputtering powers (50–200 W) and background Ar gas pressures (30–90 mTorr). The effects of varying sputtering power and pressure on the properties of Ni-GDC films were studied using Focused Ion Beam (FIB), X-ray Photoelectron Spectroscopy (XPS), X-ray Diffraction (XRD), Energy Dispersive X-ray (EDX), and Atomic Force Microscopy (AFM) techniques. The Ni content was found to be always higher than the Ce content at all sputtering conditions. This increased Ni content was attributed to the significantly higher energy transfer efficiency of Ni ions compared to Ce ions with the Ar background sputtering gas. The solid oxide fuel cell configuration was completed by using lanthanum strontium manganite (LSM/YSZ) cathodes on the other side of the ScSZ supports. Performance comparison of the cells was done by Voltage-Current-Power (VIP) curves, while the resistances of the various cell components were observed from Nyquist plots. Initial results showed that anode films made by higher-powered RF sputtering performed better than lower-powered ones for a specific Ar pressure. Interestingly, however, anodes made at the highest power and pressure were not the ones that showed the maximum power output at an intermediate solid oxide fuel cell temperature of 800°C. Finally, an optimal sputtering condition is reported for high-performance Ni-GDC anodes.

Keywords: intermediate temperature solid oxide fuel cells, nickel-cermet anodic thin films, Nyquist plots, radio frequency sputtering

Procedia PDF Downloads 240
24437 Synthesis of Highly Stable Pseudocapacitors From Secondary Resources

Authors: Samane Maroufi, Rasoul Khayyam Nekouei, Sajjad Mofarah

Abstract:

Fabrication of state-of-the-art portable pseudocapacitors with the desired transparency, mechanical flexibility, capacitance, and durability is challenging. In most cases, the fabrication of such devices requires critical elements which are either under the threat of depletion or whose extraction from virgin mineral ores has severe environmental impacts. This urges the use of secondary resources instead of virgin resources in the fabrication of advanced devices. In this research, ultrathin films of defect-rich Mn1−x−y(CexLay)O2−δ with controllable thicknesses in the range between 5 nm and 627 nm and transmittance (≈29–100%) have been fabricated via an electrochemical chronoamperometric deposition technique, using an aqueous precursor derived during the selective purification of rare earth oxides (REOs) isolated from end-of-life nickel-metal hydride (Ni-MH) batteries. Intercalation/de-intercalation of anionic O2− through the atomic tunnels of the stratified Mn1−x−y(CexLay)O2−δ crystallites was found to be responsible for the outstanding areal capacitance of 3.4 mF cm−2 for films with 86% transmittance. The intervalence charge transfer among interstitial Ce/La cations and Mn oxidation states within the Mn1−x−y(CexLay)O2−δ structure resulted in excellent capacitance retention of ≈90% after 16,000 cycles. The synthesised transparent flexible Mn1−x−y(CexLay)O2−δ full-cell pseudocapacitor device possessed energy and power densities of 0.088 μWh cm⁻² and 843 µW cm⁻², respectively. These values show insignificant changes under vigorous twisting and bending to 45–180°, confirming that these value-added materials are intriguing alternatives for size-sensitive energy storage devices. This research confirms the feasibility of utilising secondary waste resources for the fabrication of high-quality pseudocapacitors with engineered defects and the desired flexibility, transparency, and cycling stability suitable for size-sensitive portable electronic devices.

Keywords: pseudocapacitors, energy storage devices, flexible and transparent, sustainability

Procedia PDF Downloads 87
24436 Data Security and Privacy Challenges in Cloud Computing

Authors: Amir Rashid

Abstract:

Cloud computing frameworks empower organizations to cut expenses by outsourcing computation resources on demand. At present, customers of cloud service providers have no means of confirming the privacy and ownership of their information and data. To address this issue, we propose the platform of a trusted cloud computing program (TCCP). TCCP empowers Infrastructure as a Service (IaaS) providers, for example Amazon EC2, to offer a closed-box execution environment that ensures confidential execution of guest virtual machines. Also, it permits clients to attest to the IaaS provider and determine whether the service is secure before they launch their virtual machines. This paper proposes a Trusted Cloud Computing Platform (TCCP) for guaranteeing the privacy and trustworthiness of computed data that are outsourced to IaaS service providers. The TCCP gives the abstraction of a closed-box execution environment for a client's VM, ensuring that no cloud provider's privileged administrator can inspect or tamper with its data. Furthermore, before launching the VM, the TCCP permits a client to reliably and remotely verify that the backend provider is running a trusted TCCP. This capability extends the verification to the whole service, and hence permits a client to confirm that its data operations run in a secure mode.

Keywords: cloud security, IaaS, cloud data privacy and integrity, hybrid cloud

Procedia PDF Downloads 299
24435 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. The patient conditions have been predicted as a node classification task using graph-based open source EHR data, the Synthea Database, stored in TigerGraph. The Synthea DB dataset is leveraged due to its closer representation of real-time data and its volume. The graph model is built from the heterogeneous EHR data using Python modules, namely, pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually perform the node classification using those embeddings. The model predicts patient conditions ranging from common to rare situations. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
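
A minimal sketch of the node-classification step with PyTorch Geometric (the toy graph, feature sizes, and the plain two-layer GCN below are placeholders; the graph extraction from TigerGraph and the autoencoder-based embeddings described above are omitted):

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph standing in for the patient/condition graph pulled from TigerGraph
# (nodes = patients, edges = shared encounters; all values are illustrative).
x = torch.randn(6, 16)                               # 6 nodes, 16-dim features
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4, 4, 5],
                           [1, 0, 2, 1, 4, 3, 5, 4]], dtype=torch.long)
y = torch.tensor([0, 0, 1, 1, 2, 2])                 # condition class per patient node
train_mask = torch.tensor([True, True, True, True, True, False])
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, classes)
    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN(16, 32, 3)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):                             # node-classification training loop
    opt.zero_grad()
    out = model(data)
    loss = F.cross_entropy(out[train_mask], y[train_mask])
    loss.backward()
    opt.step()
print(model(data).argmax(dim=1))                     # predicted condition class per node
```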

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 86
24434 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector

Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar

Abstract:

Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency of distributed systems serve as an invaluable resource for the healthcare sector. However, as the usage of these distributed systems increases, many issues arise. The main focus of this paper is on security issues. Many security issues stem from distributed systems in the healthcare industry, particularly information security. People's data is especially sensitive in the healthcare industry. If important information gets leaked (e.g., IC, credit card number, address, etc.), a person’s identity, financial status, and safety might be compromised. This results in the responsible organization losing a lot of money in compensating these people and even more resources expended trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system is proposed. In this framework, the usage of a blockchain network is explored to store the encryption key of the patient’s data. As for the actual data, it is encrypted, and the encrypted data, called ciphertext, is stored in a cloud storage platform. Furthermore, there are some issues that have to be emphasized and tackled for future improvements, such as a multi-user scheme that could be proposed, authentication issues that have to be tackled, or migrating the backend processes into the blockchain network. Due to the nature of blockchain technology, the data will be tamper-proof, and its read-only function can only be accessed by authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient’s data.

Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability

Procedia PDF Downloads 184
24433 Thermal Ageing of a 316 Nb Stainless Steel: From Mechanical and Microstructural Analyses to Thermal Ageing Models for Long Time Prediction

Authors: Julien Monnier, Isabelle Mouton, Francois Buy, Adrien Michel, Sylvain Ringeval, Joel Malaplate, Caroline Toffolon, Bernard Marini, Audrey Lechartier

Abstract:

Chosen to design and assemble massive components for the nuclear industry, the 316 Nb austenitic stainless steel (also called 316 Nb) suits this function well thanks to its mechanical, heat-handling, and corrosion-handling properties. However, these properties might change during the steel's life due to thermal ageing causing changes within its microstructure. Our main purpose is to determine whether the 316 Nb will keep its mechanical properties after exposure to industrial temperatures (around 300 °C) during a long period of time (< 10 years). The 316 Nb is composed of different phases, namely austenite as the main phase, niobium carbides, and ferrite remaining from the ferrite-to-austenite transformation during processing. Our purpose is to understand the effects of thermal ageing on the material's microstructure and properties and to propose a model predicting the evolution of 316 Nb properties as a function of temperature and time. To do so, based on the Fe-Cr and 316 Nb phase diagrams, we studied the thermal ageing of 316 Nb steel alloys (1%v of ferrite) and welds (10%v of ferrite) for various temperatures (350, 400, and 450 °C) and ageing times (from 1 to 10,000 hours). Higher temperatures were chosen to reduce thermal treatment time by exploiting the kinetic effect of temperature on 316 Nb ageing without modifying the reaction mechanisms. Our results from early ageing times show no effect on the steel's global properties linked to austenite stability, but an increase of ferrite hardness during thermal ageing has been observed. It has been shown that austenite's crystalline structure (fcc) grants it thermal stability; however, the ferrite crystalline structure (bcc) favours iron-chromium demixing and the formation of iron-rich and chromium-rich phases within the ferrite. Observations of the effects of thermal ageing on the ferrite's microstructure were necessary to understand the changes caused by the thermal treatment. Analyses have been performed using different techniques such as Atom Probe Tomography (APT) and Differential Scanning Calorimetry (DSC). A demixing of the alloy's elements leading to the formation of iron-rich (α phase, bcc structure), chromium-rich (α’ phase, bcc structure), and nickel-rich (fcc structure) phases within the ferrite has been observed and associated with the increase of the ferrite's hardness. APT results provide information about the phases' volume fractions and compositions, allowing hardness measurements to be associated with the volume fractions of the different phases and a way to be set up to calculate the growth rate of α’ and nickel-rich particles depending on temperature. The same methodology has been applied to DSC results, which allowed us to measure the enthalpy of α’ phase dissolution between 500 and 600 °C. To summarise, we started from mechanical and macroscopic measurements and explained the results through microstructural study. The data obtained have been matched to CALPHAD model predictions and used to improve these calculations and employ them to predict the change in 316 Nb properties during the industrial process.

Keywords: stainless steel characterization, atom probe tomography APT, Vickers hardness, differential scanning calorimetry DSC, thermal ageing

Procedia PDF Downloads 93
24432 Design and Implementation of a Geodatabase and WebGIS

Authors: Sajid Ali, Dietrich Schröder

Abstract:

The merging of the internet and the Web has created many disciplines, and Web GIS is one of these disciplines, which deals effectively and proficiently with geospatial data. Web GIS technologies have enabled easy accessing and sharing of geospatial data over the internet. However, a single platform for easy, multi-user access to the data is lacking for the European Caribbean Association (Europaische Karibische Gesselschaft - EKG) to assist its members and the wider research community. The technique presented in this paper deals with the design of a geodatabase using PostgreSQL/PostGIS as an object-relational database management system (ORDBMS) for competent dissemination and management of spatial data, and of a Web GIS using OpenGeo Suite for fast sharing and distribution of the data over the internet. The characteristics of the required design for the geodatabase have been studied, and a specific methodology is given for the purpose of designing the Web GIS. At the end, validation of this web-based geodatabase was performed with two desktop GIS software packages and a web map application, and it is also discussed that the contribution has all the desired modules to expedite further research in the area as per the requirements.

Keywords: desktop GIS software, European Caribbean association, geodatabase, OpenGeo suite, PostgreSQL/PostGIS, webGIS, web map application

Procedia PDF Downloads 340
24431 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis

Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi

Abstract:

The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population levels, there are still limitations in data and research in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence of the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing prompts proved instrumental in streamlining the analysis process, facilitating access, crafting code snippets, and categorizing and analyzing extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning from 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and 2 duplicates, underscoring the prevalence of longitudinal MH research in South Africa, focusing on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.

Keywords: longitudinal mental health research, data sharing, FAIR data principles, Africa, landscape analysis

Procedia PDF Downloads 89
24430 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 67
24429 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study

Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos

Abstract:

This article presents a method to analyze the use of indoor spaces based on data analytics obtained from inbuilt digital devices. The study uses the data generated by in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights into space occupancy, user behaviour, and comfort. Those devices, originally installed to facilitate remote operations, report data through the internet, which the research uses to analyze information on real-time human use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable solution to analyze building interior spaces without incorporating external data collection systems such as sensors. The methodology is applied to a real coliving case study, a residential building of 3000 m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify IoT devices and to assess, clean, and analyze the collected data based on the analysis framework. The information is collected remotely through the devices' different platforms; the first step is to curate the data and understand what insights each device can provide according to the objectives of the study. This generates an analysis framework that can be scaled for future building assessment, even beyond the residential sector. The method adjusts the parameters to be analyzed, tailored to the dataset available in the IoT of each building. The research demonstrates how human-centred data analytics can improve the future spatial design of indoor spaces.

Keywords: in-place devices, IoT, human-centred data-analytics, spatial design

Procedia PDF Downloads 197
24428 A Unique Multi-Class Support Vector Machine Algorithm Using MapReduce

Authors: Aditi Viswanathan, Shree Ranjani, Aruna Govada

Abstract:

With data sizes constantly expanding, and with classical machine learning algorithms that analyze such data requiring larger and larger amounts of computation time and storage space, the need to distribute computation and memory requirements among several computers has become apparent. Although substantial work has been done in developing distributed binary SVM algorithms and multi-class SVM algorithms individually, the field of multi-class distributed SVMs remains largely unexplored. This research seeks to develop an algorithm that implements the Support Vector Machine over a multi-class data set and is efficient in a distributed environment. For this, we recursively choose the best binary split of a set of classes using a greedy technique, much like a divide-and-conquer approach. Our algorithm has shown better computation time during the testing phase than the traditional sequential SVM methods (One vs. One, One vs. Rest) and outperforms them as the size of the data set grows. This approach also classifies the data with higher accuracy than the traditional multi-class algorithms.
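
A minimal single-machine sketch of the recursive class-splitting idea (the MapReduce distribution of the binary SVM training is omitted, and the exhaustive split search, scikit-learn classifiers, and iris data below are illustrative assumptions, not the paper's implementation):

```python
from itertools import combinations
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris

def best_binary_split(X, y, classes):
    """Pick the 2-way split of `classes` whose binary SVM separates best
    (small-scale exhaustive search here; the paper uses a greedy choice)."""
    best = None
    for r in range(1, len(classes) // 2 + 1):
        for left in combinations(classes, r):
            left = set(left)
            labels = np.isin(y, list(left)).astype(int)
            score = cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=3).mean()
            if best is None or score > best[0]:
                best = (score, left, set(classes) - left)
    return best[1], best[2]

def build_svm_tree(X, y, classes):
    """Recursive divide-and-conquer tree of binary SVMs over the class set."""
    if len(classes) == 1:
        return next(iter(classes))                        # leaf: a single class label
    left, right = best_binary_split(X, y, classes)
    clf = LinearSVC(max_iter=5000).fit(X, np.isin(y, list(left)).astype(int))
    mask_l, mask_r = np.isin(y, list(left)), np.isin(y, list(right))
    return (clf,
            build_svm_tree(X[mask_l], y[mask_l], left),
            build_svm_tree(X[mask_r], y[mask_r], right))

def predict_one(node, x):
    """Walk the tree of binary SVMs until a leaf class is reached."""
    if not isinstance(node, tuple):
        return node
    clf, left, right = node
    return predict_one(left if clf.predict(x.reshape(1, -1))[0] == 1 else right, x)

X, y = load_iris(return_X_y=True)
tree = build_svm_tree(X, y, set(np.unique(y)))
print(predict_one(tree, X[0]))
```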

Keywords: distributed algorithm, MapReduce, multi-class, support vector machine

Procedia PDF Downloads 401
24427 Information Management Approach in the Prediction of Acute Appendicitis

Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki

Abstract:

This research aims at presenting a predictive data mining model to handle the accurate diagnosis of acute appendicitis in patients, for the purpose of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is the most common disease which requires timely, accurate diagnosis and needs surgical intervention. Although the treatment of acute appendicitis is simple and straightforward, its diagnosis is still difficult because no single sign, symptom, laboratory test or image examination accurately confirms the diagnosis of acute appendicitis in all cases. This contributes to increased morbidity and negative appendectomy. In this study, the authors propose to generate an accurate model for the prediction of patients with acute appendicitis, which is based, firstly, on a segmentation technique associated with the ABC algorithm to segment the patients; secondly, on applying fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cell count, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among the data in a comprehensive manner; and thirdly, on applying a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on a set of benchmark classification problems of osteoporosis, diabetes, and heart disease obtained from the UCI data and other data sources.

Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree

Procedia PDF Downloads 350
24426 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flow and the purpose of reporting the data are different and dependent on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point and contain all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrating methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and for identifying the statistical significance of data reporting event cases. The Grubbs test is often used as it tests one extreme value at a time for exceeding the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except when the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of construction of genetic algorithms which have more possibilities to extract the best solution. For freight delivery management, schemas of the genetic algorithm's structure are used as a more effective technique. Due to that, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for the multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
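
A minimal sketch of the Grubbs outlier screening step at the 99% confidence level described above (a plain two-sided Grubbs test; the SciPy implementation and the sample figures are illustrative assumptions):

```python
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.01):
    """Two-sided Grubbs test (alpha=0.01 corresponds to the 99% confidence level
    used in the study). Returns the most extreme value and whether it is significant."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = np.argmax(np.abs(x - mean))                 # most extreme single value
    g = abs(x[idx] - mean) / sd                       # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)       # critical t value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return x[idx], g > g_crit

# e.g. screening one reported parameter (illustrative fuel-consumption figures, l/100 km)
fuel = [32.1, 30.8, 31.5, 33.0, 58.9, 31.9, 32.4]
print(grubbs_outlier(fuel))                           # flags 58.9 as an outlier to clean/inspect
```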

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 180
24425 Dynamic Thin Film Morphology near the Contact Line of a Condensing Droplet: Nanoscale Resolution

Authors: Abbasali Abouei Mehrizi, Hao Wang

Abstract:

The thin-film region is important in the heat transfer process due to its low thermal resistance. On the other hand, the dynamic contact angle is a crucial boundary condition in numerical simulations. While different models contain different assumptions about the microscopic contact angle, none of them has experimental evidence for its assumption, and the contact line movement mechanism still remains vague. Experimental investigation of complete wetting is more common than of partial wetting, especially at nanoscale resolution, where there is a sharp variation in the thin-film profile in partial wetting. In the present study, an experimental investigation of the water film morphology near the triple-phase contact line during condensation is performed. State-of-the-art tapping-mode atomic force microscopy (TM-AFM) was used to obtain the high-resolution film profile down to 2 nm from the contact line. The droplet was placed in a saturated chamber. A pristine silicon wafer was used as a smooth substrate. The substrate was heated by a PI film heater, so the chamber would become oversaturated by droplet evaporation. By turning off the heater, water vapor gradually started condensing on the droplet and the droplet advanced. The advancing speed was less than 20 nm/s. The dominant results indicate that, in contrast to nonvolatile liquids, the film profile goes straight down to the surface until 2 nm from the substrate. However, a small bending has occasionally been observed below 20 nm. So, it can be claimed that for a low condensation rate the microscopic contact angle equals the optically detectable macroscopic contact angle. This result can be used to simplify heat transfer modeling in partial wetting. The experimental result of the equality of microscopic and macroscopic contact angles can be used as solid evidence for using this boundary condition in numerical simulation.

Keywords: advancing, condensation, microscopic contact angle, partial wetting

Procedia PDF Downloads 295
24424 Design of Low-Cost Water Purification System Using Activated Carbon

Authors: Nayan Kishore Giri, Ramakar Jha

Abstract:

Water is a major element for the life of all mankind on Earth. India’s surface water flows through fourteen major streams, and Indian rivers are the main source of potable water in India. In the eastern part of India, many toxic hazardous metals are discharged into the rivers from mining industries, which leads to many deadly diseases in human beings. So potable water quality is a very significant and vital concern at present, as it is related to the present and future health of the human race. Awareness of the health risks linked with unsafe water is still very low in many rural and urban areas in India. Only about 7% of the Indian population uses water purifiers. This unhealthy water situation is not only present in India but also in many underdeveloped countries. The major reason behind this is the high cost of water purifiers. The current study is geared towards the development of an economical and efficient technology for the removal of the maximum possible amount of toxic metals and pathogenic bacteria. The work involves the design of a portable purification system and purifying material. In this design, coconut shell granular activated carbon (GAC) and polypropylene filter cloths were used. The activated carbon was impregnated with iron (Fe), because iron enhances the adsorption capacity of activated carbon. Thorough analysis of the iron-impregnated activated carbon (Fe-AC) was done by scanning electron microscopy (SEM), X-ray diffraction (XRD), and BET surface area tests. Then 10 ppm of each toxic metal was infiltrated through the designed purification system and analysed by atomic absorption spectroscopy (AAS). The results are very promising, and the system is low cost. This work will help many people who are in need of potable water, who can benefit from its affordability. It could also be helpful in industrial and other domestic usage.

Keywords: potable water, coconut shell GAC, polypropylene filter cloths, SEM, XRD, BET, AAS

Procedia PDF Downloads 379
24423 Minimization of Denial of Services Attacks in Vehicular Adhoc Networking by Applying Different Constraints

Authors: Amjad Khan

Abstract:

The security of vehicular ad hoc networking is of great importance as it involves serious threats to life. Thus, to provide secure communication amongst vehicles on the road, the conventional security system is not enough. It is necessary to prevent the network resources from being wasted and to protect them against malicious nodes, so as to ensure data bandwidth availability to the legitimate nodes of the network. This work provides a non-conventional security system by introducing some constraints to minimize DoS (Denial of Services) attacks, especially on data and bandwidth. The data packets received by a node in the network pass through a number of tests, and if any of the tests fails, the node drops those data packets and does not forward them anymore. Also, if a node claims to be the nearest node for forwarding emergency messages, then the sender can effectively identify the true or false status of the claim by using these constraints. Consequently, the DoS (Denial of Services) attack is minimized by the instant availability of data without wasting the network resources.
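
A minimal sketch of such a constraint chain at a forwarding node (the specific tests, thresholds, and field names below are hypothetical placeholders, not the exact constraints proposed in the paper):

```python
# Hypothetical constraint chain; thresholds and packet fields are placeholders.
MAX_HOPS = 10
MAX_PACKETS_PER_SENDER_PER_SEC = 50

def within_hop_limit(pkt, state):
    return pkt["hop_count"] <= MAX_HOPS

def sender_not_flooding(pkt, state):
    return state["rate"].get(pkt["sender"], 0) <= MAX_PACKETS_PER_SENDER_PER_SEC

def position_claim_plausible(pkt, state):
    # e.g. a node claiming to be "nearest" must report a position within radio range
    return abs(pkt["claimed_distance_m"]) <= state["radio_range_m"]

CONSTRAINTS = [within_hop_limit, sender_not_flooding, position_claim_plausible]

def handle_packet(pkt, state):
    """Forward the packet only if every constraint passes; otherwise drop it,
    so malicious traffic cannot exhaust bandwidth at legitimate nodes."""
    for test in CONSTRAINTS:
        if not test(pkt, state):
            return "DROP"
    return "FORWARD"

state = {"rate": {"V12": 3}, "radio_range_m": 300}
pkt = {"sender": "V12", "hop_count": 4, "claimed_distance_m": 120}
print(handle_packet(pkt, state))   # FORWARD
```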

Keywords: black hole attack, grey hole attack, intransient traffic tempering, networking

Procedia PDF Downloads 284
24422 Traffic Prediction with Raw Data Utilization and Context Building

Authors: Zhou Yang, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao

Abstract:

Traffic prediction is essential in a multitude of ways in modern urban life. Earlier work in this domain carries out the investigation chiefly with two major focuses: (1) the accurate forecast of future values in multiple time series and (2) knowledge extraction from spatial-temporal correlations. However, two key considerations for traffic prediction are often missed: the completeness of the raw data and the full context of the prediction timestamp. Concentrating on these two drawbacks of earlier work, we devise an approach that addresses these issues in a two-phase framework. First, we utilize the raw trajectories to a greater extent through building a VLA table and data compression. We obtain the intra-trajectory features with graph-based encoding and the inter-trajectory ones with a grid-based model and the technique of back projection, which restores their surrounding high-resolution spatial-temporal environment. To the best of our knowledge, we are the first to study direct feature extraction from raw trajectories for traffic prediction and to attempt the use of raw data with the least degree of reduction. In the prediction phase, we provide a broader context for the prediction timestamp by taking into account the information around it in the training dataset. Extensive experiments on several well-known datasets have verified the effectiveness of our solution, which combines the strength of raw trajectory data and prediction context. In terms of performance, our approach surpasses several state-of-the-art methods for traffic prediction.

Keywords: traffic prediction, raw data utilization, context building, data reduction

Procedia PDF Downloads 127
24421 Seismic Interpretation and Petrophysical Evaluation of SM Field, Libya

Authors: Abdalla Abdelnabi, Yousf Abushalah

Abstract:

The G Formation is a major gas-producing reservoir in the SM Field, eastern Libya. It is called the G limestone because it consists of shallow marine limestone. Well data and 3D seismic data, in conjunction with the results of a previous study, were used to delineate the hydrocarbon reservoir of the Middle Eocene G Formation of the SM Field area. The data include three-dimensional seismic data acquired in 2009, covering an area of approximately 75 mi², with more than 9 wells penetrating the reservoir. The seismic data are used to identify stratigraphic and structural features such as channels and faults, which may play a significant role in hydrocarbon trapping. The well data are used for the petrophysical analysis of the SM Field. The average porosity of the Middle Eocene G Formation is very good, with porosity reaching 24%, especially around well W6. Average water saturation was calculated for each well from porosity and resistivity logs using Archie’s formula. The overall average water saturation is 25%. Structural mapping of the top and bottom of the Middle Eocene G Formation revealed that the highest area in the SM Field is at 4800 ft subsea, around wells W4, W5, W6, and W7, and the deepest point is at 4950 ft subsea. Correlation between wells using well data and structural maps created from seismic data revealed that the net thickness of the G Formation ranges from 0 ft in the northern part of the field to 235 ft in the southwest and south of the field. The gas-water contact is found at 4860 ft using the resistivity log. The net isopach map, using both the trapezoidal and pyramidal rules, was used to calculate the total bulk volume. The original gas in place and the recoverable gas were calculated volumetrically to be 890 Billion Standard Cubic Feet (BSCF) and 630 BSCF, respectively.
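
For reference, a small sketch of the two calculations named above, Archie's water saturation and a volumetric gas-in-place from an isopach-derived bulk volume (all parameter values, contour areas, and the Archie constants a, m, n below are illustrative placeholders, not the SM Field figures):

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's equation: Sw = ((a*Rw)/(phi^m * Rt))^(1/n)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

def trapezoidal_bulk_volume(areas_acres, interval_ft):
    """Bulk rock volume (acre-ft) from isopach contour areas via the trapezoidal rule."""
    v = 0.0
    for a1, a2 in zip(areas_acres[:-1], areas_acres[1:]):
        v += 0.5 * (a1 + a2) * interval_ft
    return v

def gas_in_place_scf(bulk_volume_acre_ft, phi, sw, bg_rcf_per_scf):
    """Volumetric OGIP in standard cubic feet: G = 43560 * Vb * phi * (1 - Sw) / Bg."""
    return 43560.0 * bulk_volume_acre_ft * phi * (1.0 - sw) / bg_rcf_per_scf

sw = archie_sw(rw=0.02, rt=20.0, phi=0.24)                     # log-derived inputs (illustrative)
vb = trapezoidal_bulk_volume([9000, 6500, 3800, 1200, 0], interval_ft=50)
ogip = gas_in_place_scf(vb, phi=0.24, sw=sw, bg_rcf_per_scf=0.004)
print(f"Sw = {sw:.2f}, OGIP = {ogip / 1e9:.0f} BSCF")
```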

Keywords: 3D seismic data, well logging, Petrel, Kingdom Suite

Procedia PDF Downloads 150
24420 Analysis of Spatial and Temporal Data Using Remote Sensing Technology

Authors: Kapil Pandey, Vishnu Goyal

Abstract:

Spatial and temporal data analysis is very well known in the field of satellite image processing. When spatial data are combined with time-series analysis, it gives significant results in change detection studies. In this paper, GIS and remote sensing techniques have been used to detect change using time-series satellite imagery of Uttarakhand state during the years 1990-2010. Natural vegetation, urban area, forest cover, etc. were chosen as the main landuse classes to study. Landuse/landcover classes for the several years were prepared using satellite images. The maximum likelihood supervised classification technique was adopted in this work, and finally a landuse change index was generated, with graphical models used to present the changes.
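
A minimal sketch of how a change (transition) matrix and a simple landuse change index can be derived from two classified rasters (the toy arrays and the particular index below are illustrative assumptions, not the study's exact computation):

```python
import numpy as np

# Two toy classified rasters (1 = vegetation, 2 = urban, 3 = forest); in the study these
# would come from maximum-likelihood classification of the 1990 and 2010 imagery.
lc_1990 = np.array([[1, 1, 3, 3],
                    [1, 2, 3, 3],
                    [1, 2, 2, 3]])
lc_2010 = np.array([[1, 2, 3, 3],
                    [2, 2, 3, 1],
                    [2, 2, 2, 3]])

classes = np.union1d(lc_1990, lc_2010)
# change (transition) matrix: rows = 1990 class, columns = 2010 class, values = pixel counts
change = np.array([[np.sum((lc_1990 == a) & (lc_2010 == b)) for b in classes]
                   for a in classes])
print(change)

# a simple land-use change index: share of pixels whose class changed between the dates
change_index = np.mean(lc_1990 != lc_2010)
print(f"changed fraction = {change_index:.2f}")
```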

Keywords: GIS, landuse/landcover, spatial and temporal data, remote sensing

Procedia PDF Downloads 433
24419 Development of Hybrid Materials Combining Biomass as Fique Fibers with Metal-Organic Frameworks, and Their Potential as Mercury Adsorbents

Authors: Karen G. Bastidas Gomez, Hugo R. Zea Ramirez, Manuel F. Ribeiro Pereira, Cesar A. Sierra Avila, Juan A. Clavijo Morales

Abstract:

The contamination of water sources with heavy metals such as mercury has been an environmental problem; it has generated a high impact on the environment and human health. In countries such as Colombia, mercury contamination due to mining has reached levels much higher than the world average. This work proposes the use of fique fibers as an adsorbent for mercury removal. The evaluation of the material was carried out under five different conditions (raw, pretreated by organosolv, functionalized by TEMPO oxidation, functionalized fiber plus MOF-199, and functionalized fiber plus MOF-199-SH). All the materials were characterized using FTIR, SEM, EDX, XRD, and TGA. The mercury removal was carried out at room pressure and temperature and pH = 7 for all material presentations, followed by atomic absorption spectroscopy. The high cellulose content in fique is the main particularity of this lignocellulosic biomass, since the degree of oxidation depends on the number of hydroxyl groups on the surface capable of being oxidized into carboxylic acids, a functional group capable of increasing ion exchange with mercury in solution. It was also expected that the impregnation of the MOF would increase the mercury removal; however, it was found that the functionalized fique achieved the greatest removal, with 81.33% removal, compared with 44% for the fique with MOF-199 and 72% for the fique with MOF-199-SH. The pretreated and raw fibers showed 74% and 56% removal, respectively, which indicates that fique does not require considerable modifications of its structure to achieve good performance. Even so, the functionalized fiber increases the removal percentage considerably compared to the pretreated fique, which suggests that the functionalization process is a feasible procedure to apply with the purpose of improving the removal percentage. In addition, this is a procedure that follows a green approach, since the reagents involved have a low environmental impact, and the contribution to the remediation of natural resources is high.

Keywords: biomass, nanotechnology, science materials, wastewater treatment

Procedia PDF Downloads 117
24418 The Diverse and Flexible Coping Strategies Simulation for Maanshan Nuclear Power Plant

Authors: Chin-Hsien Yeh, Shao-Wen Chen, Wen-Shu Huang, Chun-Fu Huang, Jong-Rong Wang, Jung-Hua Yang, Yuh-Ming Ferng, Chunkuan Shih

Abstract:

In this research, Fukushima-like conditions are simulated with TRACE and RELAP5. The Fukushima Daiichi Nuclear Power Plant (NPP) disaster was caused by an earthquake and tsunami. This disaster caused an extended loss of all AC power (ELAP) and, finally, a loss of the ultimate heat sink (LUHS). In order to handle Fukushima-like conditions, the Taiwan Atomic Energy Council (AEC) directed that the Taiwan Power Company should propose strategies to ensure nuclear power plant safety. One of the diverse and flexible coping strategies (FLEX) is a different water injection strategy, which can execute core injection at 20 kg/cm² without depressurization. In this study, TRACE and RELAP5 were used to simulate the Maanshan nuclear power plant, a three-loop PWR in Taiwan, under Fukushima-like conditions and to verify the success criteria of FLEX. Core cooling ability is reduced due to failure of the emergency core cooling system (ECCS) in the extended loss of all AC power situation. The core water level continues to decline because of seal leakage, and FLEX is then used to recover the core water level and keep the fuel rods covered by water. The results show that this mitigation strategy can cool the reactor pressure vessel (RPV) as soon as possible under Fukushima-like conditions and keep the core water level higher than the Top of Active Fuel (TAF). FLEX can keep the peak cladding temperature (PCT) below the criterion of 1088.7 K. Finally, FLEX can provide protection for the nuclear power plant and ensure plant safety.

Keywords: TRACE, RELAP5/MOD3.3, ELAP, FLEX

Procedia PDF Downloads 250