Search results for: atomic data
25423 Identity Verification Using k-NN Classifiers and Autistic Genetic Data
Authors: Fuad M. Alkoot
Abstract:
DNA data have been used in forensics for decades. However, current research investigates using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim to use gene data initially collected for autism detection to determine whether, and how accurately, these data can serve identification applications. Our main goal is to find out whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and remains close to optimal as the noise standard deviation increases to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
Keywords: biometrics, genetic data, identity verification, k-nearest neighbor
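As a rough illustration of the verification scheme this abstract describes (the gene data and preprocessing are not public, so synthetic reference vectors stand in), a 1-NN matcher tested against noise-corrupted probes might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one reference "gene" vector per subject.
n_subjects, n_features = 20, 50
references = rng.normal(size=(n_subjects, n_features))

def knn_identify(probe, references):
    """Return the index of the nearest reference vector (1-NN, Euclidean)."""
    distances = np.linalg.norm(references - probe, axis=1)
    return int(np.argmin(distances))

def verification_rate(sigma):
    """Fraction of probes matched to the correct subject when each probe is
    the subject's reference corrupted by N(0, sigma) noise, as in the abstract."""
    correct = 0
    for subject in range(n_subjects):
        probe = references[subject] + rng.normal(0.0, sigma, size=n_features)
        correct += knn_identify(probe, references) == subject
    return correct / n_subjects
```

With well-separated references, `verification_rate` stays high for small noise and degrades gracefully as sigma grows, mirroring the trend the abstract reports.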
Procedia PDF Downloads 258
25422 A Review on Intelligent Systems for Geoscience
Authors: R. Palson Kennedy, P. Kiran Sai
Abstract:
This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, presenting a review from the data life cycle perspective. Numerous facets of the geosciences pose unique difficulties for the study of intelligent systems: geoscience data are notoriously difficult to analyze, since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half addresses data science's essential concepts and theoretical underpinnings, while the second section covers key themes and shared experiences from recent publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.
Keywords: data science, intelligent system, machine learning, big data, data life cycle, recent development, geoscience
Procedia PDF Downloads 136
25421 Vibration of Gamma Graphyne with an Attached Mass
Authors: Win-Jin Chang, Haw-Long Lee, Yu-Ching Yang
Abstract:
Atomic finite element simulation is applied to investigate the vibration frequency of a single-layer gamma graphyne with an attached mass under the CCCC, SSSS, CFCF, and SFSF boundary conditions, using the commercial code ANSYS. The fundamental frequencies of the graphyne sheet are compared with the results of a previous study, and the agreement is very good in all considered cases. The attached mass causes a shift in the resonant frequency of the graphyne. The frequencies of the single-layer gamma graphyne with an attached mass are obtained for the different boundary conditions, and their order is CCCC > SSSS > CFCF > SFSF. The highest frequency shift is obtained when the attached mass is located at the center of the graphyne sheet. This is useful for the design of a highly sensitive graphyne-based mass sensor.
Keywords: graphyne, finite element analysis, vibration analysis, frequency shift
Procedia PDF Downloads 212
25420 An AFM Approach of RBC Micro and Nanoscale Topographic Features During Storage
Authors: K. Santacruz-Gomez, E. Silva-Campa, S. Álvarez-García, V. Mata-Haro, D. Soto-Puebla, M. Pedroza-Montero
Abstract:
Blood gamma irradiation is the only available method to prevent transfusion-associated graft versus host disease (TA-GVHD). However, when blood is irradiated, determining its shelf time is crucial. Non-irradiated blood has a shelf time of 21 to 35 days when preserved with an anticoagulant solution and stored at 4°C. During storage, red blood cells (RBC) undergo a series of biochemical, biomechanical, and molecular changes known as the storage lesion (SL). SL includes loss of structural integrity of the RBC, a decrease in 2,3-diphosphoglyceric acid levels, and an increase in both potassium ion concentration and hemoglobin (Hb). On the other hand, Atomic Force Microscopy (AFM) represents a versatile tool for nanoscale, high-resolution topographic analysis in biological systems. In order to evaluate SL in irradiated and non-irradiated blood, RBC topography and morphometric parameters were obtained with an AFM XE-BIO system. Cell viability was followed using flow cytometry. Our results showed that early markers such as nanoscale roughness allow us to evaluate blood quality from another perspective.
Keywords: AFM, blood γ-irradiation, roughness, storage lesion
Procedia PDF Downloads 533
25419 Phytoremediation Potential of Hibiscus cannabinus L. Grown on Different Soil Cadmium Concentration
Authors: Sarra Arbaoui, Taoufik Bettaieb
Abstract:
Contaminated soils and the problems related to them have increasingly become a matter of concern. The most common contaminants generated by industrial and urban emissions and by agricultural practices are trace metals. Remediation of trace metals polluting soils can be carried out using physico-chemical processes; nevertheless, these techniques damage the soil's biological activity and require expensive equipment. Phytoremediation is a relatively low-cost technology based on the use of selected plants to remove, degrade, or contain pollutants. The potential of kenaf for phytoremediation of Cd-contaminated soil was investigated. Kenaf plants were grown in pots containing different concentrations of cadmium. Biomass production was recorded, and the cadmium content of different organs was determined by atomic emission spectrometry. Cadmium transfer from a contaminated soil to plants and into plant tissues is discussed in terms of the Bioconcentration Factor (BCF) and the Transfer Factor (TF). Results showed that Cd was found in kenaf plants at different levels. The tolerance, accumulation potential, and biomass productivity indicated that kenaf could be used in phytoremediation.
Keywords: kenaf, cadmium, phytoremediation, contaminated soil
Procedia PDF Downloads 525
25418 Green Synthesis of Zinc Oxide Nanoparticles Using Tomato (Lycopersicon esculentum) Extract and Its Application for Solar Cells
Authors: Prasanta Sutradhar, Mitali Saha
Abstract:
With an increasing awareness of green and clean energy, zinc oxide based solar cells are suitable candidates for cost-effective and environmentally friendly energy conversion devices. In this work, we report the green synthesis of zinc oxide (ZnO) nanoparticles by a thermal method and under microwave irradiation, using the aqueous extract of tomatoes as a non-toxic and ecofriendly reducing material. The synthesized ZnO nanoparticles were characterized by UV-Visible spectroscopy (UV-Vis), infrared spectroscopy (IR), dynamic light scattering particle size analysis (DLS), scanning electron microscopy (SEM), atomic force microscopy (AFM), and X-ray diffraction (XRD). A series of ZnO nanocomposites with titanium dioxide nanoparticles (TiO2) and graphene oxide (GO) were prepared for photovoltaic application. Structural and morphological studies of these nanocomposites were carried out using UV-Vis, SEM, XRD, and AFM. Current-voltage measurements of the nanocomposites demonstrated an enhanced power conversion efficiency of 6.18% in the case of the ZnO/GO/TiO2 nanocomposite.
Keywords: ZnO, green synthesis, microwave, nanocomposites, I-V characteristics
Procedia PDF Downloads 402
25417 Effect of Cold Water Immersion on Bone Mineral Metabolism in Aging Rats
Authors: Irena Baranowska-Bosiacka, Mateusz Bosiacki, Patrycja Kupnicka, Anna Lubkowska, Dariusz Chlubek
Abstract:
Physical activity and a balanced diet are among the key factors of "healthy ageing". Physical effort, including swimming in cold water (such as bathing in natural water reservoirs), is widely recognized as a hardening factor with a positive effect on mental and physical health; at the same time, there is little scientific evidence to verify this hypothesis. The literature to date provides data on the impact of these factors on selected physiological and biochemical blood parameters, but there are no studies on the effect of immersion in cold water on mineral metabolism, especially in bone. It therefore seems important to perform such an analysis for key elements such as calcium (Ca), magnesium (Mg), and phosphorus (P). Taking the above into account, we hypothesized that exercise in cold water may positively affect mineral metabolism and bone density in aging rats. The aim of the study was to evaluate the effect of an 8-week swimming training program on mineral metabolism and bone density in aging rats exercising in cold water (5°C), in comparison to rats swimming in thermal comfort (36°C) and sedentary (control) rats of both sexes. The concentrations of the examined elements in bone were determined using inductively coupled plasma optical emission spectrometry (ICP-OES). The mineral density of the rats' femurs was measured using a Hologic Horizon DEXA System® densitometer. The results of our study showed that swimming in cold water affects bone mineral metabolism in aging rats by changing Ca, Mg, and P concentrations while increasing bone density. In males, a decrease in Mg concentration and no change in bone density were observed.
In light of these results, it seems that swimming in cold water may positively modify the bone aging process by improving the mechanisms affecting bone density.
Keywords: swimming in cold water, adaptation to cold water, bone mineral metabolism, aging
Procedia PDF Downloads 60
25416 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can increase data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable.
By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn helps improve overall performance by enabling better insights through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 137
25415 Proximate and Amino Acid Composition of Amaranthus hybridus (Spinach), Celosia argentea (Cock's Comb) and Solanum nigrum (Black Nightshade)
Authors: S. O. Oladeji, I. Saleh, A. U. Adamu, S. A. Fowotade
Abstract:
The proximate composition, trace metal levels, and amino acid composition of Amaranthus hybridus, Celosia argentea, and Solanum nigrum were determined. These vegetables were high in ash content. The elements determined included calcium, chromium, copper, iron, lead, magnesium, nickel, phosphorus, potassium, sodium, and zinc, using a flame photometer, atomic absorption, and UV-Visible spectrophotometers. Calcium levels were highest in all the samples, ranging from 145.28±0.38 to 235.62±0.41 mg/100g, followed by phosphorus. Quantitative chromatographic analysis of the vegetable hydrolysates revealed seventeen amino acids, with the concentration of leucine (6.51 to 6.66±0.21 g/16gN) doubling that of isoleucine (2.99 to 3.33±0.21 g/16gN) in all the samples, while the limiting amino acids were cystine and methionine. The results showed that these vegetables are of high nutritive value and could adequately be used as dietary supplements.
Keywords: proximate, amino acids, Amaranthus hybridus, Celosia argentea, Solanum nigrum
Procedia PDF Downloads 401
25414 Big Data Analysis with RHadoop
Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim
Abstract:
It is almost impossible to store or analyze big data, which grows exponentially, with traditional technologies; Hadoop is a technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis, and it can be combined with distributed processing based on Hadoop. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis with different sizes of real data. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop setup with the lm function and the biglm package on a big-memory machine. The results showed that RHadoop was faster than the other approaches, owing to parallel processing that increases the number of map tasks as the size of the data grows.
Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop
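The distributed-regression idea behind this setup can be sketched without Hadoop itself: each map task emits the sufficient statistics (X'X, X'y) of its data chunk, and the reducer sums them and solves the normal equations. A minimal Python sketch under that assumption (synthetic data; RHadoop's actual rmr2 interface is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: y = 1 + 2*x1 - 3*x2 + noise, split into chunks as a
# stand-in for HDFS blocks processed by separate map tasks.
X = rng.normal(size=(10_000, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1 + rng.normal(scale=0.1, size=10_000)

def map_task(X_chunk, y_chunk):
    """Each mapper returns the sufficient statistics (X'X, X'y) of its chunk."""
    Xa = np.column_stack([np.ones(len(X_chunk)), X_chunk])  # add intercept
    return Xa.T @ Xa, Xa.T @ y_chunk

def reduce_task(partials):
    """The reducer sums the per-chunk statistics and solves the normal equations."""
    XtX = sum(p[0] for p in partials)
    Xty = sum(p[1] for p in partials)
    return np.linalg.solve(XtX, Xty)

chunks = zip(np.array_split(X, 8), np.array_split(y, 8))
beta = reduce_task([map_task(Xc, yc) for Xc, yc in chunks])
# beta ≈ [1, 2, -3] (intercept, x1, x2)
```

Because the map outputs are small fixed-size matrices, the reduce step stays cheap no matter how many chunks (map tasks) the data is split into, which is why adding data nodes speeds the job up.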
Procedia PDF Downloads 437
25413 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
In order to solve memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.
Keywords: data augmentation, mutex task generation, meta-learning, text classification
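A toy sketch of the mutex-task idea (the paper's actual augmentation and MAML training loop are not shown, and the feature/label pairs below are invented for illustration): each generated task relabels the same features with a random permutation of the label set, so the same feature maps to different labels across tasks and memorizing feature-to-label pairs cannot transfer.

```python
import random

random.seed(0)

# Toy dataset: feature -> true label (e.g., a text pattern -> class).
data = [("feat_a", 0), ("feat_b", 1), ("feat_c", 0), ("feat_d", 1)]
n_classes = 2

def make_mutex_task(examples, n_classes):
    """Relabel each example via a random permutation of the label set, so the
    task is inconsistent with the original feature->label distribution."""
    perm = list(range(n_classes))
    random.shuffle(perm)
    return [(x, perm[y]) for x, y in examples]

def extract_key_data(examples, k):
    """Key-data extraction (as proposed): use only a subset of examples
    when generating mutex tasks, to bound the extra computation."""
    return random.sample(examples, k)

tasks = [make_mutex_task(extract_key_data(data, 2), n_classes) for _ in range(4)]
```

In a full MAML setup, these mutex tasks would be mixed into the meta-training task distribution to discourage the memorization shortcut.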
Procedia PDF Downloads 94
25412 Nanostructural Analysis of the Polylactic Acid (PLA) Fibers Functionalized by RF Plasma Treatment
Authors: J. H. O. Nascimento, F. R. Oliveira, K. K. O. S. Silva, J. Neves, V. Teixeira, J. Carneiro
Abstract:
Aliphatic polyesters such as polylactic acid (PLA), in the form of fibers, nanofibers, or plastic films, generally possess chemically inert surfaces, lack porosity, and have a surface free energy (ΔG) of less than 32 mN/m. PLA is therefore considered a low-surface-energy material and consequently has a low work of adhesion. For this reason, products manufactured from these polymers are often subjected to surface treatments that change their physico-chemical surface properties, improving their wettability and Work of Adhesion (WA). Low-pressure radio-frequency (RF) plasma treatment was performed in order to improve the WA of PLA fibers. Different parameters, such as power, working-gas ratio (argon/oxygen), and treatment time, were used to optimize the plasma conditions for modifying the PLA surface properties. With plasma treatment, a significant increase in the work of adhesion on the PLA fiber surface was observed. XPS analysis showed an increase in polar functional groups, and the SEM and AFM images revealed a considerable increase in roughness.
Keywords: RF plasma, surface modification, PLA fabric, atomic force microscopy, nanotechnology
Procedia PDF Downloads 539
25411 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network
Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan
Abstract:
Data aggregation is a helpful technique for reducing the data communication overhead in a wireless sensor network. One of the important tasks in data aggregation is the positioning of the aggregation points. Much work has been done on data aggregation, but the efficient positioning of the aggregation points has received far less attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network and propose an algorithm to select aggregator positions for a scenario where aggregator nodes are more powerful than sensor nodes.
Keywords: aggregation point, data communication, data aggregation, wireless sensor network
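The paper's algorithm is not given in the abstract, but the placement objective can be illustrated with a minimal sketch: among candidate (more powerful) nodes, pick the one minimizing total distance to the sensors it would aggregate for, as a proxy for communication cost. The coordinates below are invented.

```python
import math

# Hypothetical deployment: sensor coordinates and candidate (powerful) nodes.
sensors = [(1, 1), (2, 4), (5, 2), (6, 6), (3, 3)]
candidates = [(2, 2), (5, 5), (0, 6)]

def total_distance(point, nodes):
    """Sum of Euclidean distances from a point to all given nodes."""
    return sum(math.dist(point, n) for n in nodes)

def best_aggregator(candidates, sensors):
    """Pick the candidate position minimizing total distance to all sensors,
    a simple proxy for the data communication overhead."""
    return min(candidates, key=lambda c: total_distance(c, sensors))

# best_aggregator(candidates, sensors) -> (2, 2) for the data above
```

A real deployment would also weigh residual energy and hop counts, but the same argmin structure applies.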
Procedia PDF Downloads 161
25410 Spatial Econometric Approaches for Count Data: An Overview and New Directions
Authors: Paula Simões, Isabel Natário
Abstract:
This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available to model data collected with reference to location in space, from classical spatial econometrics to recent developments on modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data
Procedia PDF Downloads 595
25409 Increasing the Speed of the Apriori Algorithm by Dimension Reduction
Authors: A. Abyar, R. Khavarzadeh
Abstract:
The most basic and important decision-making tools for industrial and service managers are understanding the market and customer behavior. In this regard, the Apriori algorithm, one of the well-known machine learning methods, is used to identify customer preferences. On the other hand, with the increasing diversity of goods and services and the speed at which customer behavior changes, we are faced with big data, and because of the large number of competitors there is an urgent need for continuous analysis of these data. However, the speed of the Apriori algorithm decreases as data volume increases. In this paper, the big data PCA method is used to reduce the dimension of the data in order to increase the speed of the Apriori algorithm. In the simulation section, the results are examined by generating data with different volumes and different diversity. The results show that when using this method, the speed of the Apriori algorithm increases significantly.
Keywords: association rules, Apriori algorithm, big data, big data PCA, market basket analysis
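For reference, the Apriori core whose speed the dimension reduction targets can be sketched in a few lines (the paper's big data PCA preprocessing step is omitted; the transactions below are invented): frequent itemsets are grown level by level, and only frequent k-itemsets are joined into (k+1)-candidates.

```python
from itertools import combinations

# Toy market-basket transactions.
transactions = [
    {"milk", "bread"},
    {"milk", "diapers", "beer"},
    {"bread", "diapers", "beer"},
    {"milk", "bread", "diapers"},
]

def apriori(transactions, min_support=0.5):
    """Return all itemsets with support >= min_support, grown level by level."""
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n
    items = sorted({i for t in transactions for i in t})
    frequent, level = {}, [frozenset([i]) for i in items]
    while level:
        level = [s for s in level if support(s) >= min_support]
        frequent.update({s: support(s) for s in level})
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        level = list({a | b for a, b in combinations(level, 2)
                      if len(a | b) == len(a) + 1})
    return frequent
```

The candidate lists grow combinatorially with the number of distinct items, which is exactly why shrinking the dimensionality before mining can speed the algorithm up.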
Procedia PDF Downloads 5
25408 A NoSQL Based Approach for Real-Time Managing of Robotics's Data
Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir
Abstract:
This paper deals with the continual progression of data, for which new data management solutions have emerged: NoSQL databases. These have spread across several areas, including personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. Nowadays, the number of these database management systems is increasing. They store data very well, and with the trend of big data, storing data demands new structures and methods for managing enterprise data. The new intelligent machines in the e-learning sector thrive on more data, so smart machines can learn more and faster. Robotics is the use case on which we focus our tests. We implement NoSQL for robotics to wrestle all the data robots acquire into usable form, because with ordinary approaches to robotics we face severe limits in managing and finding exact information in real time. Our proposed approach is demonstrated by experimental studies and a running example used as a use case.
Keywords: NoSQL databases, database management systems, robotics, big data
Procedia PDF Downloads 356
25407 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data; therefore, linking multi-source data is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge. Ensembles are versatile machine learning models in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis, the fuzzy optimized multi-objective clustering ensemble (FOMOCE). Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
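FOMOCE itself is not specified in the abstract, but the clustering-ensemble backbone such methods build on can be sketched with a co-association consensus: base clusterings (e.g., one per data source) vote on whether two objects belong together, and objects with high co-association are merged. The base labelings below are invented:

```python
import numpy as np

# Base clusterings of 6 objects from three hypothetical sources
# (one row per source; entries are cluster labels).
base = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
])

def co_association(base):
    """co[i, j] = fraction of base clusterings placing i and j together."""
    n = base.shape[1]
    co = np.zeros((n, n))
    for labels in base:
        co += labels[:, None] == labels[None, :]
    return co / len(base)

def consensus(co, threshold=0.5):
    """Merge objects whose co-association exceeds the threshold
    (connected components over the thresholded similarity graph)."""
    n = len(co)
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if co[i, j] > threshold:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels
```

A multi-objective, fuzzy variant like the one proposed would replace the hard threshold with optimized membership degrees, but the co-association structure is the shared starting point.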
Procedia PDF Downloads 191
25406 Probing Neuron Mechanics with a Micropipette Force Sensor
Authors: Madeleine Anthonisen, M. Hussain Sangji, G. Monserratt Lopez-Ayon, Margaret Magdesian, Peter Grutter
Abstract:
Advances in micromanipulation techniques and real-time particle tracking with nanometer resolution have enabled biological force measurements at scales relevant to neuron mechanics. An approach to precisely control and maneuver neurite-tethered polystyrene beads is presented. Analogous to an Atomic Force Microscope (AFM), this multi-purpose platform is a force sensor with image acquisition and manipulation capabilities. A mechanical probe composed of a micropipette with its tip fixed to a functionalized bead is used to initiate the formation of a neurite in a sample of rat hippocampal neurons, while simultaneously measuring the tension in that neurite as the sample is pulled away from the beaded tip. With optical imaging methods, a force resolution of 12 pN is achieved. Moreover, the advantage of this technique over alternatives such as AFM, namely its ease of manipulation, which ultimately allows higher-throughput investigation of the mechanical properties of neurons, is demonstrated.
Keywords: axonal growth, axonal guidance, force probe, pipette micromanipulation, neurite tension, neuron mechanics
Procedia PDF Downloads 367
25405 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data
Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim
Abstract:
Smart-card data are expected to provide information on activity patterns as an alternative to conventional person trip surveys. The focus of this study is a method for training on person trip surveys to supplement smart-card data, which do not contain the purpose of each trip. We selected only the features available from smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, to train on the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution times with good performance. The validation results showed that the proposed method efficiently estimated the trip purpose. The GIS data for the station and the duration of stay at the destination were significant features in modeling trip purpose.
Keywords: activity pattern, data fusion, smart-card, XGBoost
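A minimal sketch of this modeling setup (synthetic features and labels, not the study's data; scikit-learn's GradientBoostingClassifier is used as a stand-in with the same fit/score interface, since the xgboost package may not be available): trip-level features such as boarding hour, stay duration, and a GIS land-use indicator are used to predict a trip-purpose label learned from survey data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000

# Hypothetical smart-card features: boarding hour, stay duration at the
# destination (minutes), and a GIS land-use indicator near the station.
hour = rng.integers(5, 24, n)
stay = rng.normal(300, 100, n)
commercial = rng.integers(0, 2, n)

# Toy survey labels: 1 = shopping trip (commercial area, off-peak), 0 = other.
purpose = ((commercial == 1) & (hour >= 10)).astype(int)

X = np.column_stack([hour, stay, commercial])
X_tr, X_te, y_tr, y_te = train_test_split(X, purpose, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Swapping in `xgboost.XGBClassifier` requires no other code changes, which is one practical appeal of the tree-boosting family the study relies on.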
Procedia PDF Downloads 248
25404 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
In order to solve memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.
Keywords: mutex task generation, data augmentation, meta-learning, text classification
Procedia PDF Downloads 144
25403 3D Linear and Cyclic Homo-Peptide Crystals Forged by Supramolecular Swelling Self-Assembly
Authors: Wenliang Song, Yu Zhang, Hua Jin, Il Kim
Abstract:
The self-assembly of polypeptides (PPs) into well-defined structures at different length scales is both biomimetically relevant and fundamentally interesting. Although there are various reports of nanostructures fabricated by the self-assembly of various PPs, the directed self-assembly of a PP into a three-dimensional (3D) hierarchical structure has proven difficult, despite its importance for biological applications. Herein, an efficient method has been developed, based on the living polymerization of phenylalanine N-carboxyanhydride (NCA) toward linear and cyclic polyphenylalanine, and the newly invented swelling methodology can form diverse hierarchical polypeptide crystals. The solvent-dependent self-assembly behaviors of these homopolymers were characterized by high-resolution imaging tools such as atomic force microscopy, transmission electron microscopy, and scanning electron microscopy. The linear and cyclic polypeptides formed hierarchical 3D nanoscale shapes, such as spheres, cubes, stratiform structures, and hexagonal stars, in different solvents. Notably, a crystalline packing model is proposed to explain the formation of the 3D nanostructures based on the various diffraction patterns, giving insight into their dissimilar shape evolution during the self-assembly process.
Keywords: self-assembly, polypeptide, bio-polymer, crystalline polymer
Procedia PDF Downloads 242
25402 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming
Authors: Milind Chaudhari, Suhail Balasinor
Abstract:
Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate. Because of this, there is a real chance that the world's entire food production will drop by 40% in the next two decades. Vertical farming can help food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and sunlight by analyzing millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations control the indoor environment by leveraging big data to enhance food quantity and quality.
Keywords: big data, IoT, vertical farming, indoor farming
Procedia PDF Downloads 176
25401 Assessment of Chemical and Physical Properties of Surface Water Resources in Flood Affected Area
Authors: Siti Hajar Ya’acob, Nor Sayzwani Sukri, Farah Khaliz Kedri, Rozidaini Mohd Ghazi, Nik Raihan Nik Yusoff, Aweng A/L Eh Rak
Abstract:
The flood event that occurred in mid-December 2014 on the East Coast of Peninsular Malaysia drew nationwide public attention. Apart from the loss of and damage to properties and belongings, the massive flood introduced environmental disturbances in the surface water resources of the affected area. A study was conducted to measure the physical and chemical composition of the Galas River and Pergau River, so as to identify the flood's impact on environmental deterioration in the surrounding area. The collected samples were analyzed in-situ using a YSI portable instrument, and also in the laboratory by acid digestion and heavy metals analysis using Atomic Absorption Spectroscopy (AAS). Results showed that the ranges of temperature (°C), DO (mg/L), EC (µS/cm), TDS (mg/L), turbidity (NTU), pH, and salinity were 25.05-26.65, 1.51-5.85, 0.032-0.054, 0.022-0.035, 23.2-76.4, 3.46-7.31, and 0.01-0.02, respectively. The results of this study could be used as a primary database to evaluate the water quality status of these rivers after the massive flood.
Keywords: flood, river, heavy metals, AAS
Procedia PDF Downloads 382
25400 Data Challenges Facing Implementation of Road Safety Management Systems in Egypt
Authors: A. Anis, W. Bekheet, A. El Hakim
Abstract:
Implementing a Road Safety Management System (SMS) in a crowded developing country such as Egypt is a necessity. Establishing a sustainable SMS requires a comprehensive, reliable data system covering all information pertinent to road crashes. This paper surveys the data available in Egypt and assesses its validity for use in an SMS. The research supplies some of the missing data and identifies the data that remain unavailable in Egypt, looking forward to contributions from the scientific community, the authorities, and the public in solving the problem of missing or unreliable crash data. The data required for implementing an SMS in Egypt fall into three categories: the first is available data, such as fatality and injury rates, which this research shows may be inconsistent and unreliable; the second is data that are not available but may be estimated, an example being the vehicle cost estimate provided in this research; the third is data that are not available and must be measured case by case, such as the functional and geometric properties of a facility. Some questions are posed in this research to the scientific community, such as how to improve the links among road safety stakeholders in order to obtain a consistent, unbiased, and reliable data system.Keywords: road safety management system, road crash, road fatality, road injury
Procedia PDF Downloads 152
25399 Sources and Potential Ecological Risks of Heavy Metals in the Sediment Samples From Coastal Area in Ondo, Southwest Nigeria
Authors: Ogundele Lasun Tunde, Ayeku Oluwagbemiga Patrick
Abstract:
Heavy metals are released into the sediments of aquatic environments from both natural and anthropogenic sources, and they are considered a worldwide issue due to their deleterious ecological risks and disruption of the food chain. In this study, sediment samples were collected at three major sites (Awoye, Abereke and Ayetoro) along the Ondo coastal area using a Van Veen grab sampler. The concentrations of As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, V and Zn were determined by Atomic Absorption Spectroscopy (AAS). The combined concentration data were subjected to the Positive Matrix Factorization (PMF) receptor approach for source identification and apportionment. The probable risks posed by heavy metals in the sediment were estimated with potential and integrated ecological risk indices. Among the measured heavy metals, Fe had average concentrations of 20.38 ± 2.86, 23.56 ± 4.16 and 25.32 ± 4.83 µg/g at the Abereke, Awoye and Ayetoro sites, respectively. The PMF resulted in the identification of four sources of heavy metals in the sediments. The resolved sources and their percentage contributions were oil exploration (39%), industrial waste/sludge (35%), detrital processes (18%) and Mn-sources (8%). Oil exploration activities and industrial wastes are the major sources contributing heavy metals to the coastal sediments. The major pollutants posing ecological risks to the local aquatic ecosystem are As, Pb, Cr and Cd (40 < Ei ≤ 80), classifying the sites as moderate risk. The integrated risk values of Awoye, Abereke and Ayetoro are 231.2, 234.0 and 236.4, respectively, suggesting that the study areas pose a moderate ecological risk. The study showed the suitability of the PMF receptor model for source identification of heavy metals in sediments.
Also, intensive anthropogenic activities and natural sources could continue to discharge heavy metals into the study area, which may increase the heavy metal content of the sediments and further contribute to the associated ecological risk, thus affecting the local aquatic ecosystem.Keywords: positive matrix factorization, sediments, heavy metals, sources, ecological risks
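The per-metal Ei values and the integrated (site-level) risk values quoted in this abstract follow the general shape of Hakanson's potential ecological risk index, Ei = Tr × (C / Cn) summed into RI. A minimal sketch of that calculation is below; the toxic response factors and all concentrations are illustrative assumptions, not the study's actual inputs.

```python
# Sketch of Hakanson's potential ecological risk index. The Tr values are
# Hakanson's commonly cited toxic response factors; the measured and
# background concentrations are hypothetical, not the study's data.
TR = {"As": 10, "Cd": 30, "Cr": 2, "Cu": 5, "Pb": 5, "Zn": 1}

def ecological_risk(measured, background, tr=TR):
    """Return per-metal Ei = Tr * (C / Cn) and the integrated risk RI."""
    ei = {m: tr[m] * measured[m] / background[m]
          for m in measured if m in tr and m in background}
    return ei, sum(ei.values())

# hypothetical sediment concentrations and background values (µg/g)
measured = {"As": 12.0, "Pb": 40.0, "Cd": 0.6}
background = {"As": 15.0, "Pb": 20.0, "Cd": 0.3}
ei, ri = ecological_risk(measured, background)
# 40 < Ei <= 80 marks "moderate" risk for a single metal in Hakanson's scheme
moderate = [m for m, v in ei.items() if 40 < v <= 80]
```

The integrated value RI is simply the sum of the per-metal Ei terms, which is why a site can be classed as moderate overall even when individual metals differ widely.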
Procedia PDF Downloads 24
25398 Evaluation of Pollution in Underground Water from ODO-NLA and OGIJO Metropolis Industrial Areas in Ikorodu
Authors: Zaccheaus Olasupo Apotiola
Abstract:
This study evaluates the level of pollution in underground water from the Ogijo and Odo-nla areas in Ikorodu, Lagos State. Water samples were collected around various industries and transported in ice packs to the laboratory. Temperature and pH were determined on site; physicochemical parameters and total plate count were determined using standard methods, while heavy metal concentrations were determined using the Atomic Absorption Spectrophotometry method. The temperature was observed at a range of 20-28 °C and the pH at a range of 5.64 to 6.91, and the samples were significantly different (P < 0.05) from one another. The chloride content was observed at a range of 70.92 to 163.10 mg/L; there was no significant difference (P > 0.05) between samples 40 GAJ and ISUP, but there was a significant difference (P < 0.05) between the other samples. The acidity value varied from 11.0 to 34.5 mg/L, and the samples had no alkalinity. The total plate count was found at 20-125 cfu/ml. Arsenic, lead, cadmium, and mercury concentrations ranged between 0.03-0.09, 0.04-0.11, 0.00-0.00, and 0.00-0.00 mg/L, respectively. However, there was a significant difference (P < 0.05) between all samples except for samples 4OGA, 5OGAJ, and 3SUTN, which were not significantly different (P > 0.05). The results revealed that none of the samples are safe for human consumption, as the levels of arsenic and lead are above the maximum value of 0.01 mg/L recommended by NIS 554 and WHO.Keywords: arsenic, cadmium, lead, mercury, WHO
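The safety conclusion in this abstract reduces to a simple comparison of measured ranges against a guideline limit. A minimal sketch of that check, using the concentration ranges quoted above; applying the single 0.01 mg/L limit to every metal is a simplification of the abstract's framing (real NIS 554 / WHO guideline values differ per metal):

```python
# Exceedance check: a metal is flagged unsafe when even its minimum
# measured concentration is above the cited guideline limit.
LIMIT_MG_L = 0.01  # maximum value cited in the abstract for As and Pb

ranges = {               # (min, max) in mg/L, from the abstract
    "Arsenic": (0.03, 0.09),
    "Lead":    (0.04, 0.11),
    "Cadmium": (0.00, 0.00),
    "Mercury": (0.00, 0.00),
}

unsafe = {metal: high for metal, (low, high) in ranges.items()
          if low > LIMIT_MG_L}
# Arsenic and lead exceed the limit even at their minimum measured values
```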
Procedia PDF Downloads 521
25397 Big Data-Driven Smart Policing: Big Data-Based Patrol Car Dispatching in Abu Dhabi, UAE
Authors: Oualid Walid Ben Ali
Abstract:
Big Data has become one of the buzzwords of today. The recent explosion of digital data has led organizations, whether private or public, into a new era of more efficient decision making. At some point, businesses decided to use the concept to learn what makes their clients tick, with phrases like 'sales funnel' analysis, 'actionable insights', and 'positive business impact'. So it stands to reason that Big Data was viewed through green (read: money) colored lenses. Somewhere along the line, however, someone realized that collecting and processing data does not have to serve business purposes only; it can also be used to assist law enforcement, improve policing, or enhance road safety. This paper briefly presents how Big Data has been used in policing to improve decision making in the daily operations of the police. As an example, we present a Big Data-driven system that is used to accurately dispatch patrol cars in a geographic environment. The system is also used to allocate, in real time, the nearest patrol car to the location of an incident. This system has been implemented and applied in the Emirate of Abu Dhabi in the UAE.Keywords: big data, big data analytics, patrol car allocation, dispatching, GIS, intelligent, Abu Dhabi, police, UAE
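The nearest-car allocation described above can be sketched in a few lines: pick the available patrol car with the smallest great-circle distance to the incident. The car IDs and coordinates below are hypothetical, and the deployed system presumably routes over the GIS road network with real-time traffic, which this toy haversine version omits.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def dispatch(incident, cars):
    """Return the id of the available car nearest to the incident."""
    return min(cars, key=lambda c: haversine_km(*incident, *cars[c]))

# hypothetical patrol car positions around Abu Dhabi
cars = {"P-01": (24.45, 54.38), "P-02": (24.49, 54.36), "P-03": (24.42, 54.50)}
nearest = dispatch((24.48, 54.37), cars)
```

Straight-line distance is only a proxy for response time; the design choice in a real dispatcher is usually network travel time, with the haversine used as a fast pre-filter.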
Procedia PDF Downloads 491
25396 Growth of Non-Polar a-Plane AlGaN Epilayer with High Crystalline Quality and Smooth Surface Morphology
Authors: Abbas Nasir, Xiong Zhang, Sohail Ahmad, Yiping Cui
Abstract:
Non-polar a-plane AlGaN epilayers of high structural quality have been grown on an r-plane sapphire substrate by metalorganic chemical vapor deposition (MOCVD). A graded non-polar AlGaN buffer layer with variable aluminium concentration was used to improve the structural quality of the non-polar a-plane AlGaN epilayer. The characterisations were carried out by high-resolution X-ray diffraction (HR-XRD), atomic force microscopy (AFM) and Hall effect measurements. The XRD and AFM results demonstrate that the Al-composition-graded non-polar AlGaN buffer layer significantly improved the crystalline quality and the surface morphology of the top layer. A low root mean square roughness of 1.52 nm is obtained from AFM, and a relatively low background carrier concentration down to 3.9× cm-3 is obtained from the Hall effect measurement.Keywords: non-polar AlGaN epilayer, Al-composition-graded AlGaN layer, root mean square, background carrier concentration
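The RMS roughness quoted above is, in the usual AFM convention, the standard deviation of the height map about its mean plane, Rq = sqrt(mean((z - mean(z))^2)). A minimal sketch on a synthetic height map (the array below is illustrative noise, not the paper's AFM data):

```python
import numpy as np

def rms_roughness(z):
    """RMS roughness Rq = sqrt(mean((z - mean(z))**2)) of a height map."""
    z = np.asarray(z, dtype=float)
    return float(np.sqrt(np.mean((z - z.mean()) ** 2)))

# synthetic 256x256 height map with heights in nm (illustrative only)
rng = np.random.default_rng(0)
height_map = rng.normal(loc=0.0, scale=1.5, size=(256, 256))
rq = rms_roughness(height_map)  # close to the 1.5 nm scale parameter
```

Real AFM software typically subtracts a fitted plane or polynomial background before computing Rq; subtracting only the mean, as here, assumes the scan is already levelled.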
Procedia PDF Downloads 144
25395 Mining Multicity Urban Data for Sustainable Population Relocation
Authors: Xu Du, Aparna S. Varde
Abstract:
In this research, we propose to conduct diagnostic and predictive analysis of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends as land use change patterns from a variety of data sources. The results are treated as part of urban big data together with other information such as population change and economic conditions. Multiple data mining methods are deployed on these data to analyze nonlinear relationships between parameters. The results determine the driving forces of population relocation with respect to urban sprawl and urban sustainability and their related parameters. Experiments so far reveal that the data mining methods discover useful knowledge from the multicity urban data. This work sets the stage for developing a comprehensive urban simulation model that can answer specific questions posed by targeted users. It contributes towards achieving sustainability as a whole.Keywords: data mining, environmental modeling, sustainability, urban planning
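A small illustration of why nonlinear analysis matters for data like this: a quadratic dependence between two parameters can have a near-zero Pearson coefficient yet be almost fully explained once the relationship is allowed to be nonlinear. The sketch below uses the correlation ratio (eta) over quantile bins; the "land_use" and "relocation" variables are synthetic stand-ins, not the study's data or its specific mining methods.

```python
import numpy as np

def correlation_ratio(x, y, bins=10):
    """Eta: sqrt of the fraction of y's variance explained by binning x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    group_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    ss_between = (counts * (group_means - y.mean()) ** 2).sum()
    ss_total = ((y - y.mean()) ** 2).sum()
    return float(np.sqrt(ss_between / ss_total))

rng = np.random.default_rng(1)
land_use = rng.uniform(-1, 1, 5000)
relocation = land_use ** 2 + rng.normal(0, 0.05, 5000)  # nonlinear link
pearson = np.corrcoef(land_use, relocation)[0, 1]       # near zero
eta = correlation_ratio(land_use, relocation)           # near one
```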
Procedia PDF Downloads 309
25394 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition
Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi
Abstract:
In this paper, a model order reduction method is used to approximate linear and nonlinear behavior in experimental data. The method yields an offline reduced model that approximates the experimental data, follows the data and the order of the system, and can match the experimental data over certain frequency ranges. The method is compared across different experimental datasets, and the influence of the chosen reduction order on achieving a sufficient matching condition is investigated in terms of the real and imaginary parts of the frequency response curve. Finally, the effect of the reduction order, an important parameter for nonlinear experimental data, is explained further.Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data
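The abstract does not specify which reduction method is used, so as a hedged illustration of the order-vs-matching trade-off it discusses, here is one standard technique, balanced truncation, together with a frequency-response matching check. The 3-state example system is entirely illustrative; the decay of the Hankel singular values is what guides the choice of reduction order.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def balred(A, B, C, r):
    """Reduce (A, B, C) to order r by balanced truncation; also return
    the Hankel singular values, whose decay guides the choice of r."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lp, Lq = np.linalg.cholesky(P), np.linalg.cholesky(Q)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)           # Hankel singular values
    T = Lp @ Vt.T @ np.diag(s ** -0.5)            # balancing transform
    Ti = np.diag(s ** -0.5) @ U.T @ Lq.T          # its inverse
    return (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r], s

def freq_resp(A, B, C, w):
    """SISO frequency response C (jwI - A)^-1 B at each frequency in w."""
    n = A.shape[0]
    return np.array([(C @ np.linalg.solve(1j * wi * np.eye(n) - A, B))[0, 0]
                     for wi in w])

# illustrative full model: two slow modes plus one weakly coupled fast mode
A = np.diag([-1.0, -2.0, -50.0])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 0.01]])
Ar, Br, Cr, hsv = balred(A, B, C, r=2)
w = np.logspace(-2, 2, 50)
err = np.max(np.abs(freq_resp(A, B, C, w) - freq_resp(Ar, Br, Cr, w)))
# err is bounded by twice the sum of the discarded Hankel singular values
```

Increasing r shrinks the frequency-response mismatch at the cost of model complexity, which is the matching-condition trade-off the abstract studies; for nonlinear data, reduction is typically applied to local linearizations, which this linear sketch does not cover.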
Procedia PDF Downloads 404