Search results for: data lake
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25078

24718 Evaluation of Moroccan Microalgae Spirulina platensis as a Potential Source of Natural Antioxidants

Authors: T. Ould Bellahcen, A. Amiri, I. Touam, F. Hmimid, A. El Amrani, M. Cherki

Abstract:

The antioxidant activities of three extracts (aqueous, lipidic, and ethanolic) prepared from the microalga Spirulina platensis isolated from a Moroccan lake were studied and compared using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical assays. The results revealed that the IC₅₀ values obtained with DPPH were lower than those obtained with ABTS for all extracts of this planktonic blue-green alga. The highest phenolic and flavonoid contents were found in the ethanolic extract, at 0.33 ± 0.01 mg GAE/g dw and 0.21 ± 0.01 mg quercetin/g dw, respectively. With DPPH, the highest activity (IC₅₀ = 0.449 ± 0.083 mg/ml) was found for the ethanolic extract, followed by the lipidic extract (IC₅₀ = 0.491 ± 0.059 mg/ml); the lowest activity was found for the aqueous extract (IC₅₀ = 4.148 ± 0.132 mg/ml). With ABTS, the highest activity was observed for the lipidic extract (IC₅₀ = 0.740 ± 0.012 mg/ml), while the aqueous extract recorded the lowest activity (IC₅₀ = 6.914 ± 0.0067 mg/ml) and the ethanolic extract a moderate activity (IC₅₀ = 5.852 ± 0.0171 mg/ml). It can be concluded from this first study that Spirulina platensis extracts show interesting antioxidant and antiradical properties, suggesting that this alga could be used as a potential source of natural antioxidants. A qualitative and quantitative HPLC analysis of the polyphenols and flavonoids in the extracts is in progress in order to study the correlation between antioxidant activity and chemical composition.

Keywords: Spirulina platensis, antioxidant, DPPH, ABTS

Procedia PDF Downloads 163
24717 Identifying Critical Success Factors for Data Quality Management through a Delphi Study

Authors: Maria Paula Santos, Ana Lucas

Abstract:

Organizations support their operations and decision making on the data at their disposal, so the quality of these data is remarkably important. Data Quality (DQ) is currently a relevant issue, and the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts who ordered them according to their degree of importance using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSFs for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.

Keywords: critical success factors, data quality, data quality management, Delphi, Q-Sort

Procedia PDF Downloads 211
24716 Geotechnical and Mineralogical Properties of Clay Soils in the Second Organized Industrial Region, Konya, Turkey

Authors: Mustafa Yıldız, Ali Ulvi Uzer, Murat Olgun

Abstract:

In this study, the geotechnical and mineralogical properties of the gypsum-bearing clay deposits that form the ground of the Second Organized Industrial Zone in Konya province were investigated through comprehensive field and laboratory work. Although sufficient geotechnical research has not yet been performed, intensive construction in the region continues at present. The study area consists of mid-lake sediments formed by soft, gypsum-bearing silt and clay that extend over a large area. To determine the soil profile and geotechnical characteristics, 18 boreholes were drilled, and disturbed/undisturbed soil samples were taken with Shelby tubes at 1.5 m intervals. Tests were performed on these samples to determine the index and strength properties of the soil. In addition, Standard Penetration Tests were carried out at 1.5 m intervals in all boreholes. To determine the mineralogical characteristics of the soil, whole-rock and XRD analyses were carried out on 6 samples taken from various depths through the soil profile. The strength and compressibility characteristics of the soil were defined through correlations using laboratory and field test results. The unconfined compressive strength, undrained cohesion, and compression index vary between 16 kN/m² and 405.4 kN/m², 6.5 kN/m² and 72 kN/m², and 0.066 and 0.864, respectively.

Keywords: Konya second organized industrial region, strength, compressibility, soft clay

Procedia PDF Downloads 306
24715 Data Mining in the Medicine Domain Using Decision Trees and Support Vector Machines

Authors: Djamila Benhaddouche, Abdelkader Benyettou

Abstract:

In this paper, we used data mining to extract biomedical knowledge. In general, the complex biomedical data collected in population studies are treated with statistical methods; although these are robust, they are not sufficient on their own to harness the potential wealth of the data. For that purpose, we used two learning algorithms in a second step: Decision Trees and Support Vector Machines (SVM). These supervised classification methods are used to diagnose thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.
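
As an illustrative sketch only (the dataset path, target column, and hyperparameters are assumptions, not the authors' setup), the two classifiers named above could be trained and compared with scikit-learn as follows:

```python
# Minimal sketch of the two classifiers named in the abstract (decision tree
# and SVM) applied to a generic thyroid-style feature table. The CSV path and
# target column name are placeholders, not the authors' dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("thyroid.csv")                        # hypothetical file
X, y = df.drop(columns=["diagnosis"]), df["diagnosis"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```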

Keywords: biomedical data, learning, classifier, decision tree algorithms, knowledge extraction

Procedia PDF Downloads 550
24714 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data in order to extract useful information for prediction. It is a field of research that addresses various types of problems. In data mining, classification is an important technique for classifying different kinds of data. Diabetes is one of the most common diseases. This paper implements different classification techniques on a diabetes dataset using the Waikato Environment for Knowledge Analysis (WEKA) and determines which algorithm is most suitable. The best classification algorithm for the diabetes data is Naïve Bayes, with an accuracy of 76.31% and a model build time of 0.06 seconds.
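
The experiment above was run in WEKA, which is a Java workbench; the sketch below is only a rough scikit-learn equivalent of a Naïve Bayes run on a diabetes-style table, with the file name, target column, and cross-validation setup assumed for illustration.

```python
# Hedged sketch: the abstract's experiment used WEKA; this reproduces the same
# idea with scikit-learn's Gaussian Naive Bayes. File and column names are
# placeholders (e.g. a Pima-style diabetes table).
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("diabetes.csv")                       # hypothetical dataset
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
scores = cross_val_score(GaussianNB(), X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.2%}")        # abstract reports 76.31% in WEKA
```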

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 142
24713 Cultural Approach to Batak Toba Folklore

Authors: Maritess A. Rulona

Abstract:

Cultural appropriation of traditional symbols has been a worldwide problem, and Indonesia's Batak Toba, an indigenous people group, has experienced such appropriation. The Bataknese have a rich cultural heritage and oral traditions, and their cultural symbols originate from their folklore, namely myths, legends, and folktales. This research used both the oral traditions and the cultural symbols of the Batak Toba for a comparative analysis of their ancient and modern practices. It is anchored in Franz Boas' cultural relativism for analyzing their five most common cultural symbols, and it also utilized Stith Thompson's Motif-Index to determine the common motifs evident in ten of their folklores. Ten Batak Toba key respondents provided information for this study, and some informants were also featured in the study's 20-minute documentary. The findings were: 1) traditional customs such as weddings, burial, and reburial are still observed using their cultural symbols; 2) the five most common cultural symbols are the Ulos Ragidup, Sigale Gale, Rumah Bolon, Lake Toba, and Gondang; and 3) Batak culture values animals such as the buffalo, lizard, and goldfish, owing to ancient beliefs in mythical creatures. In conclusion, this study proved that there is a clear connection between the tribe's oral traditions and cultural symbols. With these findings, the study recommends that elder Bataks teach younger Bataks to be immersed in the cultural practices and that traditional practices be incorporated in modern events.

Keywords: batak toba, cultural appropriation, motif-index, oral tradition, cultural emblems

Procedia PDF Downloads 99
24712 Comprehensive Study of Data Science

Authors: Asifa Amara, Prachi Singh, Kanishka, Debargho Pathak, Akshat Kumar, Jayakumar Eravelly

Abstract:

Today's generation is totally dependent on technology, which uses data as its fuel. The present study is about innovations and developments in data science and gives an idea of how to use the available data efficiently. This study will help readers understand the core concepts of data science. The concept of artificial intelligence was introduced by Alan Turing, the main principle being to create an artificial system that can run independently of human-given programs and can function by analyzing data to understand the requirements of its users. Data science comprises business understanding, data analysis, ethical concerns, knowledge of programming languages, the various fields and sources of data, skills, etc. The usage of data science has evolved over the years. In this review article, we cover one part of data science, namely machine learning. Machine learning uses data science for its work: machines learn through experience, which helps them do any task more efficiently. This article includes a comparative illustration of human understanding versus machine understanding, along with the advantages, applications, and real-time examples of machine learning. Data science is an important game changer in the lives of human beings. Since the advent of data science, we have found its benefits, how it leads to a better understanding of people, and how it caters to individual needs. It has improved business strategies, the services provided, forecasting, the ability to attain sustainable development, etc. This study also focuses on a better understanding of data science, which will help us to create a better world.

Keywords: data science, machine learning, data analytics, artificial intelligence

Procedia PDF Downloads 78
24711 Application of Artificial Neural Network Technique for Diagnosing Asthma

Authors: Azadeh Bashiri

Abstract:

Introduction: Lack of proper diagnosis and inadequate treatment of asthma lead to physical and financial complications. This study aimed to use data mining techniques and to create an intelligent neural-network system for the diagnosis of asthma. Methods: The study population comprised patients who had visited one of the lung clinics in Tehran. Data were analyzed using SPSS, and Pearson's chi-square coefficient was the basis for ranking the data. The neural network considered was trained using the backpropagation learning technique. Results: According to the analysis performed in SPSS to select the top factors, 13 effective factors were selected; the data were mixed in various forms, so different models were built for training and testing the networks, and in all modes the network was able to predict 100% of the cases correctly. Conclusion: Using data mining methods before designing the system structure, in order to reduce the data dimension and select the optimum data, leads to a more accurate system. Therefore, considering data mining approaches is necessary given the nature of medical data.
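
A minimal sketch of the described pipeline, with chi-square feature ranking standing in for the SPSS analysis and a small network trained by backpropagation; the dataset, column names, and network size are assumptions, not the study's actual configuration.

```python
# Illustrative pipeline: chi-square feature selection (13 factors, as in the
# study) followed by a backpropagation-trained MLP. Feature selection in the
# study was done in SPSS; SelectKBest(chi2) stands in for it here.
# Note: chi2 requires non-negative feature values.
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

df = pd.read_csv("asthma_visits.csv")                  # hypothetical dataset
X, y = df.drop(columns=["asthma"]), df["asthma"]

model = make_pipeline(SelectKBest(chi2, k=13),         # 13 effective factors
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000))
print(cross_val_score(model, X, y, cv=5).mean())
```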

Keywords: asthma, data mining, Artificial Neural Network, intelligent system

Procedia PDF Downloads 270
24710 Interpreting Privacy Harms from a Non-Economic Perspective

Authors: Christopher Muhawe, Masooda Bashir

Abstract:

With increased Internet communication technology (ICT), the virtual world has become the new normal. At the same time, there is an unprecedented collection of massive amounts of data by both private and public entities. Unfortunately, this increase in data collection has gone hand in hand with an increase in data misuse and data breaches. Regrettably, the majority of data breach and data misuse claims have been unsuccessful in the United States courts for failure to prove direct injury to physical or economic interests. The requirement to express data privacy harms in economic or physical terms negates the fact that not all data harms are physical or economic in nature. The challenge is compounded by the fact that data breach harms and risks do not materialize immediately. This research uses a descriptive and normative approach to show that not all data harms can be expressed in economic or physical terms. Expressing privacy harms purely from an economic or physical perspective negates the fact that data insecurity may result in harms that run counter to the functions of privacy in our lives: the promotion of liberty, selfhood, autonomy, and human social relations, and the furtherance of a free society. There is no economic value that can be placed on these functions of privacy. The proposed approach addresses data harms from a psychological and social perspective.

Keywords: data breach and misuse, economic harms, privacy harms, psychological harms

Procedia PDF Downloads 192
24709 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course

Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu

Abstract:

This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. A dataset of 140 rows pertaining to the performance of two batches of students was used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models using the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the best-fitting ML model, whereas the CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy on the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, while the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-.
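
A hedged sketch of the train-on-synthetic, test-on-real loop described above, using the open-source ctgan package (class names vary slightly between versions) and a plain logistic regression in place of PyCaret's model search; the file name, column names, epochs, and sample size are placeholders, not the authors' settings.

```python
# Sketch: fit CTGAN on the small real table, train a classifier on the
# synthetic sample, then score it on the real data with multiclass ROC-AUC.
# Assumes numeric features and that every grade class appears in the
# synthetic sample; all names below are illustrative.
import pandas as pd
from ctgan import CTGAN
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import roc_auc_score

real = pd.read_csv("physics_grades.csv")           # hypothetical 140-row table
target = "Grade"

synth_model = CTGAN(epochs=300)
synth_model.fit(real, discrete_columns=[target])
synthetic = synth_model.sample(2000)

le = LabelEncoder().fit(real[target])
clf = LogisticRegression(max_iter=1000)
clf.fit(synthetic.drop(columns=[target]), le.transform(synthetic[target]))

proba = clf.predict_proba(real.drop(columns=[target]))
print(roc_auc_score(le.transform(real[target]), proba,
                    multi_class="ovr", average="macro"))
```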

Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, gaussian copula CTGAN

Procedia PDF Downloads 40
24708 Data Access, AI Intensity, and Scale Advantages

Authors: Chuping Lo

Abstract:

This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Therefore, large countries, which inherently have greater data resources, tend to have higher incomes than smaller countries, and the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade, as they are better able to utilize global data.

Keywords: digital intensity, digital divide, international trade, economies of scale

Procedia PDF Downloads 60
24707 Secured Transmission and Reserving Space in Images Before Encryption to Embed Data

Authors: G. R. Navaneesh, E. Nagarajan, C. H. Rajam Raju

Abstract:

Nowadays, multimedia data are used to store secure information. All previous methods allocate space in the image for data embedding after encryption. In this paper, we propose a novel method that reserves room in the image, surrounded by a boundary, before encryption using a traditional RDH algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted image. The proposed method achieves real-time performance; that is, data extraction and image recovery are free of any error. A secure transmission process is also discussed, which improves efficiency by ten times compared to the other processes discussed.
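
For readers unfamiliar with the LSB idea mentioned in the keywords, here is a generic least-significant-bit embed/extract sketch; it is not the authors' reserving-room RDH scheme, only the basic bit-plane operation such schemes build on.

```python
# Generic LSB embedding/extraction on a toy image array; payload bits simply
# overwrite the least significant bit of the first len(bits) pixels.
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first len(bits) pixels with the payload bits."""
    flat = pixels.flatten().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits payload bits."""
    return pixels.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)   # toy image
payload = np.random.randint(0, 2, size=16, dtype=np.uint8)
stego = embed_lsb(cover, payload)
assert np.array_equal(extract_lsb(stego, 16), payload)           # error-free
```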

Keywords: secure communication, reserving room before encryption, least significant bits, image encryption, reversible data hiding

Procedia PDF Downloads 409
24706 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim to use gene data that was initially collected for autism detection to find whether, and how accurately, these data can be used for identification applications. Our main goal is to find out whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and the classification rate remains close to optimal at noise standard deviations up to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
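
A minimal sketch of the evaluation described above (placeholder data shapes; the real study used preprocessed autism gene data): a k-NN classifier scored on a test set corrupted with zero-mean Gaussian noise of increasing standard deviation.

```python
# Sketch: 1-NN identity classifier, with the test set perturbed by Gaussian
# noise at the standard deviations discussed in the abstract. The feature
# matrix and identity labels below are random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # placeholder genetic feature matrix
y = rng.integers(0, 20, size=200)       # placeholder subject identities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

for sigma in (0.0, 1.0, 3.0):           # noise levels mentioned in the abstract
    noisy = X_te + rng.normal(0.0, sigma, size=X_te.shape)
    print(sigma, knn.score(noisy, y_te))
```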

Keywords: biometrics, genetic data, identity verification, k nearest neighbor

Procedia PDF Downloads 251
24705 A Review on Intelligent Systems for Geoscience

Authors: R. Palson Kennedy, P. Kiran Sai

Abstract:

This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, and presents a review from the data life cycle perspective to meet that need. Numerous facets of the geosciences pose unique difficulties for the study of intelligent systems. Geoscience data are notoriously difficult to analyze since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half addresses data science's essential concepts and theoretical underpinnings, while the second section covers key themes and shared experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.

Keywords: data science, intelligent system, machine learning, big data, data life cycle, recent developments, geoscience

Procedia PDF Downloads 131
24704 Sustainable Use of Laura Lens during Drought

Authors: Kazuhisa Koda, Tsutomu Kobayashi

Abstract:

Laura Island, located about 50 km from downtown Majuro, is a source of water supply for Majuro Atoll, the capital of the Republic of the Marshall Islands. The low, flat Majuro Atoll has neither rivers nor lakes, so it is very important for the atoll to conserve its water resources. However, up-coning, the partial rise of the freshwater-saltwater boundary near a water-supply well, was caused by excess pumping during the severe drought of 1998. Up-coning makes use of the freshwater lens difficult, so appropriate water usage is required to prevent it, because there is no other water source during drought. A numerical simulation of water usage using the SEAWAT model was conducted for the central part of Laura Island, including the water-supply well affected by up-coning. The freshwater lens was reproduced as the result of infiltration of consistent average rainfall, and the simulated lens shape was almost the same as the one observed in 1985. A monthly rainfall of zero and a variable daily pump discharge were then used to calculate the sustainable pump discharge from the water-supply well. The total pump discharge increased as the daily pump discharge was increased, but the lens then needed more time to recover from up-coning. A pumping standard that reduces pumping intensity is therefore proposed, based on the numerical simulation of the occurrence of up-coning at Laura Island during drought.
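
As background not stated in the abstract, the equilibrium depth of the freshwater-saltwater interface below sea level in an island lens is commonly approximated by the Ghyben-Herzberg relation, which is why pumping that locally lowers the water table raises the interface roughly forty times as much (up-coning): \( z = \frac{\rho_f}{\rho_s - \rho_f}\, h \approx 40\, h \), where \( h \) is the water-table elevation above sea level, \( z \) the interface depth below sea level, and \( \rho_f \approx 1000 \) and \( \rho_s \approx 1025\ \mathrm{kg/m^3} \) are the freshwater and seawater densities.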

Keywords: freshwater lens, islands, numerical simulation, sustainable water use

Procedia PDF Downloads 289
24703 The Cultural Significance of Recycling - A Native American Perspective

Authors: Martin A. Curry

Abstract:

Madeline Island is a small island community in Wisconsin, USA. Located in Lake Superior, it has been home to the Anishinaabe/Ojibway people for thousands of years and is known as Moningwankuaning Minis, "The Island of the Golden-Breasted Woodpecker". The community relies on summer tourism as its source of income, with a small population of 400 year-round residents. Supervisor Martin A. Curry (of Ojibway/German descent) has been working on a fiscally responsible, environmentally principled, and culturally centered approach to waste diversion and recycling. The tenets of this program encompass plastics, paper, food waste, local farming, energy production, and art education. Through creative writing for the local newspaper and creative interactions, Martin has worked to engage the community in a more robust interest in waste diversion, including setting up a free-will-donation store that incorporates volunteering opportunities for elders, a compost program that works with the local community garden, biodiesel production, and an art program that works with children from the local island school to make paper, grow local food, and paint murals. The entire program is based on the Ojibway concept of Mino-Bimadiiziwiin, "The Good Life"; it benefits the community and its guests and represents a microcosm of the global dilemmas of waste and recycling.

Keywords: recycling, waste diversion, island, Native American, art

Procedia PDF Downloads 110
24702 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses are of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can serve as tools for increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity about responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better business insights through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data; this helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into a data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by feedback from AEKIDEN's experience. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
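
As a purely hypothetical illustration of the decentralized ownership and embedded quality checks described above (the class, field names, and checks are invented for this sketch and are not from the paper), a domain team might publish its dataset together with a machine-checkable contract:

```python
# Hypothetical "data product" contract in a data mesh: the owning domain team
# ships the dataset with its schema and quality checks, rather than relying
# on a central data team.
from dataclasses import dataclass, field
from typing import Callable
import pandas as pd

@dataclass
class DataProduct:
    name: str
    owner: str                                   # domain team, not a central team
    schema: dict[str, str]                       # column -> expected dtype
    quality_checks: list[Callable[[pd.DataFrame], bool]] = field(default_factory=list)

    def validate(self, df: pd.DataFrame) -> bool:
        schema_ok = all(str(df[c].dtype) == t for c, t in self.schema.items())
        return schema_ok and all(check(df) for check in self.quality_checks)

orders = DataProduct(
    name="orders",
    owner="sales-domain",
    schema={"order_id": "int64", "amount": "float64"},
    quality_checks=[lambda df: df["order_id"].is_unique,
                    lambda df: (df["amount"] >= 0).all()],
)
print(orders.validate(pd.DataFrame({"order_id": [1, 2], "amount": [9.9, 5.0]})))
```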

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 129
24701 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which is growing exponentially, with traditional technologies; Hadoop is a technology that makes it possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis with actual data of different sizes. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop system with the lm function and the biglm package based on bigmemory. The results showed that our RHadoop system was faster than the other packages, owing to parallel processing that increases the number of map tasks as the data size grows.
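
RHadoop runs this regression in R on Hadoop, which cannot be reproduced in a short snippet; the sketch below only illustrates the underlying map-reduce algebra in NumPy: each "mapper" computes partial X'X and X'y sums on its chunk of rows, and the "reducer" adds them and solves the normal equations.

```python
# Map-reduce decomposition behind parallel linear regression. This NumPy
# version is language-neutral pseudocode for the algebra, not RHadoop itself.
import numpy as np

def map_chunk(X_chunk, y_chunk):
    # Each mapper emits its partial normal-equation terms.
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk

def reduce_and_solve(partials):
    # The reducer sums the partial terms and solves for the coefficients.
    XtX = sum(p[0] for p in partials)
    Xty = sum(p[1] for p in partials)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10_000), rng.normal(size=(10_000, 3))])
beta_true = np.array([2.0, 1.0, -0.5, 0.3])
y = X @ beta_true + rng.normal(scale=0.1, size=10_000)

chunks = np.array_split(np.arange(10_000), 8)          # 8 "data nodes"
partials = [map_chunk(X[idx], y[idx]) for idx in chunks]
print(reduce_and_solve(partials))                       # ~ beta_true
```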

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 432
24700 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to solve memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth in computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.
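
One generic way to read "mapping one feature to multiple labels" is to relabel the same inputs with a different random label permutation per task, so that no fixed input-to-label mapping can be memorized; the sketch below illustrates that reading only and is not the authors' exact augmentation procedure.

```python
# Illustrative mutex-task construction: the same inputs receive conflicting
# labels across tasks via per-task label permutations, breaking any single
# memorizable input-to-label mapping.
import numpy as np

def make_mutex_tasks(X, y, n_tasks, rng):
    n_classes = int(y.max()) + 1
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(n_classes)      # task-specific label remapping
        tasks.append((X, perm[y]))             # same inputs, conflicting labels
    return tasks

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 16))                  # toy support set
y = rng.integers(0, 4, size=32)
for i, (Xi, yi) in enumerate(make_mutex_tasks(X, y, n_tasks=3, rng=rng)):
    print(f"task {i}: first five labels -> {yi[:5]}")
```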

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 89
24699 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing the data communication overhead in a wireless sensor network, and one of its important tasks is the positioning of the aggregator points. Although much work has been done on data aggregation, the efficient positioning of the aggregation points has received little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network and propose an algorithm to select the aggregator positions for a scenario in which aggregator nodes are more powerful than sensor nodes.
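
The paper's own placement algorithm is not detailed in the abstract; as a generic baseline for comparison only, aggregation points are often placed at cluster centroids of the sensor coordinates, for example with k-means.

```python
# Generic placement heuristic (not the authors' algorithm): cluster sensor
# coordinates with k-means and put one aggregation point at each centroid,
# minimising the summed squared sensor-to-aggregator distance.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
sensors = rng.uniform(0, 100, size=(200, 2))   # sensor (x, y) positions in metres

k = 5                                          # number of aggregation points
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sensors)
aggregator_positions = km.cluster_centers_
print(aggregator_positions)
```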

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 152
24698 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches that are available to model data collected with reference to location in space, from the classical spatial econometric approaches to the recent developments for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
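
As one concrete example of the Bayesian hierarchical specifications this review covers (a common textbook formulation, not a result of the paper), area counts can be modelled as Poisson with a log-linear regression and a spatially structured random effect under an intrinsic CAR prior: \( y_i \mid \lambda_i \sim \mathrm{Poisson}(\lambda_i) \), \( \log \lambda_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \phi_i \), \( \phi_i \mid \boldsymbol{\phi}_{-i} \sim \mathcal{N}\!\big( \tfrac{1}{n_i}\sum_{j \sim i} \phi_j,\ \tau^{2}/n_i \big) \), where \( j \sim i \) denotes neighbouring areas and \( n_i \) is the number of neighbours of area \( i \).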

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 589
24697 A NoSQL-Based Approach for Real-Time Management of Robotics Data

Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir

Abstract:

This paper deals with the continual growth of data, for which new data management solutions have emerged: NoSQL databases. They span several areas, such as personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. These database management systems are now increasing in number. They store data very well, and with the trend of big data, new storage challenges demand new structures and methods for managing enterprise data. The new intelligent machines in the e-learning sector thrive on more data, so smart machines can learn more and faster. Robotics is the use case on which we focus our tests. We implemented NoSQL for robotics to wrestle all the data robots acquire into usable form, because with ordinary approaches we face severe limits in managing and finding the exact information in real time. Our proposed approach is demonstrated by experimental studies and a running example used as a use case.
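
A minimal sketch (connection string, database, and field names are assumed, not from the paper) of storing and querying robot sensor readings in MongoDB, a document-oriented NoSQL store, using schema-free documents and an index for time-ordered queries:

```python
# Store heterogeneous robot telemetry as schema-free documents and query the
# latest reading per robot. All names and the local connection are assumptions.
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")      # assumed local instance
readings = client["robotics"]["sensor_readings"]
readings.create_index([("robot_id", ASCENDING), ("ts", ASCENDING)])

readings.insert_one({
    "robot_id": "arm-01",
    "ts": datetime.now(timezone.utc),
    "joint_angles": [0.12, 1.04, -0.33],
    "battery": 0.87,
})

latest = readings.find({"robot_id": "arm-01"}).sort("ts", -1).limit(1)
for doc in latest:
    print(doc["joint_angles"], doc["battery"])
```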

Keywords: NoSQL databases, database management systems, robotics, big data

Procedia PDF Downloads 348
24696 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data, so linking multi-source data is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data. Ensemble learning is a versatile machine learning approach in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis, the fuzzy optimized multi-objective clustering ensemble (FOMOCE) method. First, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, the clustering algorithms, and the base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments were performed on standard sample datasets, and the results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
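
The FOMOCE algorithm itself is not given in the abstract; the sketch below shows only the generic clustering-ensemble consensus step (a co-association matrix plus a hierarchical cut) on toy data, and ignores the fuzzy, multi-objective, and multi-source components.

```python
# Generic clustering-ensemble consensus: run several base clusterings, build a
# co-association matrix, then cut it with hierarchical clustering. Requires a
# recent scikit-learn (metric="precomputed" on AgglomerativeClustering).
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Base clusterings with different k and seeds.
labels = [KMeans(n_clusters=k, n_init=10, random_state=s).fit_predict(X)
          for k, s in [(3, 0), (4, 1), (5, 2), (4, 3)]]

# Co-association: fraction of base clusterings that group points i and j together.
co = np.mean([np.equal.outer(l, l).astype(float) for l in labels], axis=0)

consensus = AgglomerativeClustering(
    n_clusters=4, metric="precomputed", linkage="average"
).fit_predict(1.0 - co)                        # distance = 1 - co-association
print(np.bincount(consensus))
```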

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 184
24695 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data

Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim

Abstract:

Smart-card data are expected to provide information on activity patterns as an alternative to conventional person-trip surveys. The focus of this study is to propose a method for training on the person-trip surveys to supplement the smart-card data, which do not contain the purpose of each trip. We selected only the features available from smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, to train on the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution with good performance. The validation results showed that the proposed method efficiently estimated trip purpose, with the GIS data of the station and the duration of stay at the destination being significant features in modeling trip purpose.
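
A sketch of the described setup (feature names, file names, and hyperparameters are placeholders): an XGBoost classifier is trained on labelled survey trips and then applied to smart-card trips that lack a purpose label.

```python
# Train on the labelled person-trip survey, then predict trip purpose for
# unlabelled smart-card trips using only features both sources share.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

survey = pd.read_csv("trip_survey.csv")              # labelled person-trip survey
features = ["depart_hour", "duration_min", "stay_min_at_dest",
            "dest_land_use_commercial", "dest_land_use_residential"]
X, y = survey[features], survey["trip_purpose"].astype("category").cat.codes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("validation accuracy:", accuracy_score(y_te, model.predict(X_te)))

smartcard = pd.read_csv("smartcard_trips.csv")       # unlabelled smart-card trips
smartcard["predicted_purpose"] = model.predict(smartcard[features])
```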

Keywords: activity pattern, data fusion, smart-card, XGboost

Procedia PDF Downloads 240
24694 Evaluation of Geotechnical Parameters at Nubian Habitations in Kurkur Area, Aswan, Egypt

Authors: R. E. Fat-Helbary, A. A. Abdel-latief, M. S. Arfa, Alaa Mostafa

Abstract:

The Egyptian Government proposed a general plan aiming at constructing new settlements for Nubians in south Aswan at different places around Lake Nasser, one of these settlements being in the Kurkur area. The Nubian habitations in Wadi Kurkur are located around 30 km southwest of Aswan City. This area is affected by near-distance earthquakes from the Kalabsha fault system. The shallow seismic refraction technique was applied in the study area to evaluate the quality of the soil and rock materials and their geotechnical parameters, and to detect the subsurface ground model beneath the study area. The P- and S-wave velocities were calculated. The surface layer has P-wave velocities ranging from 900 m/s to 1625 m/s and S-wave velocities ranging from 650 m/s to 1400 m/s, while the bedrock has P-wave velocities ranging from 1300 m/s to 1980 m/s and S-wave velocities ranging from 1050 m/s to 1725 m/s. The measured Vp and Vs velocities, together with the bulk density, were used to extract the mechanical properties and geotechnical parameters of the foundation material in the study area. The output of this study is very important for solving problems associated with the construction of various civil engineering works, for land use planning, and for earthquake-resistant structural design.
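
For reference, the dynamic elastic parameters referred to above follow from the standard relations between body-wave velocities and bulk density (general elasticity results, not values from this study): \( G = \rho V_s^{2} \), \( \nu = \dfrac{V_p^{2} - 2V_s^{2}}{2\,(V_p^{2} - V_s^{2})} \), \( E = 2G(1+\nu) \), and \( K = \rho\big(V_p^{2} - \tfrac{4}{3}V_s^{2}\big) \), where \( G \) is the shear modulus, \( \nu \) Poisson's ratio, \( E \) Young's modulus, and \( K \) the bulk modulus.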

Keywords: shallow seismic refraction technique, Kurkur area, p and s-wave velocities, geotechnical parameters, bulk density, Kalabsha faults

Procedia PDF Downloads 423
24693 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to solve memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth in computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.

Keywords: mutex task generation, data augmentation, meta-learning, text classification

Procedia PDF Downloads 135
24692 Observations on Cultural Alterity and Environmental Conservation: Populations "Delayed" and Excluded from Health and Public Hygiene Policies in Mexico (1890-1930)

Authors: Marcela Davalos Lopez

Abstract:

The history of the circulation of hygienic knowledge and the consolidation of public health in Latin American cities towards the end of the 19th century is well known. Among these cities, Mexico City was inserted into international politics, strengthened its institutions and medical knowledge, applied parameters of modernity, and built sanitary engineering works. Despite the power that this hygienist system achieved, its scope was relative: it cannot be generalized to all cities. A comparative and contextual analysis shows that, from our contemporary vantage point, conclusions derived from modern urban historiography present fractures. Between 1890 and 1930, the small cities and areas surrounding the Mexican capital adapted international and federal public health regulations in their own way. This is shown for neighborhoods located around Mexico City and for a medium-sized city close to the Mexican capital (about 80 km away) called Cuernavaca. While the inhabitants of the neighborhoods witnessed, and were affected in their territories by, the evolving forms that public hygiene policies were taking, in Cuernavaca the dictates arrived as an echo. While the capital was drained, large roads were opened, roundabouts were erected, residents were expelled, and drains, sewers, drinking water pipes, etc., were built, Cuernavaca remained sheltered in other times and practices. What was this due to? Undoubtedly, the time and energy that it took politicians and the group of "scientists" to carry out these enormous works in the Mexican capital kept them from addressing the issue in remote villages; it was not until the 20th century that federal hygiene policy began to be strengthened. Despite this, other factors emphasize the particularities of each site. I would like to draw attention here to the different ways in which each town received public hygiene. Cuernavaca responded to its own semi-rural culture, history, orography, and functions, prolonging for much longer, for example, the use of its deep ravines as sewers. For their part, the neighborhoods surrounding the capital, although affected by and excluded from hygienist policies, chose to move away from them and solve the deficiencies with their own resources (they used the waste left from the drained lake of Mexico to continue their lake practices). All of this points to a paradox that shapes our contemporary concerns: on the one hand, the benefits derived from medical knowledge and its technological applications (here, the urban health system in particular) and, on the other, the alteration it caused in environmental settings. Places like Cuernavaca (classified as backward by nineteenth-century hygienists and those of the first decades of the twentieth century), as well as landscapes such as the neighborhoods affected by advances in sanitary engineering, keep in their memory buried practices that we observe today as possible ways to reestablish environmental balances: alternative uses of water, recycling of organic materials, local uses of fauna, various systems for breaking down excreta, and so on. In sum, what the nineteenth and first half of the twentieth centuries graded as levels of backwardness or progress turns out to be key information for rethinking routes toward environmental conservation. When we return to the observations of the scientists, politicians, and lawyers of that period, we find a historically rejected cultural alterity. Populations such as Cuernavaca that, due to their history, orography, and/or the insufficiency of federal policies, kept different relationships with the environment today give us clues for reorienting basic elements of cities: alternative uses of water, use of raw or organic waste materials, consumption of local products, among others. It is, therefore, a matter of unearthing the rejected, which cries out to emerge to the surface.

Keywords: sanitary hygiene, Mexico city, cultural alterity, environmental conservation, environmental history

Procedia PDF Downloads 162
24691 Influence of the Location of Flood Embankments on the Condition of Oxbow Lakes and Riparian Forests: A Case Study of the Middle Odra River Beds on the Example of Dragonflies (Odonata), Ground Beetles (Coleoptera: Carabidae) and Plant Communities

Authors: Magda Gorczyca, Zofia Nocoń

Abstract:

Past and current studies from different countries have shown that river engineering leads to environmental degradation and the extinction of many species, often those protected by local and international wildlife conservation laws. Through the years, the main focus of river utilization has shifted from industrial applications to recreation and wildlife preservation, with an emphasis on maintaining biodiversity, which plays a significant role in mitigating climate change. An opportunity has thus appeared to recreate flooding areas and natural habitats, which are very rare on the European scale. Additionally, river restoration helps to avoid floods and periodic droughts, which are usually very damaging to the economy. In this research, the biodiversity of dragonflies and ground beetles was analyzed in the context of plant communities and forest stand structure, and the results were enriched with data from past and current literature. A comparison was made between two parts of the Odra River: a part where an oxbow lake and riparian forest were separated from the river bed by an embankment, and a part of the river with its floodplains left intact. A validity assessment of embankment relocation was made based on the research results. Between May and September, insects were collected, phytosociological analyses were carried out, and forest stand structure properties were determined. In the part of the river not separated by embankments, rare and protected plant species were recorded (e.g., Trapa natans, Salvinia natans), as well as a greater species and quantitative diversity of dragonflies. The ground beetle fauna, however, was richer in the area separated by the embankment. Even though the research was done during only one season and in a limited area, the results can be a starting point for further, extended research and may contribute to securing legal wildlife protection and restoration of the researched area. During the research, the presence of the invasive species Impatiens parviflora, Echinocystis lobata, and Procyon lotor was observed, which may lead to a loss of the natural values of the researched areas.

Keywords: carabidae, floodplains, middle Odra river, Odonata, oxbow lakes, riparian forests

Procedia PDF Downloads 138
24690 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming

Authors: Milind Chaudhari, Suhail Balasinor

Abstract:

Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate; as a result, there is a real risk that world food production will drop by 40% in the next two decades. Vertical farming can help aid food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing optimum nutrients and sunlight based on the analysis of millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations control the indoor environment by leveraging big data to enhance food quantity and quality.

Keywords: big data, IoT, vertical farming, indoor farming

Procedia PDF Downloads 171
24689 Data Challenges Facing Implementation of Road Safety Management Systems in Egypt

Authors: A. Anis, W. Bekheet, A. El Hakim

Abstract:

Implementing a Road Safety Management System (SMS) in a crowded developing country such as Egypt is a necessity. Establishing a sustainable SMS requires a comprehensive, reliable data system for all information pertinent to road crashes. In this paper, a survey of the data available in Egypt is conducted, and its suitability for use in an SMS is assessed. The research provides some of the missing data and points out the data that are unavailable in Egypt, looking forward to the contribution of the scientific community, the authorities, and the public in solving the problem of missing or unreliable crash data. The data required for implementing an SMS in Egypt fall into three categories: the first is available data, such as fatality and injury rates, which this research shows may be inconsistent and unreliable; the second is data that are not available but may be estimated, and an example of estimating vehicle cost is given in this research; the third is data that are not available and must be measured case by case, such as the functional and geometric properties of a facility. Some inquiries are posed in this research to the scientific community, such as how to improve the links among road safety stakeholders in order to obtain a consistent, unbiased, and reliable data system.

Keywords: road safety management system, road crash, road fatality, road injury

Procedia PDF Downloads 137