Search results for: data block
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26029

25489 Geophysical Exploration of Aquifer Zones by Vertical Electrical Sounding (VES) Method at Ayma-Kharagpur, District Paschim Midnapore, West Bengal

Authors: Mayank Sharma

Abstract:

Groundwater has been a matter of great concern in recent years due to the depletion of the water table caused by over-exploitation of groundwater resources. Sub-surface exploration is an effective way to identify the groundwater potential of an area. To meet the irrigation water needs of the study area, a tube well had to be installed, and a geophysical investigation was therefore carried out to find the most suitable drilling point for a tube well that encounters an aquifer. An electrical resistivity survey was used to delineate the aquifer zones of the area, with the Vertical Electrical Sounding (VES) method employed to infer the subsurface geology. Seven vertical electrical soundings using the Schlumberger electrode array, with a maximum AB electrode separation of 700 m, were carried out at selected points in Ayma, Kharagpur-1 block of Paschim Midnapore district, West Bengal. The VES was done using an IGIS DDR3 resistivity meter to an approximate depth of 160-180 m. The data were processed, interpreted and analyzed. Based on the interpretations using the direct method, the geology of the area at the sounding points was established: two deeper clay-sand sections exist in the area at depths of 50-70 m (resistivity range 40-60 ohm-m) and 70-160 m (resistivity range 25-35 ohm-m). These aquifers will provide a high yield of water, sufficient for the desired irrigation in the study area.
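
The field measurements behind a VES curve reduce to the standard Schlumberger apparent-resistivity relation. The sketch below is a generic illustration of that relation, not the IGIS instrument's own processing; the function name and example values are hypothetical.

```python
import numpy as np

def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array.

    ab_half : AB/2 current-electrode half-spacing (m)
    mn      : MN potential-electrode separation (m)
    delta_v : measured potential difference (V)
    current : injected current (A)
    """
    # Geometric factor K = pi * ((AB/2)^2 - (MN/2)^2) / MN
    k = np.pi * (ab_half**2 - (mn / 2.0)**2) / mn
    return k * delta_v / current

# Example: AB/2 = 100 m, MN = 10 m, 50 mV measured at 200 mA
print(schlumberger_apparent_resistivity(100.0, 10.0, 0.05, 0.2))
```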

Keywords: VES method, Schlumberger method, electrical resistivity survey, geophysical exploration

Procedia PDF Downloads 198
25488 Application of Artificial Neural Network Technique for Diagnosing Asthma

Authors: Azadeh Bashiri

Abstract:

Introduction: Lack of proper diagnosis and inadequate treatment of asthma leads to physical and financial complications. This study aimed to use data mining techniques to create a neural network intelligent system for the diagnosis of asthma. Methods: The study population consisted of patients who had visited one of the lung clinics in Tehran. Data were analyzed using the SPSS statistical tool, and Pearson's chi-square test was the basis for ranking the data. The neural network was trained using the backpropagation learning technique. Results: According to the analysis performed in SPSS to select the top factors, 13 effective factors were selected; the data were combined in various forms to build different models for training and testing the networks, and in all configurations the network correctly predicted 100% of cases. Conclusion: Using data mining methods before designing the system structure, in order to reduce the data dimensionality and choose the optimal inputs, leads to a more accurate system. Considering data mining approaches is therefore necessary, given the nature of medical data.
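
A minimal sketch of the pipeline described: chi-square-based selection of 13 factors followed by a backpropagation-trained network. The dataset, factor counts, and hyperparameters below are placeholders, not the study's actual data or settings.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: 200 patients, 30 non-negative clinical factors, binary label
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 30)).astype(float)
y = rng.integers(0, 2, size=200)

# Rank factors with the chi-square statistic and keep the top 13
selector = SelectKBest(chi2, k=13)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)

# Feedforward network trained by backpropagation
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```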

Keywords: asthma, data mining, Artificial Neural Network, intelligent system

Procedia PDF Downloads 275
25487 Prediction of CO2 Concentration in the Korea Train Express (KTX) Cabins

Authors: Yong-Il Lee, Do-Yeon Hwang, Won-Seog Jeong, Duckshin Park

Abstract:

Recently, because of the high-speed trains forced ventilation, it is important to control the ventilation. The ventilation is for controlling various contaminants, temperature, and humidity. The high-speed train route is straight to a destination having a high speed. And there are many mountainous areas in Korea. So, tunnel rate is higher then other country. KTX HVAC block off the outdoor air, when entering tunnel. So the high tunnel rate is an effect of ventilation in the KTX cabin. It is important to reduction rate in CO2 concentration prediction. To meet the air quality of the public transport vehicles recommend standards, the KTX cabin of CO2 concentration should be managed. In this study, the concentration change was predicted by CO2 prediction simulation in route to be opened.
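
Cabin CO2 dynamics of this kind are commonly captured with a single-zone mass balance: when the HVAC blocks outdoor air in a tunnel, the ventilation term drops out and occupant-generated CO2 accumulates. The sketch below uses illustrative numbers, not the study's simulation.

```python
import numpy as np

V = 200.0        # cabin volume (m^3), illustrative
Q_open = 0.5     # outdoor-air supply (m^3/s) when dampers are open
G = 0.005        # CO2 generation by occupants (m^3/s), illustrative
C_out = 400e-6   # outdoor CO2 (volume fraction)

dt, T = 1.0, 1800                  # 1 s steps over 30 minutes
C = np.empty(int(T / dt))
C[0] = 600e-6
for k in range(1, C.size):
    in_tunnel = 600 <= k * dt < 1200     # dampers closed for 10 minutes
    Q = 0.0 if in_tunnel else Q_open
    # Single-zone mass balance: V dC/dt = Q*(C_out - C) + G
    C[k] = C[k-1] + dt * (Q * (C_out - C[k-1]) + G) / V

print(f"peak CO2: {C.max() * 1e6:.0f} ppm")
```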

Keywords: CO2 prediction, KTX, ventilation, infrastructure and transportation engineering

Procedia PDF Downloads 547
25486 Contribution in Fatigue Life Prediction of Composite Material

Authors: Mostefa Bendouba, Djebli Abdelkader, Abdelkrim Aid, Mohamed Benguediab

Abstract:

The damage evolution mechanism is one of the important focuses of fatigue behaviour investigation of composite materials and is also the foundation for predicting the fatigue life of composite structures in engineering applications. This paper is dedicated to a damage investigation of a composite material subjected to two-block loading fatigue conditions. The loading sequence effect and the influence of the cycle ratio of the first stage on the cumulative fatigue life are studied. Two loading sequences, i.e., high-to-low and low-to-high cases, are considered. The proposed damage indicator is connected cycle by cycle to the S-N curve, and the experimental results are in agreement with the model's expectations. Experimental results from the literature are used to validate this proposition.
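
As a baseline for comparison, the classical Palmgren-Miner accumulation for a high-to-low two-block sequence can be sketched as follows; the Basquin constants are purely illustrative, and this linear rule is precisely what sequence-sensitive indicators such as the one proposed here aim to improve on.

```python
def cycles_to_failure(stress, A=1e12, m=3.0):
    # Basquin-type S-N curve: N(S) = A * S**(-m), constants illustrative
    return A * stress**(-m)

def miner_residual_life(s1, n1, s2):
    """Cycles remaining at level s2 after n1 cycles at level s1 (linear Miner rule)."""
    d1 = n1 / cycles_to_failure(s1)          # damage from the first block
    return (1.0 - d1) * cycles_to_failure(s2)

# High-to-low sequence: 40% of life spent at 300 MPa, remainder at 200 MPa
n1 = 0.4 * cycles_to_failure(300.0)
print(f"remaining cycles at 200 MPa: {miner_residual_life(300.0, n1, 200.0):.0f}")
```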

Keywords: fatigue, damage accumulation, composite, evolution

Procedia PDF Downloads 502
25485 Interpreting Privacy Harms from a Non-Economic Perspective

Authors: Christopher Muhawe, Masooda Bashir

Abstract:

With increased Internet Communication Technology (ICT), the virtual world has become the new normal. At the same time, there is an unprecedented collection of massive amounts of data by both private and public entities. Unfortunately, this increase in data collection has gone hand in hand with an increase in data misuse and data breaches. Regrettably, the majority of data breach and data misuse claims have been unsuccessful in United States courts for failure to prove direct injury to physical or economic interests. The requirement to express data privacy harms in economic or physical terms ignores the fact that not all data harms are physical or economic in nature. The challenge is compounded by the fact that data breach harms and risks do not attach immediately. This research uses a descriptive and normative approach to show that not all data harms can be expressed in economic or physical terms. Expressing privacy harms purely from an economic or physical perspective negates the fact that data insecurity may result in harms which run counter to the functions of privacy in our lives: the promotion of liberty, selfhood, autonomy, and human social relations, and the furtherance of a free society. No economic value can be placed on these functions of privacy. The proposed approach addresses data harms from a psychological and social perspective.

Keywords: data breach and misuse, economic harms, privacy harms, psychological harms

Procedia PDF Downloads 197
25484 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determine the Characteristics of the DMLZ Mine Orebody

Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo

Abstract:

The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the EESS that produces copper and gold. The DMLZ deposit is located below the Deep Ore Zone deposit, between the 3125 m and 2590 m elevations; it measures roughly 1,200 m in length and is between 350 and 500 m in width. Mining of the DMLZ was planned to start in Q2 2015, at an ore extraction rate of about 60,000 tpd using the block cave mining method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominantly with vein-style mineralization. The percentages of material mined at the DMLZ, compared with current reserves, are: diorite 46% (0.46% Cu, 0.56 ppm Au, 0.83% EqCu); skarn 39% (1.4% Cu, 0.95 ppm Au, 2.05% EqCu); hornfels 8% (0.84% Cu, 0.82 ppm Au, 1.39% EqCu); and marble 7%, possibly mined as waste. Correlation between the Ertsberg intrusion, the major structures, and the vein-style mineralization is important for determining the characteristics of the orebody in the DMLZ mine. The DMLZ generally has two types of vein-filling mineralization, one for each host: in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite, without quartz. In terms of orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The DMLZ is controlled by two main major faults; geologists have identified and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, and water-gas channels, some of which have been re-healed.

Keywords: copper-gold, DMLZ, skarn, structure

Procedia PDF Downloads 502
25483 An Approach towards Designing an Energy Efficient Building through Embodied Energy Assessment: A Case of Apartment Building in Composite Climate

Authors: Ambalika Ekka

Abstract:

In today's world, the growing demand for urban built forms has resulted in the production and consumption of building materials, i.e., embodied energy in building construction, leading to pollution and greenhouse gas (GHG) emissions. New buildings therefore offer a unique opportunity to implement more energy-efficient design without compromising building performance. The embodied energy of building materials is a major contributor to the embodied energy of buildings. This paper develops an approach to designing an energy-efficient apartment building through embodied energy assessment. It first discusses the trend of residential development in Rourkela through three case studies of contemporary houses, covering architectural elements, number of storeys, predominant materials, and plot sizes, using primary data; this identifies the predominant materials used and other characteristics of the urban area. The embodied energy coefficients of the dominant building materials, and of alternative materials manufactured by Indian industry, are taken from secondary sources, i.e., the literature. The paper then analyzes the embodied energy by estimating the materials and operational energy of the proposed building, altering the specifications of the materials for each building component (walls, flooring, windows, insulation, and roof) using the Res Build India software, and comparing the different options against sustainability parameters. The paper finds that only the autoclaved aerated concrete block option reaches the Energy Performance Index benchmark of 69.35 kWh/m² yr, saving 4% of operational energy; however, as embodied energy has no comparable index, this option also has the highest embodied energy of all the materials, at 23,206,202.43 MJ.
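
The assessment reduces to summing material quantities against embodied-energy coefficients and checking operational energy against the EPI benchmark. The toy calculation below uses invented quantities and coefficients; the study's actual values come from the Indian industry literature.

```python
# Illustrative embodied-energy coefficients (MJ per kg); real values are taken
# from literature on materials manufactured by Indian industry.
coefficients = {"AAC block": 3.5, "burnt clay brick": 4.5, "cement": 5.8}

# Hypothetical bill of quantities for one wall option (kg)
quantities = {"AAC block": 90_000, "cement": 12_000}

embodied = sum(quantities[m] * coefficients[m] for m in quantities)
print(f"embodied energy: {embodied / 1e6:.2f} x 10^6 MJ")

# Energy Performance Index check against the 69.35 kWh/m^2-yr benchmark
annual_use_kwh, floor_area_m2 = 34_000, 500
epi = annual_use_kwh / floor_area_m2
print("meets EPI benchmark:", epi <= 69.35)
```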

Keywords: energy efficient, embodied energy, EPI, building materials

Procedia PDF Downloads 197
25482 Machine Learning Analysis of Student Success in Introductory Calculus-Based Physics I Course

Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu

Abstract:

This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. A dataset of 140 rows, pertaining to the performance of two batches of students, was used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models with the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the best-fitting ML model, whereas the CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy with the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, while the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-.
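
The per-grade ROC-AUC comparison can be reproduced in outline with a one-vs-rest binarization of the ten grade classes. The predictions below are random placeholders; the study obtained its probabilities from PyCaret-selected ADA and LR models trained on CTGAN-generated data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

grades = ["A", "A-", "B+", "B", "B-", "C+", "C", "C-", "D", "F"]

# Placeholder ground truth and predicted class probabilities for 140 students
rng = np.random.default_rng(0)
y_true = rng.choice(grades, size=140)
proba = rng.dirichlet(np.ones(len(grades)), size=140)  # rows sum to 1

# One-vs-rest AUC per grade, as in the per-grade ROC-AUC plots
y_bin = label_binarize(y_true, classes=grades)
for i, grade in enumerate(grades):
    print(grade, round(roc_auc_score(y_bin[:, i], proba[:, i]), 3))

# Mean AUC across the ten classes (cf. the 0.4377 vs 0.6149 comparison)
print("mean AUC:", round(roc_auc_score(y_bin, proba, average="macro"), 4))
```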

Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, Gaussian copula CTGAN

Procedia PDF Downloads 44
25481 Data Access, AI Intensity, and Scale Advantages

Authors: Chuping Lo

Abstract:

This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Large countries, which inherently have greater data resources, therefore tend to have higher incomes than smaller countries, and the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade, as they are better able to utilize global data.

Keywords: digital intensity, digital divide, international trade, economies of scale

Procedia PDF Downloads 68
25480 An Optimization Tool-Based Design Strategy Applied to Divide-by-2 Circuits with Unbalanced Loads

Authors: Agord M. Pinto Jr., Yuzo Iano, Leandro T. Manera, Raphael R. N. Souza

Abstract:

This paper describes an optimization tool-based design strategy for a Current Mode Logic (CML) divide-by-2 circuit. A building block for output frequency generation in an RFID protocol-based frequency synthesizer, the circuit was designed to minimize the power consumption for driving multiple unbalanced loads (at the transceiver level). Implemented in XFAB XC08 180 nm technology, the circuit was optimized with the MunEDA WiCkeD tool in the Cadence Virtuoso Analog Design Environment (ADE).

Keywords: divide-by-2 circuit, CMOS technology, phase-locked loop (PLL), optimization tool, current mode logic (CML), RF transceiver

Procedia PDF Downloads 464
25479 Secured Transmission and Reserving Space in Images Before Encryption to Embed Data

Authors: G. R. Navaneesh, E. Nagarajan, C. H. Rajam Raju

Abstract:

Nowadays, multimedia data are used to store secure information. All previous methods allocate space in the image for data embedding after encryption. In this paper, we propose a novel method that reserves a bounded region in the image before encryption using a traditional RDH algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted images. The proposed method achieves real-time performance; that is, data extraction and image recovery are free of any error. A secure transmission process is also discussed, which improves efficiency by ten times compared to the other processes discussed.
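
A minimal illustration of reversible LSB embedding in a reserved region (the "room reserved before encryption" that lets the data hider embed without error). Real RDH schemes add the encryption layer and boundary handling omitted here.

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Embed a bit string into the least significant bits of a reserved region."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits   # clear, then set each LSB
    return out

def extract_lsb(pixels, n):
    """Recover the n embedded bits; extraction is exact (error-free)."""
    return pixels[:n] & 0x01

reserved = np.random.randint(0, 256, size=64, dtype=np.uint8)  # reserved room
payload = np.random.randint(0, 2, size=32, dtype=np.uint8)

stego = embed_lsb(reserved, payload)
assert np.array_equal(extract_lsb(stego, payload.size), payload)
print("payload recovered without error")
```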

Keywords: secure communication, reserving room before encryption, least significant bits, image encryption, reversible data hiding

Procedia PDF Downloads 413
25478 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim to use gene data initially collected for autism detection to find out whether, and how accurately, these data can serve identification applications. Our main goal is to determine whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and that the classification rate remains close to optimal as the noise standard deviation increases to 3. This shows that the data can be used for identity verification with high accuracy using a classifier as simple as the k-nearest neighbor (k-NN).
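
The noise-robustness experiment is straightforward to reproduce in outline: train a k-NN on clean data, then corrupt only the test set with zero-mean Gaussian noise of increasing standard deviation. The data and parameters below are placeholders, not the autism gene dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder stand-in for the preprocessed gene data
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for sigma in (0.0, 1.0, 2.0, 3.0):
    noisy = X_te + rng.normal(0.0, sigma, size=X_te.shape)  # corrupt test set only
    print(f"sigma={sigma}: accuracy={knn.score(noisy, y_te):.3f}")
```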

Keywords: biometrics, genetic data, identity verification, k-nearest neighbor

Procedia PDF Downloads 258
25477 Selection of Soil Quality Indicators of Rice Cropping Systems Using Minimum Data Set Influenced by Imbalanced Fertilization

Authors: Theresa K., Shanmugasundaram R., Kennedy J. S.

Abstract:

Nutrient supplements are indispensable for raising crops and achieving the desired productivity. The nutrient imbalance between replenishment and crop uptake is addressed through the input of inorganic fertilizers, but excessive application of inorganic nutrients causes yields to stagnate and decline, and an imbalanced N-P-K ratio disturbs soil ecosystems. This study evaluated the effects of the fertilization practices of conventional farming (CF), organic farming, and the Integrated Nutrient Management (INM) system on soil quality, using key indicators and soil quality indices. Twelve rice fields in the Thondamuthur block of Coimbatore district, cultivated under a monocropping sequence (ten under conventional practices and one each under organic and INM-based cultivation), were fixed, and their physical, chemical, and biological properties were studied over four cropping seasons to determine the soil quality index (SQI). SQI was computed for the conventional, organic, and INM fields. The conventional fields recorded lower soil quality indices than the organic and INM fields, which registered higher SQI values of 0.99 and 0.88, respectively. CF₄, which received a super-optimal dose of N (250%), showed a lower SQI (0.573) and yield (3.20 t ha⁻¹), while CF₆, which received 125% N, recorded the highest SQI (0.715) and yield (6.20 t ha⁻¹). Likewise, most of the CFs received N beyond the 125% level, except CF₃ and CF₉, and recorded lower yields. CFs that received super-optimal P, in the order CF₆ & CF₇ > CF₁ & CF₁₀, recorded lower yields, except for CF₆. Super-optimal K application also resulted in lower yields in CF₄, CF₇, and CF₉.
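
SQI values such as 0.99 and 0.88 are typically computed by scoring each minimum-data-set indicator to the [0, 1] range and combining the scores with weights (often PCA-derived). The generic weighted-additive sketch below uses invented indicators, ranges, and weights, not the study's minimum data set.

```python
import numpy as np

# Hypothetical indicator values for one field (minimum data set)
indicators = {"pH": 6.8, "organic_C_pct": 0.9, "microbial_biomass": 210.0}
# "More is better" scoring ranges, illustrative only
ranges = {"pH": (4.0, 8.5), "organic_C_pct": (0.1, 2.0),
          "microbial_biomass": (50.0, 400.0)}
# Weights would normally come from the minimum-data-set (e.g., PCA) analysis
weights = {"pH": 0.3, "organic_C_pct": 0.4, "microbial_biomass": 0.3}

def linear_score(value, lo, hi):
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

sqi = sum(weights[k] * linear_score(indicators[k], *ranges[k]) for k in indicators)
print(f"SQI = {sqi:.2f}")   # comparable in form to the reported 0.99 and 0.88
```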

Keywords: rice cropping system, soil quality indicators, imbalanced fertilization, yield

Procedia PDF Downloads 159
25476 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps, and the choice of lateral position is an important component in representing and characterizing mixed traffic. Field data provide evidence that vehicle trajectories on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used in homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. Field data make clear that following is not as strict as in homogeneous, lane-disciplined conditions, and that neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, can influence the subject vehicle's choice of position, speed, and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and they need to be modified appropriately. To address these issues, this paper analyzes the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre type, and lateral placement characteristics on the time gap distribution is quantified. The developed model is used for evaluating various performance measures such as link speed, midblock delay, and intersection delay, which in turn help characterize vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between the various vehicle classes in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type, and lateral position characteristics, in heterogeneous non-lane-based traffic. Once developed, the modelling scheme can be used to estimate the vehicle kilometres travelled for the entire traffic system, which helps determine vehicular fuel consumption and emissions. The approach involves: data collection; statistical modelling and parameter estimation; simulation using the calibrated time-gap distributions and its validation; empirical analysis of the simulation results and associated traffic flow parameters; and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (Passenger Car Units), capacity, and level of service of the system are also discussed.
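
The core statistical step is fitting candidate distributions to the observed time gaps and testing goodness of fit. The sketch below uses synthetic gaps; the study stratifies such fits by lead-lag vehicle pair, manoeuvre type, and lateral position.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for time gaps (s) extracted from videographic data
rng = np.random.default_rng(0)
gaps = rng.lognormal(mean=0.3, sigma=0.6, size=500)

for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma)]:
    params = dist.fit(gaps, floc=0.0)            # fix the location at zero
    ks = stats.kstest(gaps, dist.cdf, args=params)
    print(f"{name}: KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
```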

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 342
25475 Copolymers of Pyrrole and α,ω-Dithienyl Terminated Poly(ethylene glycol)

Authors: Nesrin Köken, Esin A. Güvel, Nilgün Kızılcan

Abstract:

This work presents the synthesis of α,ω-dithienyl terminated poly(ethylene glycol) (PEGTh), capable of further chain extension by either chemical or electrochemical polymerization. PEGTh was characterized by FTIR and ¹H-NMR. Copolymerization of PEGTh and pyrrole (Py) was then performed by chemical oxidative polymerization using a ceric (IV) salt as oxidant (PPy-PEGTh). PEG without end-group modification was used directly to prepare copolymers with Py using the Ce (IV) salt (PPy-PEG). Block copolymers with pyrrole-to-PEGTh (PEG) mole ratios of 50:1 and 10:1 were synthesized. The electrical conductivities of the PPy-PEGTh and PPy-PEG copolymers were determined by the four-point probe technique. The influence of the synthetic route and of the insulating segment content on the conductivity and yield of the copolymers was investigated.

Keywords: chemical oxidative polymerization, conducting polymer, poly(ethylene glycol), polypyrrole

Procedia PDF Downloads 363
25474 A Review on Intelligent Systems for Geoscience

Authors: R. Palson Kennedy, P. Kiran Sai

Abstract:

This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, and to meet that need it presents a review from the data life cycle perspective. Numerous facets of the geosciences present unique difficulties for the study of intelligent systems: geoscience data are notoriously difficult to analyze, since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half addresses data science's essential concepts and theoretical underpinnings, while the second half draws key themes and shared experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.

Keywords: data science, intelligent systems, machine learning, big data, data life cycle, recent developments, geoscience

Procedia PDF Downloads 136
25473 TogEther: A Decentralized Application That Connects Ideas and Investors

Authors: Chandragiri Nagadeep, M. V. V. S. Durga, Sadu Mahikshith

Abstract:

Future generations depend on new ideas and innovations that develop a country's economic growth and technology standards, so startups play an important role in meeting these goals. Startups are sustained by the support of investors, but a small number of investors cannot keep supporting a startup on their own, and many security problems arise when transferring large funds to a startup's bank account. Targeting security and broad-based funding, TogEther solves these issues by providing a platform for decentralized crowdfunding, where funding is carried out in cryptocurrency and transactions are secured using blockchain technology. Not only funding: ideas, along with their documents, can also be presented and hosted with the help of IPFS (InterPlanetary File System).

Keywords: blockchain, ethereum, web3, reactjs, interplanetary file system, funding

Procedia PDF Downloads 215
25472 Structural Evaluation of Cell-Filled Pavement

Authors: Subrat Roy

Abstract:

This paper describes the findings of a study carried out to evaluate the performance of cell-filled pavement for low-volume roads. Details of the laboratory investigations and the methodology adopted for constructing the cell-filled pavement are presented. The aim of the study is to evaluate the structural behaviour of cement-concrete-filled cell pavement laid over three different types of subbase (water bound macadam, soil-cement, and moorum). A formwork of cells made from thin plastic sheet was used to construct the cell-filled pavements, forming flexible, interlocked block pavements. Surface deflections were measured using the falling weight deflectometer (FWD) and Benkelman beam methods, and the resilient moduli of the pavement layers were estimated from the measured deflections. A comparison of the deflections obtained from the two methods is also presented.
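
The simplest modulus estimate from FWD data is the Boussinesq surface modulus computed from the centre deflection; full backcalculation iterates a layered-elastic model instead. The numbers below are illustrative, not the study's measurements.

```python
import math

def surface_modulus(load_kN, radius_m, d0_micron, poisson=0.35):
    """Boussinesq surface modulus (MPa) from the FWD centre deflection:
    E0 = 2 * (1 - v^2) * sigma0 * a / d0, for a uniform circular load."""
    sigma0 = load_kN * 1e3 / (math.pi * radius_m**2)   # contact pressure (Pa)
    d0 = d0_micron * 1e-6                              # centre deflection (m)
    return 2.0 * (1.0 - poisson**2) * sigma0 * radius_m / d0 / 1e6

# Illustrative: 40 kN drop on a 150 mm radius plate, 500 micron centre deflection
print(f"E0 = {surface_modulus(40.0, 0.15, 500.0):.0f} MPa")
```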

Keywords: cell-filled pavement, WBM, FWD, Moorum

Procedia PDF Downloads 297
25471 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection for a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered, because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network, ANN) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
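
One reading of the two-step OCCNN2 idea: a classic OCC provides a coarse boundary, and its labels then train a feedforward NN that refines it. The sketch below is an interpretation with placeholder data, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))         # tracked frequencies, normal state
test = np.vstack([rng.normal(0.0, 1.0, size=(100, 4)),      # normal
                  rng.normal(3.0, 1.0, size=(100, 4))])     # anomalous
y_test = np.r_[np.zeros(100), np.ones(100)]

# Coarse step: a classic OCC estimates the boundary of the normal class
coarse = OneClassSVM(nu=0.05, gamma="scale").fit(train)
coarse_labels = (coarse.predict(train) == -1).astype(int)   # 1 = outside boundary

# Fine step: a feedforward NN refines the boundary using the coarse labels
fine = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(train, coarse_labels)

pred = fine.predict(test)
print("detection accuracy:", (pred == y_test).mean())
```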

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 124
25470 High-Pressure Crystallographic Characterization of f-Block Element Complexes

Authors: Nicholas B. Beck, Thomas E. Albrecht-Schönzart

Abstract:

High pressure results in decreases in the lengths of metal-ligand bonds, which has proven to be incredibly informative in uncovering differences in bonding between lanthanide and actinide complexes. Spectroscopic studies have shown the degree of f-electron contribution to the metal-ligand bonds to increase under pressure by a far greater degree in the actinides than in the lanthanides. However, the actual changes in bond lengths, although computationally predicted, have yet to be quantified. Using high-pressure crystallographic techniques, crystal structures of lanthanide complexes have been obtained at pressures up to 5 GPa for both hard- and soft-donor ligands. These studies have revealed some unpredicted changes in the coordination environment, as well as provided experimental support for computational results.

Keywords: crystallography, high-pressure, lanthanide, materials

Procedia PDF Downloads 107
25469 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance; it is therefore important for an organization to ensure that the data it uses are of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept was first introduced in 2020, and its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable: by decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn improves overall performance by enabling better business insights through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data; this helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years: AI can enhance data quality by automating many data-related tasks, such as data cleaning and data validation, and by integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts above are illustrated with feedback from the experience of AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 137
25468 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which grows exponentially, with traditional technologies; Hadoop is a new technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop technology. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analyses on actual datasets of different sizes. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of RHadoop with the lm function and the biglm package running in main memory. The results showed that RHadoop was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the data size grows.
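
The parallel scheme amounts to a map step that computes per-block sufficient statistics (XᵀX, Xᵀy) and a reduce step that sums them before solving the normal equations. Below is a language-neutral sketch of that map-reduce pattern in Python; the paper itself uses R with RHadoop.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(10_000), rng.normal(size=(10_000, 3))]
y = X @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.normal(size=10_000)

def map_block(Xb, yb):
    # Map task: sufficient statistics for one data block
    return Xb.T @ Xb, Xb.T @ yb

# Reduce: sum the per-block statistics, then solve the normal equations
blocks = [map_block(Xb, yb) for Xb, yb in zip(np.array_split(X, 8),
                                              np.array_split(y, 8))]
XtX = sum(b[0] for b in blocks)
Xty = sum(b[1] for b in blocks)
beta = np.linalg.solve(XtX, Xty)
print("coefficients:", beta.round(3))
```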

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 437
25467 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to address memorization overfitting in the MAML meta-learning algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth in computation, this paper also proposes a key data extraction method that extracts only part of the data for generating mutex tasks. Experiments show that the method of generating mutually exclusive tasks effectively mitigates memorization overfitting in the MAML meta-learning algorithm.
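
The core trick (one feature pattern mapped to multiple labels across tasks) can be sketched by re-labelling: each generated task applies a different label permutation, so no single input-label mapping can be memorized. This is an interpretation with toy data, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))             # features shared across tasks
y = rng.integers(0, 5, size=100)          # original labels, 5 classes

def make_mutex_task(X, y, rng):
    """Augmented task whose labels contradict the original mapping:
    the same feature vector now corresponds to a different label."""
    perm = rng.permutation(5)             # random relabelling of the 5 classes
    return X, perm[y]

# Key-data extraction: generate mutex tasks from a subset only,
# avoiding the cost of augmenting the whole dataset
idx = rng.choice(len(X), size=20, replace=False)
task_X, task_y = make_mutex_task(X[idx], y[idx], rng)
print("labels changed for", (task_y != y[idx]).sum(), "of", len(idx), "examples")
```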

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 94
25466 Effects of the Compressive Eocene Tectonic Phase in the Bou Kornine-Ressas-Messella Structure and Surroundings (Northern Tunisia)

Authors: Aymen Arfaoui, Abdelkader Soumaya

Abstract:

The Messella-Ressas-Bou Kornine (MRB) and Hammamet-Korbous (HK) major north-south-trending fault zones provide a good opportunity to show the effects of the Eocene compressive phase in northern Tunisia. They acted as paleogeographical boundaries during the Mesozoic and belonged to a significant strike-slip corridor called the «North-South Axis», extending from the Saharan platform in the south to the Gulf of Tunis in the north. Our study area is situated in a relay zone between these two significant strike-slip faults (HK and MRB), separating the Atlas domain from the Pelagian Block. We used a multidisciplinary approach, including fieldwork, stress inversion, and geophysical profiles, to document the shortening event that affected the study region. The MRB and HK contractional duplex is a privileged area for a local stress field and stress nucleation. The stress inversion of fault slip data reveals an Eocene compression with NW-SE-trending SHmax, reactivating most of the ancient Mesozoic normal faults in the region. This shortening phase is represented in the MRB belt by an angular unconformity between the Upper Eocene and various Cretaceous strata. Under this shortening, the major N-S faults are reactivated as sinistral oblique faults. The orientation of SHmax deviates from NW-SE to E-W near the preexisting deep faults of the MRB and HK. This E-W stress direction generated the emergent overlap of Ressas-Messella and blind thrust faults in the Cretaceous deposits. The connection of the sub-meridian reverse faults at depth creates "flower structures" under an E-W local compressive stress. In addition, we detected a reorientation of SHmax into an N-S direction in the central part of the MRB-HK contractional duplex, creating E-W reverse faults and overlapping zones. The Eocene compression thus constituted the first major tectonic phase that inverted the preexisting Mesozoic extensional fault system in northern Tunisia.

Keywords: Tunisia, eocene compression, tectonic stress field, Bou Kornine-Ressas-Messella

Procedia PDF Downloads 74
25465 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing the data communication overhead in wireless sensor networks, and one of its important tasks is the positioning of the aggregator points. Although a lot of work has been done on data aggregation, efficient positioning of the aggregation points has received little attention. This paper focuses on the positioning, or placement, of the aggregation points in a wireless sensor network. The authors propose an algorithm to select the aggregator positions for a scenario in which aggregator nodes are more powerful than sensor nodes.
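
One standard way to place a single aggregation point is at the geometric median of the sensor locations, which minimizes the total transmission distance. The Weiszfeld iteration below is a generic illustration of that idea, not necessarily the authors' algorithm.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld iteration: position minimizing the sum of distances to sensors."""
    p = points.mean(axis=0)                     # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - p, axis=1)
        w = 1.0 / np.maximum(d, eps)            # guard against zero distance
        p = (points * w[:, None]).sum(axis=0) / w.sum()
    return p

sensors = np.random.default_rng(0).uniform(0, 100, size=(30, 2))
print("aggregation point:", geometric_median(sensors).round(2))
```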

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 161
25464 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available for modelling data collected with reference to location in space, from classical spatial econometrics to recent developments in modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. The review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to identify possible new directions for the processing of count data in a spatial hierarchical Bayesian econometric context.

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 595
25463 Increasing the Speed of the Apriori Algorithm by Dimension Reduction

Authors: A. Abyar, R. Khavarzadeh

Abstract:

The most basic and important decision-making tool for industrial and service managers is an understanding of the market and of customer behavior. In this regard, the Apriori algorithm, one of the well-known machine learning methods, is used to identify customer preferences. With the increasing diversity of goods and services and the speed at which customer behavior changes, we are faced with big data, and the large number of competitors makes continuous analysis of this big data an urgent need. However, the speed of the Apriori algorithm decreases as the data volume increases. In this paper, the big data PCA method is used to reduce the dimension of the data in order to increase the speed of the Apriori algorithm. In the simulation section, the results are examined by generating data of different volumes and diversity. The results show that, when using this method, the speed of the Apriori algorithm increases significantly.
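
One way to realize the described pipeline: use PCA loadings on the binary basket matrix to keep only the items carrying most of the variance, then run Apriori on the reduced matrix. The sketch uses mlxtend, and the top-loading selection rule is an assumption, not the paper's exact method.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from mlxtend.frequent_patterns import apriori

# Synthetic binary basket matrix: 1000 transactions x 50 items
rng = np.random.default_rng(0)
baskets = pd.DataFrame(rng.random((1000, 50)) < 0.2,
                       columns=[f"item_{i}" for i in range(50)])

# PCA on the item columns; keep the items with the largest loadings
pca = PCA(n_components=5).fit(baskets.astype(float))
importance = np.abs(pca.components_).sum(axis=0)
keep = baskets.columns[np.argsort(importance)[-20:]]   # top 20 items (assumption)

# Apriori now searches a much smaller itemset lattice
frequent = apriori(baskets[keep], min_support=0.05, use_colnames=True)
print(frequent.head())
```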

Keywords: association rules, Apriori algorithm, big data, big data PCA, market basket analysis

Procedia PDF Downloads 3
25462 A NoSQL-Based Approach for Real-Time Management of Robotics Data

Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir

Abstract:

This paper deals with the continual growth of data, for which new data management solutions have emerged: NoSQL databases. These have spread across several areas, such as personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. Nowadays, these database management systems are proliferating. They store data very well, and with the trend of big data, new storage challenges demand new structures and methods for managing enterprise data. New intelligent machines, such as those in the e-learning sector, thrive on more data, so smart machines can learn more and faster. Robotics is the use case on which we focus our tests: implementing NoSQL for robotics wrestles all the data robots acquire into usable form, because with ordinary approaches to robotics we face severe limits in managing and finding the exact information in real time. Our proposed approach was demonstrated by experimental studies and a running example used as a use case.
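
A document store maps naturally onto heterogeneous robot telemetry. The fragment below shows the kind of schema-free insert and indexed real-time query meant here, using MongoDB via pymongo; the connection details, database, and field names are illustrative, not the paper's setup.

```python
from datetime import datetime, timezone
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")       # illustrative connection
readings = client["robotics"]["sensor_readings"]

# Schema-free insert: each robot can report a different payload shape
readings.insert_one({
    "robot_id": "arm-01",
    "ts": datetime.now(timezone.utc),
    "lidar": {"ranges": [1.2, 1.4, 0.9]},
    "battery_pct": 87,
})

# Index on (robot_id, ts) so real-time lookups stay fast as data grows
readings.create_index([("robot_id", ASCENDING), ("ts", ASCENDING)])
latest = readings.find({"robot_id": "arm-01"}).sort("ts", -1).limit(5)
for doc in latest:
    print(doc["ts"], doc.get("battery_pct"))
```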

Keywords: NoSQL databases, database management systems, robotics, big data

Procedia PDF Downloads 356
25461 Evaluation of Reservoir Quality in Cretaceous Sandstone Complex, Western Flank of Anambra Basin, Southern Nigeria

Authors: Bayole Omoniyi

Abstract:

This study demonstrates the value of outcrops as analogues for evaluating the reservoir quality of sandbodies in a typical high-sinuosity fluvial system. It utilizes data acquired from selected outcrops in the Campanian-Maastrichtian siliciclastic succession of the western flank of the Anambra Basin, southern Nigeria. Textural properties derived from outcrop samples were correlated and compared with porosity and permeability using established standard charts; porosity was also estimated from thin sections of selected samples to reduce uncertainty in the estimates. Following facies classification, 14 distinct facies were grouped into three facies associations (FA1-FA3) and subsequently modelled as discrete properties in a block-centered Cartesian grid on a scale that captures the geometry of the principal sandbodies. The chart-estimated porosity and permeability were populated in the grid using comparable geostatistical techniques that reflect their spatial distribution, and the resultant models were conditioned to the facies property to honour the available data. The results indicate a strong control of geometrical parameters on facies distribution, lateral continuity, and connectivity, with a resultant effect on the porosity and permeability distribution. Sand-prone FA1 and FA2 display reservoir quality that varies internally from channel axis to margin in each succession. Furthermore, the isolated stacking pattern of sandbodies reduces static connectivity and thus increases the risk of poor communication between reservoir-quality sandbodies. FA3 is non-reservoir because it is mud-prone. In conclusion, the risk of poor communication between sandbodies may be accentuated in reservoirs with similar architecture because of thick lateral-accretion deposits, usually mudstone, that tend to disconnect good-quality point-bar sandbodies. In such reservoirs, mudstone may act as a barrier that impedes flow vertically from one sandbody to another, and laterally at the margins of each channel-fill succession. The development plan must therefore be designed to mitigate these risks, including the risk of stratigraphic compartmentalization, for maximum hydrocarbon recovery.
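
The modelling step described (discrete facies in a block-centred grid, with porosity populated per facies) reduces in outline to something like the numpy sketch below; the grid dimensions, facies proportions, and property ranges are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 40, 30, 10                      # block-centred Cartesian grid

# Discrete facies property: 0 = FA1, 1 = FA2 (sand-prone), 2 = FA3 (mud-prone)
facies = rng.choice([0, 1, 2], size=(nx, ny, nz), p=[0.4, 0.35, 0.25])

# Populate porosity conditioned to facies (illustrative per-facies ranges)
poro = np.empty_like(facies, dtype=float)
for code, (lo, hi) in {0: (0.15, 0.25), 1: (0.10, 0.20), 2: (0.01, 0.05)}.items():
    mask = facies == code
    poro[mask] = rng.uniform(lo, hi, size=mask.sum())

print("mean porosity, sand-prone FA1:", poro[facies == 0].mean().round(3))
```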

Keywords: analogues, architecture, connectivity, fluvial

Procedia PDF Downloads 27
25460 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multiple sources is essential to improve clustering performance. In practice, however, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge. Ensembles are versatile machine learning models in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble method, called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments were performed on standard sample data sets, and the results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
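
A plain co-association clustering ensemble, the kind of baseline FOMOCE generalizes, looks like this in outline: multiple base clusterings vote on whether two points belong together, and a consensus clustering is derived from the vote matrix. Data and cluster counts are placeholders.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Base clusterings (these could come from different sources or objectives)
labels = [KMeans(n_clusters=3, n_init=10, random_state=s).fit_predict(X)
          for s in range(10)]

# Co-association matrix: fraction of base clusterings that co-cluster i and j
co = np.mean([np.equal.outer(l, l).astype(float) for l in labels], axis=0)

# Consensus clustering on the co-association distances
consensus = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - co)
print("consensus cluster sizes:", np.bincount(consensus))
```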

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 191