Search results for: network data mining
25204 Corrosion Interaction Between Steel and Acid Mine Drainage: Use of AI Based on Fuzzy Logic
Authors: Maria Luisa de la Torre, Javier Aroba, Jose Miguel Davila, Aguasanta M. Sarmiento
Abstract:
Steel is one of the most widely used materials in polymetallic sulfide mining installations. One of the main problems suffered by these facilities is the economic loss due to the corrosion of this material, which is accelerated and aggravated by contact with the acid waters generated in these mines when sulfides come into contact with oxygen and water. The generation of acidic water is, in turn, accelerated by the presence of acidophilic bacteria. In order to gain a more detailed understanding of this corrosion process and the interaction between steel and acidic water, a laboratory experiment was carried out in which carbon steel plates were immersed in four different solutions for 27 days: distilled water (BK), intended to simulate the effect of rain on this material; an acid solution from a mine with a high Fe2+/Fe3+ ratio (PO); an acid solution from another mine with a high Fe3+/Fe2+ ratio (PH); and, finally, one that reproduced the acid mine water with a high Fe2+/Fe3+ ratio but contained no bacteria (ST). Every 24 hours, physicochemical parameters were measured, and water samples were taken for analysis of the dissolved elements. The results of these measurements were processed using an explainable AI model based on fuzzy logic. In all cases, there was an increase in pH, as well as in the concentrations of Fe and, in particular, Fe(II), as a consequence of the oxidation of the steel plates. Proportionally, the increase in Fe concentration was higher in PO and ST than in PH because Fe precipitates formed in the latter. The rise in Fe(II) was proportionally much higher in PH, especially in the first hours of exposure, because it started from a lower initial concentration of this ion. Although to a lesser extent than in PH, the greater increase in Fe(II) also occurred faster in PO than in ST, a consequence of the action of the catalytic bacteria. On the other hand, Cu concentrations decreased throughout the experiment (with the exception of distilled water, which initially had no Cu) as a result of an electrochemical process that causes Cu to precipitate together with Fe hydroxides. This decrease was smaller in PH because the high total acidity keeps Cu in solution for a longer time. With the application of an artificial intelligence tool, it has been possible to evaluate the effects of steel corrosion in mining environments, corroborating and extending what was obtained by means of classical statistics.
Keywords: acid mine drainage, artificial intelligence, carbon steel, corrosion, fuzzy logic
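The abstract does not spell out the fuzzy rule base, so the following is a minimal illustration only: a toy explainable fuzzy-logic model in the style described, written with the scikit-fuzzy control API. The variables, universes, memberships, and rules are all assumptions, not the authors' model.

```python
# Toy fuzzy rule base relating solution pH and exposure time to expected Fe
# release; every range and rule here is illustrative, not from the study.
import numpy as np
from skfuzzy import control as ctrl

ph = ctrl.Antecedent(np.arange(0, 7.1, 0.1), 'ph')
hours = ctrl.Antecedent(np.arange(0, 649, 1), 'hours')          # 27 days = 648 h
fe_release = ctrl.Consequent(np.arange(0, 101, 1), 'fe_release')

ph.automf(3, names=['low', 'medium', 'high'])
hours.automf(3, names=['early', 'mid', 'late'])
fe_release.automf(3, names=['small', 'moderate', 'large'])

rules = [
    ctrl.Rule(ph['low'] & hours['late'], fe_release['large']),
    ctrl.Rule(ph['medium'] & hours['mid'], fe_release['moderate']),
    ctrl.Rule(ph['high'] | hours['early'], fe_release['small']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['ph'] = 2.5      # strongly acid mine water
sim.input['hours'] = 400
sim.compute()
print(sim.output['fe_release'])
```

Each fired rule can be read directly ("low pH and long exposure imply large Fe release"), which is the sense in which such a model is explainable.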
Procedia PDF Downloads 8
25203 Speech Perception by Video Hosting Services Actors: Urban Planning Conflicts
Authors: M. Pilgun
Abstract:
The report presents the results of a study of the specifics of speech perception by actors of video hosting services, based on material from urban planning conflicts. To analyze the content, a multimodal approach using neural network technologies was employed. Analysis of word associations and associative networks of the relevant stimuli revealed the evaluative reactions of the actors. Analysis of the data identified key topics that generated negative and positive perceptions among the participants. The calculation of social stress and social well-being indices based on user-generated content made it possible to build a rating of road transport construction objects according to the degree of negative and positive perception by actors.
Keywords: social media, speech perception, video hosting, networks
Procedia PDF Downloads 147
25202 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Large volumes of open-source C and C++ code are now available, making it possible to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we retain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require higher execution time, as the word embedding algorithm adds some complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
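As a rough sketch of the embedding-plus-recurrent-classifier stage described above (not the authors' architecture; the vocabulary size, sequence length, and layer widths are assumed), a Keras BiLSTM over integer-encoded function tokens might look like this:

```python
# Illustrative BiLSTM vulnerability classifier; all sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 300
model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 128),          # could be seeded with GloVe/fastText vectors
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='sigmoid'), # vulnerable vs. not vulnerable
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

x = np.random.randint(0, VOCAB, size=(8, MAXLEN))  # integer-encoded token sequences
y = np.random.randint(0, 2, size=(8, 1))           # 0/1 labels
model.fit(x, y, epochs=1, verbose=0)
```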
Procedia PDF Downloads 89
25201 Modelling for Roof Failure Analysis in an Underground Cave
Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández
Abstract:
Roof collapse remains one of the most frequent problems in mines around the world. There are many possible causes, including stress changes during the mining process, lack of vigilance and carelessness, the complexity of the geological structure, and irregular operations. This work is the result of the analysis of an accident that occurred in the “Mary” coal mine located in northern Spain. In this accident, the roof of a crossing of galleries excavated to exploit the “Morena” layer, 700 m deep, collapsed. The paper collects the work done by the forensic team to determine the causes of the incident, its conclusions, and its recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality. For this work, the three-dimensional finite-difference software FLAC 3D was employed. The results of the study confirmed that the detachment originated from a slide in the layer wall, due to the large roof span present at the site of the accident, and was probably triggered by an insufficient protection pillar. The results made it possible to establish corrective measures to avoid future risks, for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more adequate for these conditions, in which the significant deformations may discourage the use of rigid supports such as shotcrete. Finally, a seismic monitoring grid was proposed as a predictive system. Its efficiency was tested over the investigation period using three monitoring units, which detected new (although smaller) incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations that are a new risk factor to analyse in the near future.
Keywords: forensic analysis, hypothesis modelling, roof failure, seismic monitoring
Procedia PDF Downloads 115
25200 Analysis of the Occurrence of Hydraulic Fracture Phenomena in Roudbar Lorestan Dam
Authors: Masoud Ghaemi, MohammadJafar Hedayati, Faezeh Yousefzadeh, Hoseinali Heydarzadeh
Abstract:
According to the statistics of the International Committee on Large Dams, internal erosion and piping (scour) are major causes of the destruction of earth-fill dams. If such dams are constructed in narrow valleys, the valley walls will increase the arching of the dam body due to the transfer of vertical and horizontal stresses, so the occurrence of hydraulic fracturing in these embankments is more likely. The Roudbar Dam in Lorestan is a clay-core, pebble earth-fill dam constructed in a relatively narrow valley in western Iran. Three years after the onset of impoundment, a decline in dam behavior was observed. Evaluation of the dam behavior based on the data recorded by the instruments installed inside the dam body and foundation confirms the occurrence of internal erosion in the lower and adjacent parts of the core at the left abutment. The phenomenon of hydraulic fracturing is one of the main candidate causes of the onset of internal erosion in this dam. Accordingly, the main objective of this paper is to evaluate the validity of this hypothesis. To do so, the dam behavior during construction and impoundment was first simulated with a three-dimensional numerical model. Then, using validated empirical equations, the safety factor against the occurrence of hydraulic fracturing upstream of the dam core was calculated. Finally, using an artificial neural network, the failure time of the given section was predicted based on the trend of the maximum stress created. The study results show that the steep slopes of the valley walls, sudden changes in their slope, and differences in the compressibility properties of the dam body materials have caused considerable stress transfer from the core to the adjacent valley walls, especially at its lower levels. This has resulted in the safety factor for the occurrence of hydraulic fracturing in each of these areas being close to one in each of the empirical equations used.
Keywords: arching, artificial neural network, FLAC3D, hydraulic fracturing, internal erosion, pore water pressure
Procedia PDF Downloads 177
25199 Conceptual Model of a Residential Waste Collection System Using ARENA Software
Authors: Bruce G. Wilson
Abstract:
The collection of municipal solid waste at the curbside is a complex operation that is repeated daily under varying circumstances around the world. There have been several attempts to develop Monte Carlo simulation models of the waste collection process dating back almost 50 years. Despite this long history, the use of simulation modeling as a planning or optimization tool for waste collection is still extremely limited in practice. Historically, simulation modeling of waste collection systems has been hampered by the limitations of computer hardware and software and by the availability of representative input data. This paper outlines the development of a Monte Carlo simulation model that overcomes many of the limitations contained in previous models. The model uses a general-purpose simulation software program that is easily capable of modeling an entire waste collection network. The model treats the stops on a waste collection route as a queue of work to be processed by a collection vehicle (or server). Input data can be collected from a variety of sources, including municipal geographic information systems, global positioning system recorders on collection vehicles, and weigh scales at transfer stations or treatment facilities. The result is a flexible model that is sufficiently robust that it can model the collection activities in a large municipality, while providing the flexibility to adapt to changing conditions on the collection route.
Keywords: modeling, queues, residential waste collection, Monte Carlo simulation
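To make the queueing idea concrete, here is a plain-Python Monte Carlo sketch (not the authors' ARENA model): stops form a queue of work for a single vehicle, and the set-out probability, per-stop waste mass, and truck capacity are all assumed values.

```python
# Toy Monte Carlo of one collection route: stops are a queue of work; the
# truck (server) processes them until full. All distributions are assumed.
import random

random.seed(42)

def simulate_route(n_stops=500, capacity_kg=5000, runs=1000):
    stops_served = []
    for _ in range(runs):
        load, served = 0.0, 0
        for _ in range(n_stops):
            setout = random.random() < 0.8          # did the household set out waste?
            mass = random.lognormvariate(2.7, 0.5) if setout else 0.0  # kg per stop
            if load + mass > capacity_kg:
                break                               # truck full: trip to transfer station
            load += mass
            served += 1
        stops_served.append(served)
    return sum(stops_served) / len(stops_served)

print(f"mean stops served before a full load: {simulate_route():.0f}")
```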
Procedia PDF Downloads 400
25198 Quality of Romanian Food Products on Rapid Alert System for Food and Feed Notifications
Authors: Silvius Stanciu
Abstract:
Romanian food products sold on European markets have been cited for several non-conformities of quality and safety. Most of the products incriminated in the recent period were of animal origin, especially meat and meat products. The study proposes an analysis of the notifications made by network members through the Rapid Alert System for Food and Feed (RASFF) on products originating in Romania. As sources of information, the Rapid Alert System portal and the official communications of the National Sanitary Veterinary and Food Safety Authority were used. The research results showed that nearly a quarter of the network notifications were rejected and withdrawn by the European authority. Although national authorities present these cases as success stories of national quality policies, the large number of notifications relative to the volume of exported products is worrying. The paper is of practical and applied importance for both the business environment and the academic environment, laying the basis for wider research on the quality differences between Romanian and imported products.
Keywords: food, quality, RASFF, Rapid Alert System for Food and Feed, Romania
Procedia PDF Downloads 160
25197 Emerging Technology for Business Intelligence Applications
Authors: Hsien-Tsen Wang
Abstract:
Business Intelligence (BI) has long helped organizations make informed decisions based on data-driven insights and gain competitive advantages in the marketplace. In the past two decades, businesses witnessed not only the dramatically increasing volume and heterogeneity of business data but also the emergence of new technologies, such as Artificial Intelligence (AI), the Semantic Web (SW), Cloud Computing, and Big Data. It is plausible that the convergence of these technologies would bring more value out of business data by establishing linked data frameworks and connecting data in ways that enable advanced analytics and improved data utilization. In this paper, we first review and summarize current BI applications and methodology. Emerging technologies that can be integrated into BI applications are then discussed. Finally, we conclude with a proposed synergy framework that aims at achieving a more flexible, scalable, and intelligent BI solution.
Keywords: business intelligence, artificial intelligence, semantic web, big data, cloud computing
Procedia PDF Downloads 94
25196 A Medical Resource Forecasting Model for Emergency Room Patients with Acute Hepatitis
Authors: R. J. Kuo, W. C. Cheng, W. C. Lien, T. J. Yang
Abstract:
Taiwan is a hyperendemic area for the hepatitis B virus (HBV). The estimated total number of HBsAg carriers in the general population over 20 years old is more than 3 million. Therefore, a case record review was conducted from January 2003 to June 2007 for all patients with a diagnosis of acute hepatitis who were admitted to the Emergency Department (ED) of a well-known teaching hospital. The cost of the use of medical resources is defined as the total medical fee. In this study, principal component analysis (PCA) is first employed to reduce the number of dimensions. Support vector regression (SVR) and an artificial neural network (ANN) are then used to develop the forecasting model. A total of 117 patients met the inclusion criteria. 61% of the patients involved in this study were hepatitis B related. The computational results show that the proposed PCA-SVR model outperforms the other algorithms compared. In conclusion, the Child-Pugh score and echogram can both be used to predict the cost of medical resources for patients with acute hepatitis in the ED.
Keywords: acute hepatitis, medical resource cost, artificial neural network, support vector regression
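A minimal sketch of the PCA-then-SVR pipeline named above, using scikit-learn; the feature count, number of components, and hyperparameters are placeholders rather than the study's settings, and the data here are synthetic:

```python
# Sketch of a PCA -> SVR forecasting pipeline; all settings are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(117, 20))          # 117 patients, 20 clinical features (synthetic)
y = rng.gamma(2.0, 1000.0, size=117)    # total medical fee (synthetic)

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(C=10.0))
model.fit(X, y)
print(model.predict(X[:3]))             # predicted medical resource cost
```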
Procedia PDF Downloads 422
25195 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such a dynamical system is difficult because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical processes, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at the county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method to calibrate the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
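As an illustration of the evaluation protocol described (5-fold cross-validation scored with RMSEP and MAEP), the sketch below runs a Random Forest on synthetic stand-ins for the USDA county records:

```python
# 5-fold cross-validated Random Forest with RMSEP/MAEP; data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(720, 12))                      # 720 county records, climate features
y = 100 + 10 * X[:, 0] + 5 * rng.normal(size=720)   # synthetic corn yield

cv = KFold(n_splits=5, shuffle=True, random_state=1)
pred = cross_val_predict(
    RandomForestRegressor(n_estimators=300, random_state=1), X, y, cv=cv)

rmsep = mean_squared_error(y, pred) ** 0.5          # root mean square error of prediction
maep = mean_absolute_error(y, pred)                 # mean absolute error of prediction
print(f"RMSEP={rmsep:.2f}  MAEP={maep:.2f}")
```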
Procedia PDF Downloads 231
25194 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models
Authors: Reza Bazargan lari, Mohammad H. Fattahi
Abstract:
Natural resources management, including water resources, requires reliable estimation of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for pre-processing practical data sets. The predictability enhancement of river flow time series is assessed using fractal approaches before and after applying wavelet-based pre-processing. Time series correlation and persistence, the minimum sufficient length for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
Keywords: wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN
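A small sketch of wavelet-based de-noising as a pre-processing step, using PyWavelets; the wavelet, decomposition level, and universal-threshold rule are common choices assumed here, not necessarily the authors':

```python
# Wavelet de-noising of a noisy flow-like series; all choices are assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1024)
flow = np.sin(t) + 0.3 * np.sin(5 * t) + rng.normal(scale=0.2, size=t.size)

coeffs = pywt.wavedec(flow, 'db4', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
thresh = sigma * np.sqrt(2 * np.log(flow.size))       # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')                # cleaner input for the ANN
print(denoised[:5])
```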
Procedia PDF Downloads 368
25193 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza
Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue
Abstract:
Intro: The COVID-19 (Coronavirus Disease 2019) pandemic has greatly changed the global economic, political, and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was performed on chest computed tomography (CT) scans of COVID-19 and influenza patients to describe their characteristics. The predominant features of COVID-19 pneumonia were ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: the lesions are mainly on the dorsal side of the periphery of the lung, concentrated in the lower lobes, and often close to the pleura. Other features are grid-like shadows in ground-glass lesions, thickening of diseased vessels, air bronchogram signs, and halo signs. Severe disease involves both lungs in their entirety, showing white-lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in both chest cavities. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, with bronchiolar air bronchograms visible in the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly fused and prominent near the hila of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Aiming at the two major infectious diseases currently circulating in the world, COVID-19 and influenza, chest CT scans of patients with the two diseases are classified and diagnosed using deep learning algorithms. The residual network (ResNet) was proposed to solve the degradation problem that arises when a deep neural network (DNN) has too many hidden layers. ResNet is a milestone in the history of convolutional neural networks (CNNs), as it solved the problem of training very deep CNN models. Many visual tasks can obtain excellent results by fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai, based on PyTorch, packages best practices for deep learning strategies and helps find the best way to handle diagnosis issues. Based on the one-cycle approach of the Fastai algorithm, classification diagnosis of lung CT for the two infectious diseases is realized, and a high recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
Keywords: COVID-19, Fastai, influenza, transfer network
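A hedged sketch of the ResNet transfer-learning setup with fastai's one-cycle policy (fastai v2 API); the dataset folder layout with 'covid19'/'influenza' subdirectories is hypothetical:

```python
# Transfer learning with a pre-trained ResNet and one-cycle training; the
# dataset path and class folders are assumptions, not the paper's data.
from fastai.vision.all import (ImageDataLoaders, vision_learner, resnet34,
                               accuracy, Resize)

dls = ImageDataLoaders.from_folder(
    'chest_ct',              # hypothetical root with covid19/ and influenza/ subfolders
    valid_pct=0.2, seed=0,
    item_tfms=Resize(224),
)
learn = vision_learner(dls, resnet34, metrics=accuracy)  # pre-trained feature extractor
learn.fit_one_cycle(5)                                   # one-cycle learning-rate policy
```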
Procedia PDF Downloads 142
25192 Performance of VSAT MC-CDMA System Using LDPC and Turbo Codes over Multipath Channel
Authors: Hassan El Ghazi, Mohammed El Jourmi, Tayeb Sadiki, Esmail Ahouzi
Abstract:
The purpose of this paper is to model and analyze a geostationary satellite communication system based on a VSAT network and a Multicarrier CDMA (MC-CDMA) scheme, which combines multicarrier modulation with CDMA concepts. In this study, channel coding strategies (Turbo codes and LDPC codes) are adopted to achieve good performance through iterative decoding. The envisaged system is examined for transmission over a multipath channel using the Ku band in the uplink case. Simulation results are obtained for each case. The performance of the system is given in terms of Bit Error Rate (BER) and energy per bit to noise power spectral density ratio (Eb/N0). The results show that the communication system coded with LDPC codes achieves better error rate performance than the VSAT MC-CDMA system coded with Turbo codes.
Keywords: satellite communication, VSAT network, MC-CDMA, LDPC codes, turbo codes, uplink
Procedia PDF Downloads 504
25191 DeClEx-Processing Pipeline for Tumor Classification
Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba
Abstract:
Health issues are increasing significantly, putting a substantial strain on healthcare services. This has accelerated the integration of machine learning in healthcare, particularly following the COVID-19 pandemic. We introduce DeClEx, a pipeline that ensures that data mirrors real-world settings by incorporating Gaussian noise and blur and employing autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. We integrate denoising and deblurring, classification, and explainability in a single pipeline called DeClEx.
Keywords: machine learning, healthcare, classification, explainability
Procedia PDF Downloads 55
25190 Japanese and European Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective
Authors: S. Fantin
Abstract:
This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis, and policy initiatives. Recent doctrine was taken into account, too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package which includes a regulation (General Data Protection Regulation) and a directive (Directive 680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on Protection of Personal Information. Based on a comparative analysis, some asymmetries arise. The main ones concern the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are articulated differently in the two regions, while the nature of sanctions takes two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan's main text of reference is the Basic Cybersecurity Act, while the European Union has a more fragmented legal structure (to name a few, the Network and Information Security Directive, the Critical Infrastructure Directive, and the Directive on Attacks against Information Systems). On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene seems to be neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators’ liability by turning such a burden into a set of recommendations to be primarily observed by citizens. The reasons to fill such normative gaps are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime calls for considering both the magnitude of the issue and its regulatory treatment globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications between Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of bringing the two regions to significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called ‘Adequacy Decision’). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. In view of the future challenges presented by the strengthening of the collaboration between the two regions and the transnational nature of cybercrime, it is urged that solutions be found to fill in such gaps, in order to allow the European Union and Japan to wisely strengthen their partnership.
Keywords: cybersecurity, data protection, European Union, Japan
Procedia PDF Downloads 123
25189 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains 27,046 high resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves state-of-the-art segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
Procedia PDF Downloads 120
25188 Evaluation of Arsenic Removal in Contaminated Soils by the Phytoremediation Technique
Authors: V. Ibujes, A. Guevara, P. Barreto
Abstract:
The concentration of arsenic represents a serious threat to human health. It is a bioaccumulable toxic element and is transferred through the food chain. In Ecuador, values of 0.0423 mg/kg As have been registered in potatoes grown on the slopes of the Tungurahua volcano. The increase of arsenic contamination in Ecuador is mainly due to mining activity, since the process of gold extraction generates toxic tailings with mercury. In the province of Azuay, due to mining activity, the soil reaches concentrations of 2,500 to 6,420 mg/kg As, whereas in the province of Tungurahua arsenic concentrations of 6.9 to 198.7 mg/kg can be found due to volcanic eruptions. Given this arsenic contamination, the present investigation addresses the remediation of soils in the provinces of Azuay and Tungurahua by the phytoremediation technique and the definition of an extraction methodology based on analysis of arsenic in the soil-plant system. The methodology consists of selecting the two plant types with the best arsenic removal capacity in 60 μM As synthetic solutions, the lowest mortality percentage, and resistance to hydroponic conditions. The arsenic concentrations in each plant were obtained by taking 10 ml aliquots and subsequently analyzing them with ICP-OES (inductively coupled plasma optical emission spectrometry) equipment. Soils were contaminated with synthetic arsenic solutions by the capillarity method to achieve arsenic concentrations of 13 and 15 mg/kg. Subsequently, the two plant types were evaluated over 7 weeks for their ability to reduce the concentration of arsenic in the soils. The global variance for soil types was obtained with the InfoStat program. To measure the changes in arsenic concentration in the soil-plant system, the Rhizo and Wenzel arsenic extraction methodologies were used, followed by analysis with ICP-OES (Optima 8000, PerkinElmer). As a result, the selected plants were bluegrass and llanten, due to their high arsenic removal percentages of 55% and 67% and low mortality rates of 9% and 8%, respectively. In conclusion, the Azuay soil with an initial concentration of 13 mg/kg As reached concentrations of 11.49 and 11.04 mg/kg As for bluegrass and llanten, respectively, and from the initial concentration of 15 mg/kg As it reached 11.79 and 11.10 mg/kg As for bluegrass and llanten after 7 weeks. For the Tungurahua soil with an initial concentration of 13 mg/kg As, concentrations of 11.56 and 12.16 mg/kg As were reached for bluegrass and llanten, respectively, and from the initial concentration of 15 mg/kg As it reached 11.97 and 12.27 mg/kg As for bluegrass and llanten after 7 weeks. The best arsenic extraction methodology for the soil-plant system is Wenzel's.
Keywords: bluegrass, llanten, phytoremediation, soil of Azuay, soil of Tungurahua, synthetic arsenic solution
Procedia PDF Downloads 103
25187 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions
Authors: John Q. Todd
Abstract:
Given that modern equipment can provide comprehensive health, status, and error condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads might look like and how they can be filtered, visualized, calculated into metrics, used for machine learning, and turned into alerts for further action.
Keywords: condition based maintenance, equipment data, metrics, alerts
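As a toy example of the filter-metric-alert flow described (the field names and threshold are invented for illustration):

```python
# Illustrative telemetry triage: compute a rolling health metric from sensor
# readings and raise an alert when it crosses an assumed threshold.
import pandas as pd

telemetry = pd.DataFrame({
    'timestamp': pd.date_range('2024-01-01', periods=8, freq='h'),
    'bearing_temp_c': [71, 72, 74, 79, 85, 91, 96, 103],  # synthetic readings
})

telemetry['temp_avg_3h'] = telemetry['bearing_temp_c'].rolling(3).mean()
ALERT_C = 90  # assumed condition-based maintenance threshold

alerts = telemetry[telemetry['temp_avg_3h'] > ALERT_C]
for _, row in alerts.iterrows():
    print(f"{row['timestamp']}: bearing temp trend {row['temp_avg_3h']:.1f} C "
          f"- schedule inspection")
```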
Procedia PDF Downloads 188
25186 Optimal Placement and Sizing of Energy Storage System in Distribution Network with Photovoltaic Based Distributed Generation Using Improved Firefly Algorithms
Authors: Ling Ai Wong, Hussain Shareef, Azah Mohamed, Ahmad Asrul Ibrahim
Abstract:
The installation of photovoltaic-based distributed generation (PVDG) in an active distribution system can lead to voltage fluctuation due to the intermittent and unpredictable PVDG output power. This paper presents a method for mitigating the voltage rise by optimally locating and sizing the battery energy storage system (BESS) in a PVDG-integrated distribution network. The improved firefly algorithm is used to perform the optimal placement and sizing. Three objective functions are presented, considering the voltage deviation and BESS off-time, with state of charge as the constraint. The performance of the proposed method is compared with other optimization methods, such as the original firefly algorithm and the gravitational search algorithm. Simulation results show that the proposed optimum BESS location and size improve voltage stability.
Keywords: BESS, firefly algorithm, PVDG, voltage fluctuation
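The core firefly update moves each solution toward every brighter one with distance-decayed attractiveness. A bare-bones sketch on a toy objective follows; in the paper's setting, the decision vector would encode BESS location and size and the objective would come from a power-flow study, none of which is reproduced here:

```python
# Minimal firefly algorithm on a toy objective; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    return np.sum((x - 1.5) ** 2)  # toy stand-in for a voltage-deviation cost

n, dim, iters = 15, 2, 100
alpha, beta0, gamma = 0.2, 1.0, 1.0   # randomness, base attractiveness, absorption
pop = rng.uniform(0, 5, size=(n, dim))

for _ in range(iters):
    cost = np.array([objective(p) for p in pop])
    for i in range(n):
        for j in range(n):
            if cost[j] < cost[i]:              # move firefly i toward brighter j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)

best = pop[np.argmin([objective(p) for p in pop])]
print(best)  # should approach [1.5, 1.5]
```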
Procedia PDF Downloads 321
25185 Interorganizational Relationships in the Brazilian Milk Production Chain
Authors: Marcelo T. Okano, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi
Abstract:
The literature on interorganizational relationships between companies and organizations has grown in recent years, but there are still doubts about their various configurations. Interorganizational networks are important in economic life, as they facilitate the complex interdependence between transactional and cooperative organizations. A need identified in the literature is the lack of indicators to measure and identify the types of existing networks. The objective of this research is to examine the interorganizational relationships of two milk chains through indicators proposed by the theories of four authors, characterizing them as networks or not, and identifying the benefits obtained by the chain organization. To achieve this objective, a survey of milk producers in two regions of the state of São Paulo was carried out. To collect the information needed for the analysis, exploratory research of a qualitative nature was used. The research instrument consists of a script of semi-structured interviews with open questions. Some of the answers were directed by the interviewer in the form of performance notes aimed at detecting the degree of importance, according to the perceived intensity in that regard. The visits and interviews at several dairy farms showed that interorganizational relationships are limited and largely restricted to the sale of milk to dairies or cooperatives; these relationships concern only the trade between the owner and the purchaser of milk. But when the producers are organized in associations or networks, interorganizational relationships increase and bring benefits to all participants in the network.
Keywords: interorganizational networks, dairy chain, interorganizational system, São Paulo
Procedia PDF Downloads 580
25184 Location Tracking of Human Using Mobile Robot and Wireless Sensor Networks
Authors: Muazzam A. Khan
Abstract:
In the face of dangerous environmental disasters, robots are being recognized as good candidates to step in as human rescuers. Robots have been gaining the interest of many researchers in rescue matters, especially robots furnished with advanced sensors. In a distributed wireless robot system, the main objective of a rescue system is to track the location of the object continuously. This paper provides a novel idea for tracking and locating humans in a disaster area using a stereo vision system and ZigBee technology. The system recursively predicts and updates the 3D coordinates of a human in the robot camera coordinate system, which makes the system cost-effective. The system is built on a ZigBee network, which has many advantages, such as low power consumption, self-healing, low data rates, and low cost.
Keywords: stereo vision, segmentation, classification, human tracking, ZigBee module
Procedia PDF Downloads 494
25183 An Enhanced AODV Routing Protocol for Wireless Sensor and Actuator Networks
Authors: Apidet Booranawong, Wiklom Teerapabkajorndet
Abstract:
An enhanced ad-hoc on-demand distance vector routing (E-AODV) protocol for control system applications in wireless sensor and actuator networks (WSANs) is proposed. Our routing algorithm is designed by considering both wireless network communication and control system aspects. Control system error and network delay are the main selection criteria in our routing protocol. The control and communication performance is evaluated on multi-hop IEEE 802.15.4 networks for building-temperature control systems. The Gilbert-Elliott error model is employed to simulate packet loss in wireless networks. The simulation results demonstrate that the E-AODV routing approach can improve the communication performance significantly over the original AODV routing under various packet loss rates. However, the control performance achieved by our approach is not much improved compared with the AODV routing solution.
Keywords: WSANs, building temperature control, AODV routing protocol, control system error, settling time, delay, delivery ratio
Procedia PDF Downloads 339
25182 Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting
Authors: Ying Su, Morgan C. Wang
Abstract:
Long-term time series forecasting is an important research area for automated machine learning (AutoML). Currently, forecasting based on either machine learning or statistical learning is usually built by experts, and it requires significant manual effort, from model construction, feature engineering, and hyper-parameter tuning to the construction of the time series model. Automation is not possible since there are too many human interventions. To overcome these limitations, this article proposes using recurrent neural networks (RNNs), exploiting the memory state of the RNN, to perform long-term time series prediction. We have shown that this proposed approach is better than the traditional Autoregressive Integrated Moving Average (ARIMA). In addition, we also found it to be better than other network systems, including Fully Connected Neural Networks (FNN), Convolutional Neural Networks (CNN), and Non-pooling Convolutional Neural Networks (NPCNN).
Keywords: automated machine learning, autoregressive integrated moving average, neural networks, time series analysis
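A minimal sketch of the RNN forecaster idea, an LSTM whose memory state carries long-range context, emitting a direct multi-step forecast; the window length, horizon, and synthetic series are placeholders:

```python
# LSTM multi-step forecaster on a synthetic series; all sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, HORIZON = 24, 12
t = np.arange(2000, dtype='float32')
series = np.sin(0.1 * t) + 0.1 * np.random.randn(t.size).astype('float32')

n = series.size - WINDOW - HORIZON
X = np.stack([series[i:i + WINDOW] for i in range(n)])
Y = np.stack([series[i + WINDOW:i + WINDOW + HORIZON] for i in range(n)])

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(32),          # memory state carries long-range context
    layers.Dense(HORIZON),    # direct multi-step output, no recursive feeding
])
model.compile(optimizer='adam', loss='mse')
model.fit(X[..., None], Y, epochs=2, verbose=0)
print(model.predict(X[-1:][..., None]).shape)  # (1, 12) forecast
```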
Procedia PDF Downloads 105
25181 Optimal Protection Coordination in Distribution Systems with Distributed Generations
Authors: Abdorreza Rabiee, Shahla Mohammad Hoseini Mirzaei
Abstract:
The advantages of distributed generations (DGs) based on renewable energy sources (RESs) lead to a high penetration level of DGs in distribution networks. With the incorporation of DGs in distribution systems, system reliability and security, as well as the voltage profile, are improved. However, the protection of such systems is still challenging. In this paper, the related literature is first reviewed, and then a practical scheme is proposed for the coordination of overcurrent relays (OCRs) in distribution systems with DGs. The coordination problem is formulated as a nonlinear programming (NLP) optimization problem with the objective of minimizing the total operating time of the OCRs. The proposed method is studied on a simple test system. The optimization problem is solved with the General Algebraic Modeling System (GAMS) to calculate the optimal time dial setting (TDS) and the pickup current setting of the OCRs. The results show the effectiveness of the proposed method and its applicability.
Keywords: distributed generation, DG, distribution network, over current relay, OCR, protection coordination, pickup current, time dial setting, TDS
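The operating time being minimized follows the standard inverse-time overcurrent characteristic, t = TDS * A / ((I/Ip)^B - 1). The sketch below evaluates it with the IEC standard-inverse constants (A = 0.14, B = 0.02) purely as an illustration of how TDS and pickup current shape coordination margins; the fault currents are made up:

```python
# Inverse-time overcurrent relay characteristic (IEC standard-inverse
# constants used for illustration only).
def relay_time(i_fault, tds, i_pickup, a=0.14, b=0.02):
    """Operating time (s) of an inverse-time overcurrent relay."""
    return tds * a / ((i_fault / i_pickup) ** b - 1)

primary = relay_time(i_fault=2000, tds=0.1, i_pickup=400)   # downstream relay
backup = relay_time(i_fault=2000, tds=0.3, i_pickup=600)    # upstream relay
print(f"primary {primary:.2f}s, backup {backup:.2f}s, "
      f"margin {backup - primary:.2f}s")  # margin must stay positive for coordination
```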
Procedia PDF Downloads 138
25180 Enhancing Healthcare Delivery in Low-Income Markets: An Exploration of Wireless Sensor Network Applications
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Healthcare delivery in low-income markets is fraught with numerous challenges, including limited access to essential medical resources, inadequate healthcare infrastructure, and a significant shortage of trained healthcare professionals. These constraints lead to suboptimal health outcomes and a higher incidence of preventable diseases. This paper explores the application of Wireless Sensor Networks (WSNs) as a transformative solution to enhance healthcare delivery in these underserved regions. WSNs, comprising spatially distributed sensor nodes that collect and transmit health-related data, present opportunities to address critical healthcare needs. Leveraging WSN technology facilitates real-time health monitoring and remote diagnostics, enabling continuous patient observation and early detection of medical issues, especially in areas with limited healthcare facilities and professionals. The implementation of WSNs can enhance the overall efficiency of healthcare systems by enabling timely interventions, reducing the strain on healthcare facilities, and optimizing resource allocation. This paper highlights the potential benefits of WSNs in low-income markets, such as cost-effectiveness, increased accessibility, and data-driven decision-making. However, deploying WSNs involves significant challenges, including technical barriers like limited internet connectivity and power supply, alongside concerns about data privacy and security. Moreover, robust infrastructure and adequate training for local healthcare providers are essential for successful implementation. It further examines future directions for WSNs, emphasizing innovation, scalable solutions, and public-private partnerships. By addressing these challenges and harnessing the potential of WSNs, it is possible to revolutionize healthcare delivery and improve health outcomes in low-income markets.
Keywords: wireless sensor networks (WSNs), healthcare delivery, low-income markets, remote patient monitoring, health data security
Procedia PDF Downloads 36
25179 Ethics Can Enable Open Source Data Research
Authors: Dragana Calic
Abstract:
The openness, availability, and sheer volume of big data have provided what some regard as an invaluable and rich dataset. Researchers, businesses, advertising agencies, medical institutions, to name only a few, collect, share, and analyze this data to enable their processes and decision making. However, there are important ethical considerations associated with the use of big data. The rapidly evolving nature of online technologies has overtaken the many legislative, privacy, and ethical frameworks and principles that exist. For example, should we obtain consent to use people’s online data, and under what circumstances can privacy considerations be overridden? Current guidance on how to appropriately and ethically handle big data is inconsistent. Consequently, this paper focuses on two quite distinct but related ethical considerations that are at the core of the use of big data for research purposes. They include empowering the producers of data and empowering researchers who want to study big data. The first consideration focuses on informed consent, which is at the core of empowering producers of data. In this paper, we discuss some of the complexities associated with informed consent and consider studies of producers’ perceptions to inform research ethics guidelines and practice. The second consideration focuses on the researcher. Similarly, we explore studies that focus on researchers’ perceptions and experiences.
Keywords: big data, ethics, producers’ perceptions, researchers’ perceptions
Procedia PDF Downloads 284
25178 An IM-COH Algorithm Neural Network Optimization with Cuckoo Search Algorithm for Time Series Samples
Authors: Wullapa Wongsinlatam
Abstract:
The back propagation algorithm (BP) is a widely used technique in artificial neural networks and has been used as a tool for solving time series problems; open issues include decreasing training time, reducing the tendency to fall into local minima, and optimizing sensitivity to the initial weights and bias. This paper proposes an improvement of the BP technique, called the improved control output hidden layer algorithm (IM-COH). By combining the IM-COH algorithm with the cuckoo search algorithm (CS), the result is the cuckoo search improved control output hidden layer algorithm (CS-IM-COH). This new algorithm has a better ability to optimize sensitivity to the initial weights and bias than the original BP algorithm. In this research, the CS-IM-COH algorithm is compared with the original BP, the IM-COH, and the original BP with CS (CS-BP). Furthermore, selected benchmarks, four time series samples, are shown in this research for illustration. The research shows that the CS-IM-COH algorithm gives the best forecasting results on the selected samples.
Keywords: artificial neural networks, back propagation algorithm, time series, local minima problem, metaheuristic optimization
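For reference, a bare-bones cuckoo search sketch (Lévy flights via Mantegna's algorithm) of the kind used to tune initial weights and biases; the sphere function stands in for the network's training error, and all constants are assumptions:

```python
# Minimal cuckoo search with Levy flights; constants and objective are toys.
import numpy as np
from math import gamma

rng = np.random.default_rng(4)

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def objective(x):
    return np.sum(x ** 2)   # placeholder for the network's training error

n, dim, pa = 20, 5, 0.25    # nests, dimension, abandonment probability
nests = rng.uniform(-5, 5, (n, dim))
for _ in range(200):
    fit = np.array([objective(x) for x in nests])
    best = nests[np.argmin(fit)]
    for i in range(n):
        cand = nests[i] + 0.01 * levy_step(dim) * (nests[i] - best)  # Levy flight
        if objective(cand) < fit[i]:
            nests[i] = cand
    # abandon a fraction pa of the worst nests and rebuild them at random
    worst = np.argsort([objective(x) for x in nests])[-int(pa * n):]
    nests[worst] = rng.uniform(-5, 5, (len(worst), dim))

print(min(objective(x) for x in nests))
```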
Procedia PDF Downloads 152
25177 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)
Authors: Tarek Duzan
Abstract:
Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenging part in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, available well tests, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and accurately simulate their influence on production. Moreover, production from this field follows a two-phase plan: self-depletion of oil, followed by a gas injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.
Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data
Procedia PDF Downloads 93
25176 An Intelligent Thermal-Aware Task Scheduler in Multiprocessor System on a Chip
Authors: Sina Saadati
Abstract:
Multiprocessor systems-on-chip (MPSoCs) are widely used in modern computers to execute sophisticated software and applications. These systems include different processors for distinct purposes. Most of the proposed task schedulers attempt to reduce energy consumption. In some schedulers, the processor's temperature is considered in order to increase the system's reliability and performance. In this research, we propose a new method for thermal-aware task scheduling based on an artificial neural network (ANN). This method enables us to consider a variety of factors in the scheduling process. Factors such as ambient temperature, season (which is important for some embedded systems), processor speed, and the computational type of the tasks have a complex relationship with the final temperature of the system. This issue can be addressed with a machine learning algorithm. Another point is that our solution makes the system intelligent, so that it can be adaptive. We have also shown that the computational complexity of the proposed method is low. As a consequence, it is also suitable for battery-powered systems.
Keywords: task scheduling, MPSoC, artificial neural network, machine learning, architecture of computers, artificial intelligence
Procedia PDF Downloads 103
25175 Evaluation of the Effectiveness of a Sewage Treatment Plant in Oman: Samail Case Study
Authors: Azza Mohsin Al-Hashami, Reginald Victor
Abstract:
Treatment of wastewater involves physical, chemical, and biological processes to remove pollutants from the wastewater. This study evaluates the effectiveness of the sewage treatment plant (STP) in Samail, Oman. The Samail STP provides tertiary treatment using conventional activated sludge with surface aeration. Wastewater is collected through a network with a total length of about 60 km, and also by tankers for the areas outside the network. Treated wastewater from this STP is used for the irrigation of vegetation on the STP premises and as backwash for sand filters. Some treated water is supplied to the Samail municipality, which uses it for landscaping, road construction, and the 'Million Date Palms' project. In this study, homogeneous samples were taken from eight different treatment stages along the treatment continuum for one year, at a frequency of once a month, to evaluate the physical, chemical, and biological parameters. All samples were analyzed using the standard methods for the examination of water and wastewater. The spatial variations in water quality along the continuum are discussed. Despite these variations, the treated wastewater from the Samail STP was of good quality, and most of the parameters were within the Class A category of the Oman standards for wastewater reuse and discharge.
Keywords: wastewater, STP, treatment, processes
Procedia PDF Downloads 183