Search results for: Diffie-Hellman algorithm
1372 Binarization and Recognition of Characters from Historical Degraded Documents
Authors: Bency Jacob, S.B. Waykar
Abstract:
Degradations in historical document images appear due to aging of the documents. It is very difficult to understand and retrieve text from badly degraded documents because of the variation between the document foreground and background. Thresholding of such document images either results in broken characters or in the detection of false text. Numerous algorithms exist that can separate text and background efficiently in the textual regions of a document, but portions of the background are mistaken for text in areas that hardly contain any text. This paper presents a way to overcome these problems with a robust binarization technique that recovers the text from severely degraded document images and thereby increases the accuracy of optical character recognition systems. The proposed document recovery algorithm efficiently removes degradations from document images. We use Otsu's method together with local and global thresholding, and after binarization we train on and recognize the characters in the degraded documents.
Keywords: binarization, denoising, global thresholding, local thresholding, thresholding
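As a concrete illustration of the thresholding step, the minimal Python sketch below contrasts Otsu's global threshold with a local (adaptive) threshold using scikit-image; the file name, block size, and the way the two masks are combined are assumptions for demonstration, not the paper's exact procedure.

```python
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu, threshold_local

# Load a degraded document scan as grayscale (file name is hypothetical).
image = io.imread("degraded_document.png", as_gray=True)

# Global (Otsu) threshold: one cutoff for the whole page.
t_global = threshold_otsu(image)
text_global = image < t_global          # text pixels are darker than background

# Local threshold: a per-pixel cutoff estimated from a neighborhood, which
# copes better with the uneven background of degraded documents.
t_local = threshold_local(image, block_size=35, offset=0.02)
text_local = image < t_local

# Crude combination: keep a pixel as text only when both agree, which
# suppresses false text detected in nearly empty regions.
text = np.logical_and(text_global, text_local)
io.imsave("binarized.png", (~text * 255).astype(np.uint8))
```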
Procedia PDF Downloads 342
1371 Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance
Authors: R. Ajgou, S. Sbaa, S. Ghendir, A. Chemsa, A. Taleb-Ahmed
Abstract:
Speech enhancement algorithms aim to improve speech quality. In this paper, we review several speech enhancement methods and evaluate their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All methods were evaluated in the presence of different kinds of noise using the TIMIT database and the NOIZEUS noisy speech corpus. The noise was taken from the AURORA database and includes suburban train, babble, car, exhibition hall, restaurant, street, airport, and train station noise. Simulation results showed improved speech enhancement performance for the tracking-of-non-stationary-noise approach in comparison with the other methods in terms of the PESQ measure. Moreover, we evaluated the effects of the speech enhancement techniques on a speaker identification system based on an autoregressive (AR) model and Mel-frequency cepstral coefficients (MFCC).
Keywords: speech enhancement, PESQ, speaker recognition, MFCC
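The snippet below sketches what such an evaluation loop might look like in Python, scoring a noisy signal and an enhanced version against the clean reference with the `pesq` package (an implementation of ITU-T P.862). The Wiener filter merely stands in for the enhancement methods compared in the paper, and the file names are assumptions; mono recordings at 8 or 16 kHz are assumed, since PESQ only accepts those rates.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import wiener
from pesq import pesq  # pip install pesq

# Hypothetical file names; TIMIT/NOIZEUS files would be used in practice.
fs, clean = wavfile.read("clean.wav")
_, noisy = wavfile.read("noisy.wav")
clean = clean.astype(np.float64)        # assumes mono signals
noisy = noisy.astype(np.float64)

# A simple Wiener filter stands in for the enhancement methods under test.
enhanced = wiener(noisy, mysize=29)

# PESQ supports 8 kHz ('nb') and 16 kHz ('wb') sample rates only.
mode = "wb" if fs == 16000 else "nb"
print("PESQ noisy   :", pesq(fs, clean, noisy, mode))
print("PESQ enhanced:", pesq(fs, clean, enhanced, mode))
```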
Procedia PDF Downloads 422
1370 A Deep Learning Based Integrated Model For Spatial Flood Prediction
Authors: Vinayaka Gude Divya Sampath
Abstract:
The research introduces an integrated prediction model to assess the susceptibility of roads in a future flooding event. The model consists of a deep learning algorithm for forecasting gauge height data and the Flood Inundation Mapper (FIM) for spatial flooding. An optimal architecture for a long short-term memory (LSTM) network was identified for the gauge located on the Tangipahoa River at Robert, LA. Dropout was applied to the model to evaluate the uncertainty associated with the predictions. The estimates are then used along with FIM to identify the spatial flooding. Further geoprocessing in ArcGIS provides the susceptibility values for different roads. The model was validated based on the devastating flood of August 2016. The paper discusses the challenges of generalizing the methodology to other locations and to various types of flooding. The developed model can be used by transportation departments and other emergency response organizations for effective disaster management.
Keywords: deep learning, disaster management, flood prediction, urban flooding
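A minimal Keras sketch of the forecasting component is shown below; the window length, layer sizes, and dropout rate are illustrative assumptions, not the optimal architecture identified in the paper, and a synthetic series stands in for the gauge record.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

def make_windows(series, lookback=24):
    """Slice a 1-D gauge-height series into (lookback -> next value) pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return X[..., np.newaxis], series[lookback:]

# Synthetic stand-in for the gauge-height record.
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
X, y = make_windows(series)

model = Sequential([
    LSTM(32, input_shape=X.shape[1:]),
    Dropout(0.2),   # dropout also enables Monte-Carlo uncertainty estimates
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

forecast = model.predict(X[-1:], verbose=0)  # next gauge-height estimate
```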
Procedia PDF Downloads 145
1369 Energy Efficient Routing Protocol with Ad Hoc On-Demand Distance Vector for MANET
Authors: K. Thamizhmaran, Akshaya Devi Arivazhagan, M. Anitha
Abstract:
One of the most important systematic issues that must be solved when implementing a data transmission algorithm for mobile ad hoc networks (MANETs) is how to save the mobile nodes' energy while meeting the requirements of applications or users, since mobile nodes are battery-limited. While satisfying the energy-saving requirement, it is also necessary to achieve quality of service; in emergency work, in particular, data must be delivered on time. In order to achieve these requirements, we implement the proposed energy-aware routing protocol for mobile ad hoc networks, which saves energy at every node by efficiently selecting an energy-efficient path in the routing process by means of an enhanced AODV routing protocol.
Keywords: ad hoc networks, MANET, routing, AODV, EAODV
Procedia PDF Downloads 369
1368 Bipolar Impulse Noise Removal and Edge Preservation in Color Images and Video Using Improved Kuwahara Filter
Authors: Reji Thankachan, Varsha PS
Abstract:
Both image capturing devices and human visual systems are nonlinear. Hence, nonlinear filtering methods outperform their linear counterparts in many applications. Linear methods are unable to remove impulsive noise from images while preserving edges and fine details. In addition, linear algorithms are unable to remove signal-dependent or multiplicative noise from images. This paper presents an approach to denoise and smoothen images and videos corrupted by bipolar impulse noise using an improved Kuwahara filter. It involves a two-stage algorithm: noise detection followed by filtering. Numerous simulations demonstrate that the proposed method outperforms the existing method by eliminating the painting-like flattening effect along the local feature direction while preserving edges, with improvement in PSNR and MSE.
Keywords: bipolar impulse noise, Kuwahara, PSNR, MSE, PDF
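For reference, here is a compact NumPy sketch of the basic (unimproved) Kuwahara filter on a grayscale image; the two-stage noise detection and the color/video handling described in the paper are omitted, and the even-radius restriction and wrap-around borders are simplifications of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def kuwahara(image, radius=2):
    """Kuwahara filter: each output pixel takes the mean of the quadrant
    (of its square neighborhood) with the smallest variance, smoothing
    noise while keeping edges sharp. `radius` must be even here so that
    quadrant centers fall on integer pixels; borders wrap around."""
    img = image.astype(np.float64)
    size = radius + 1                      # quadrant side length
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2

    h = radius // 2
    shifts = [(h, h), (h, -h), (-h, h), (-h, -h)]  # NW, NE, SW, SE quadrants
    means = np.stack([np.roll(mean, s, axis=(0, 1)) for s in shifts])
    varis = np.stack([np.roll(var, s, axis=(0, 1)) for s in shifts])

    best = np.argmin(varis, axis=0)        # least-variance quadrant per pixel
    return np.take_along_axis(means, best[np.newaxis], axis=0)[0]

# Toy usage: a noisy step edge stays sharp after filtering.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.1 * np.random.randn(64, 64)
smoothed = kuwahara(img, radius=2)
```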
Procedia PDF Downloads 497
1367 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, by acting as a buffer between production and demand. In particular, the potential for storage is highest when it is connected close to the loads. Yet, low voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the competitive advantage of fossil fuel-based technologies regarding regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and promote its development. The present study is restricted to decentralized operation, excluding aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information – as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as PS is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter and arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks by charging the unit. Three categories of PS algorithms are introduced in detail: first, algorithms using a constant threshold or power rate for charge recovery; then algorithms using the State of Charge (SOC) as a decision variable; and finally algorithms using a load forecast (the impact of whose accuracy is discussed) to generate PS. Performance metrics were defined in order to quantitatively evaluate their operation regarding peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed regarding these metrics. The results show that constant charging thresholds or powers are far from optimal: a single value is not likely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, these depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
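The toy simulation below sketches the flavor of a threshold-plus-SOC strategy: the battery discharges to clip demand above a peak threshold and recovers charge only when demand is low and the state of charge has dropped, so that recharging does not create new peaks. All parameter values and the synthetic load profile are illustrative assumptions, not the paper's tested algorithms.

```python
import numpy as np

def peak_shave(load_kw, dt_h=1/60, capacity_kwh=8.0, p_max_kw=3.0,
               peak_kw=2.5, recover_kw=1.0, soc_low=0.4):
    """Simple decentralized peak-shaving with SOC-gated charge recovery."""
    soc = 0.5 * capacity_kwh                  # start half full (kWh)
    grid = np.empty_like(load_kw)
    for i, load in enumerate(load_kw):
        if load > peak_kw and soc > 0:        # shave: discharge into the peak
            p = min(load - peak_kw, p_max_kw, soc / dt_h)
            soc -= p * dt_h
            grid[i] = load - p
        elif load < recover_kw and soc < soc_low * capacity_kwh:
            # recover charge, but never push grid draw above the threshold
            p = min(p_max_kw, peak_kw - load, (capacity_kwh - soc) / dt_h)
            soc += p * dt_h
            grid[i] = load + p
        else:
            grid[i] = load
    return grid

# One synthetic day at 1-minute granularity with two appliance spikes.
t = np.arange(24 * 60)
load = (0.6 + 1.2 * np.exp(-((t - 8*60) / 30)**2)
            + 2.5 * np.exp(-((t - 19*60) / 40)**2))
shaved = peak_shave(load)
print("peak before: %.2f kW, after: %.2f kW" % (load.max(), shaved.max()))
```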
Procedia PDF Downloads 116
1366 Coarse-Graining in Micromagnetic Simulations of Magnetic Hyperthermia
Authors: Razyeh Behbahani, Martin L. Plumer, Ivan Saika-Voivod
Abstract:
Micromagnetic simulations based on the stochastic Landau-Lifshitz-Gilbert equation are used to calculate dynamic magnetic hysteresis loops relevant to magnetic hyperthermia applications. With the goal of effectively simulating room-temperature loops for large iron-oxide based systems at relatively slow sweep rates, on the order of 1 Oe/ns or less, a coarse-graining scheme is proposed and tested. The scheme is derived from a previously developed renormalization-group approach. Loops associated with nanorods, used as building blocks for the larger nanoparticles that were employed in preclinical trials (Dennis et al., 2009, Nanotechnology 20 395103), serve as the model test system. The scaling algorithm is shown to produce nearly identical loops over several decades in the model grain sizes. Sweep-rate scaling involving the damping constant alpha is also demonstrated.
Keywords: coarse-graining, hyperthermia, hysteresis loops, micromagnetic simulations
Procedia PDF Downloads 146
1365 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are some of the key factors of successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require the teams to exercise divergent and convergent thinking simultaneously, and there needs to be a good balance between them. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams for the group to be creative, relational conflicts (discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a semantic analysis algorithm (latent semantic analysis, LSA) to analyze the content of communication, and a semantic similarity analysis to arrive at a social network graph that depicts the communication among team members based on the content of communication. The amount of communication (content, not frequency of communication) defines the interaction strength between members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and the Dichotomized Adjacency Matrix (DAM), based on network density, yield network graphs and network metrics like centrality. The social network graphs are then rendered for visual representation using a Metric Multi-Dimensional Scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between the nodes in the placement represents the tie-strength between the members: stronger tie-strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme: the Central Member Pattern (CMP), the Subgroup and Aloof member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern clearly points to a near breakdown in communication with the member, and hence a possibly high conflict level, whereas the subgroup or aloof member pattern points to a non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated to the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection and feedback form, and the results show a good correlation.
Keywords: team dynamics, team communication, team interactions, social network analysis, SNA, new product development, latent semantic analysis, LSA, NPD teams
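The sketch below illustrates the pipeline from member emails to a network graph with scikit-learn and NetworkX: TF-IDF plus truncated SVD as the LSA step, cosine similarity as interaction strength, and a dichotomized adjacency matrix for centrality. The member names and email snippets are fabricated toy data, and the mean-weight dichotomization cutoff is an assumption rather than the paper's density-based rule.

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for per-member email text (one document per team member).
emails = {
    "alice": "design review converge prototype test plan",
    "bob":   "prototype test fixture results analysis",
    "carol": "budget schedule status meeting",
    "dave":  "design prototype analysis converge ideas",
}
names = list(emails)

# LSA: TF-IDF followed by truncated SVD into a low-rank semantic space.
X = TfidfVectorizer().fit_transform(emails.values())
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Semantic similarity of members' communication -> weighted adjacency matrix.
A = cosine_similarity(Z)
np.fill_diagonal(A, 0.0)

# Dichotomize at the mean off-diagonal weight and compute centrality.
D = (A > A.mean()).astype(int)
G = nx.from_numpy_array(D)
centrality = nx.degree_centrality(G)
print({names[i]: round(c, 2) for i, c in centrality.items()})
```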
Procedia PDF Downloads 68
1364 Ankle Fracture Management: A Unique Cross Departmental Quality Improvement Project
Authors: Langhit Kurar, Loren Charles
Abstract:
Introduction: In light of the recently published BOAST 12 guidance (August 2016) on the management of ankle fractures, this project aimed to highlight key discrepancies throughout the care trajectory, from admission to the point of discharge, at a district general hospital. A wide breadth of data covering three key domains – accident and emergency, radiology, and orthopaedic surgery – was subsequently stratified, and recommendations on note documentation and outpatient follow-up were made. Methods: A retrospective twelve-month audit was conducted reviewing the results of ankle fracture management in 37 patients. The inclusion criterion was all patients seen at the Darent Valley Hospital (DVH) emergency department with radiographic evidence of an ankle fracture. The exclusion criteria were all patients managed solely by nursing staff or who had sustained a purely ligamentous injury. Medical notes, including discharge summaries, and the PACS online radiographic tool were used for data extraction. Results: Cross-examination of the A&E domain revealed limited awareness of the recent BOAST 12 publication, including the requirement to document skin integrity and neurovascular assessment. This had direct implications, as it would have changed the surgical plan for acutely compromised patients. The majority of results obtained from the radiology domain were satisfactory, with appropriate X-rays taken in over 95% of cases. However, due to time pressures within A&E, patients were often left in a backslab without a post-manipulation X-ray. Poorly reduced fractures were subsequently left for a long period, resulting in swollen ankles and a time-dependent lag to surgical intervention. This had knock-on implications for prolonged inpatient stay, resulting in hospital-acquired co-morbidity, including pressure sores. Discussion: The audit has highlighted several areas of improvement throughout the disease trajectory, from review in the emergency department to follow-up as an outpatient. This has prompted the creation of an algorithm to ensure patients with significant fractures presenting to the emergency department are seen promptly and treatment is expedited as per recent guidance, including the timing of X-rays taken in A&E. Re-audit has shown significant improvement in both documentation at the time of presentation and appropriate follow-up strategies. Within the orthopaedic domain, we are in the process of creating an ankle fracture pathway to ensure imaging and weight-bearing status are made clear to the consulting clinicians in an outpatient setting. Significance/Clinical Relevance: As a result of the ankle fracture algorithm, we have adapted the BOAST 12 guidance to shape an intrinsic pathway that not only improves patient management within the emergency department but also creates a standardised format for follow-up.
Keywords: ankle, fracture, BOAST, radiology
Procedia PDF Downloads 178
1363 Adaptive Dehazing Using Fusion Strategy
Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha
Abstract:
The goal of haze removal algorithms is to enhance and recover details of a scene from a foggy image. For enhancement, the proposed method focuses on two main categories: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) image edge strengthening based on a gradient model. Accurate haze removal algorithms are needed in many circumstances. The de-fog feature works through a complex algorithm which first determines the fog density of the scene, then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using the fusion methodology. In order to increase accuracy, an interpolation method is used in the output reconstruction. A promising retrieval performance is achieved, especially in particular examples.
Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map
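A minimal OpenCV sketch of this style of fusion is given below: a CLAHE-enhanced input and a gradient-sharpened input are blended with per-pixel weight maps derived from local contrast. It is a single-scale stand-in for the multi-scale fusion described above, and the file name, CLAHE settings, and Laplacian-based weights are assumptions.

```python
import cv2
import numpy as np

# Read a hazy frame (path is hypothetical) as grayscale for simplicity.
hazy = cv2.imread("hazy.png", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0

# Input 1: adaptive contrast via CLAHE.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
contrast = clahe.apply((hazy * 255).astype(np.uint8)).astype(np.float64) / 255.0

# Input 2: edge-strengthened version via an unsharp mask.
blur = cv2.GaussianBlur(hazy, (0, 0), 3)
sharp = np.clip(hazy + 1.5 * (hazy - blur), 0, 1)

# Per-pixel weight maps from local contrast (Laplacian magnitude) of each input.
w1 = np.abs(cv2.Laplacian(contrast, cv2.CV_64F)) + 1e-6
w2 = np.abs(cv2.Laplacian(sharp, cv2.CV_64F)) + 1e-6

# Weighted per-pixel fusion of the two derived inputs.
fused = (w1 * contrast + w2 * sharp) / (w1 + w2)
cv2.imwrite("dehazed.png", (fused * 255).astype(np.uint8))
```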
Procedia PDF Downloads 463
1362 Recognition of Grocery Products in Images Captured by Cellular Phones
Authors: Farshideh Einsele, Hassan Foroosh
Abstract:
In this paper, we present a robust algorithm to recognize text extracted from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation, style, and illumination, and can suffer from perspective distortion. Pre-processing is performed to make the characters scale- and rotation-invariant. Since text degradations cannot be appropriately defined using well-known geometric transformations such as translation, rotation, affine transformation, and shearing, we use the whole set of character black pixels as our feature vector. Classification is performed with a minimum distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
Keywords: camera-based OCR, feature extraction, document, image processing, grocery products
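To make the classification step concrete, here is a minimal nearest-class-mean (minimum distance) classifier over flattened binary character images; the synthetic two-class data and the plain Euclidean distance are assumptions standing in for the paper's preprocessed characters and maximum likelihood criterion.

```python
import numpy as np

class MinimumDistanceClassifier:
    """Nearest-class-mean classifier over binarized character images,
    using the raw black-pixel map flattened as the feature vector."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance to each class mean; pick the closest.
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Toy data: 20x20 binary "characters" from two synthetic classes.
rng = np.random.default_rng(0)
proto = rng.random((2, 400)) > 0.5
X = np.concatenate([(proto[c] ^ (rng.random((50, 400)) > 0.9)).astype(float)
                    for c in (0, 1)])
y = np.repeat([0, 1], 50)

clf = MinimumDistanceClassifier().fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```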
Procedia PDF Downloads 405
1361 Modeling of Global Solar Radiation on a Horizontal Surface Using Artificial Neural Network: A Case Study
Authors: Laidi Maamar, Hanini Salah
Abstract:
The present work investigates the potential of an artificial neural network (ANN) model to predict the horizontal global solar radiation (HGSR). The ANN is developed and optimized using a three-year meteorological database (2011 to 2013) available at the meteorological station of Blida (Blida 1 University, Algeria; latitude 36.5°, longitude 2.81°, 163 m above mean sea level). The optimal configuration of the ANN model has been determined by minimizing the root mean square error (RMSE) and maximizing the correlation coefficient (R²) between data observed and data predicted with the ANN model. To select the best ANN architecture, we conducted several tests using different combinations of parameters. A two-layer ANN model with six hidden neurons was found to be the optimal topology, with RMSE = 4.036 W/m² and R² = 0.999. A graphical user interface (GUI) was designed, based on the best network structure and training algorithm, to enhance the user-friendliness of the model.
Keywords: artificial neural network, global solar radiation, solar energy, prediction, Algeria
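A scikit-learn sketch of such a model is shown below, mirroring the single-hidden-layer, six-neuron topology reported above; the synthetic inputs and targets are stand-ins for the Blida meteorological database, so the printed metrics are not the paper's results.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for meteorological inputs (e.g., temperature, humidity,
# sunshine duration, day of year) and measured HGSR targets.
rng = np.random.default_rng(1)
X = rng.random((1000, 4))
y = 800 * X[:, 2] + 100 * X[:, 0] + 20 * rng.standard_normal(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hidden layer with six neurons, as in the reported optimal topology.
ann = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)

pred = ann.predict(X_te)
print("RMSE: %.3f W/m^2" % np.sqrt(mean_squared_error(y_te, pred)))
print("R^2 : %.4f" % r2_score(y_te, pred))
```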
Procedia PDF Downloads 498
1360 Modelling of Multi-Agent Systems for the Scheduling of Multi-EV Charging from Power Limited Sources
Authors: Manan’Iarivo Rasolonjanahary, Chris Bingham, Nigel Schofield, Masoud Bazargan
Abstract:
This paper presents the research and application of model predictive scheduled charging of electric vehicles (EVs) subject to a limited available power resource. To focus on the algorithm and its operational characteristics, the EV interface to the source is modelled as a battery state equation during the charging operation. The researched methods allow for the priority scheduling of EV charging in a multi-vehicle regime when subject to limited source power availability. Priority attribution for each connected EV is described. The validity of the developed methodology is shown through the simulation of different scenarios of charging operation of multiple connected EVs, including non-scheduled and scheduled operation with various numbers of vehicles. The performance of the developed algorithms is also reported, with recommendations on the choice of suitable parameters.
Keywords: model predictive control, non-scheduled, power limited sources, scheduled and stop-start battery charging
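The sketch below shows one plausible flavor of priority attribution under a shared power limit: the source budget for a time step is allocated to the EVs most urgently needing energy before departure. The urgency rule, parameter values, and one-hour step are illustrative assumptions; the paper's model predictive formulation is richer than this greedy single-step allocation.

```python
import numpy as np

def schedule_step(soc, targets, departures, p_max_ev, p_available):
    """Allocate one time step of limited source power across connected EVs.
    Priority: energy still needed divided by time remaining to departure.
    Assumes a 1-hour step, so remaining kWh also caps deliverable kW."""
    need = np.maximum(targets - soc, 0.0)            # kWh still required
    urgency = need / np.maximum(departures, 1e-6)    # kWh per hour required
    allocation = np.zeros_like(soc)
    budget = p_available
    for i in np.argsort(-urgency):                   # most urgent EV first
        p = min(p_max_ev[i], budget, need[i])        # charger and source limits
        allocation[i] = p
        budget -= p
        if budget <= 0:
            break
    return allocation

# Three EVs sharing a 10 kW source: the EV leaving soonest with the most
# remaining need is served first.
soc = np.array([20.0, 35.0, 10.0])          # kWh currently in each battery
targets = np.array([40.0, 40.0, 40.0])      # desired kWh at departure
departures = np.array([1.0, 4.0, 2.0])      # hours until departure
p_max_ev = np.array([7.0, 7.0, 7.0])        # per-charger limit, kW
print(schedule_step(soc, targets, departures, p_max_ev, p_available=10.0))
```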
Procedia PDF Downloads 155
1359 Lithological Mapping and Iron Deposits Identification in El-Bahariya Depression, Western Desert, Egypt, Using Remote Sensing Data Analysis
Authors: Safaa M. Hassan, Safwat S. Gabr, Mohamed F. Sadek
Abstract:
This study addresses lithological mapping and iron oxide detection in the old mine areas of the El-Bahariya Depression, Western Desert, using ASTER and Landsat-8 remote sensing data. Four old iron ore occurrences, namely the El-Gedida, El-Haraa, Ghurabi, and Nasir mine areas, are found in the El-Bahariya area. This study aims to find new high-potential areas for iron mineralization around the El-Bahariya depression. Image processing methods such as principal component analysis (PCA) and band ratio images (b4/b5, b5/b6, b6/b7, and 4/2, 6/7, band 6) were used for lithological identification/mapping, including the iron content, in the investigated area. ASTER and Landsat-8 visible and short-wave infrared data were found to help map the ferruginous sandstones, iron oxides, and clay minerals in and around the old mine areas of the El-Bahariya depression. The Landsat-8 band ratios and the principal components of this study showed the distribution of the lithological units well, especially the ferruginous sandstones and iron zones (hematite and limonite), along with detecting probable high-potential areas for iron mineralization that can be exploited in the future, and proved the ability of Landsat-8 and ASTER data to map these features. Minimum Noise Fraction (MNF), Mixture Tuned Matched Filtering (MTMF), and pixel purity index methods, as well as the Spectral Angle Mapper classifier algorithm, successfully discriminated the hematite and limonite content within the iron zones in the study area. Various ASTER image spectra and ASD field spectra of hematite and limonite and the surrounding rocks were compared and found to be consistent in terms of the presence of absorption features in the range from 1.95 to 2.3 μm for hematite and limonite. The pixel purity index algorithm and two sub-pixel spectral methods, namely Mixture Tuned Matched Filtering (MTMF) and matched filtering (MF), were applied to ASTER bands to delineate iron oxide (hematite and limonite) rich zones within the rock units. The results were validated in the field by comparing image spectra of the spectrally anomalous zone with the USGS resampled laboratory spectra of hematite and limonite samples using ASD measurements. A number of iron oxide rich zones, in addition to the main surface exposures of the El-Gadidah Mine, were confirmed in the field. The proposed method is a successful application of spectral mapping of iron oxide deposits in the exposed rock units (i.e., ferruginous sandstone), and the present approach of ASTER and ASD hyperspectral data processing can be used to delineate iron-rich zones occurring within similar geological provinces in any part of the world.
Keywords: Landsat-8, ASTER, lithological mapping, iron exploration, western desert
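A small NumPy sketch of the band-ratio step is given below, using the Landsat-8 ratios mentioned above (4/2 for iron oxides, 6/7 for clays); the synthetic band arrays and the percentile cutoff are assumptions, standing in for calibrated reflectance rasters and field-validated thresholds.

```python
import numpy as np

def band_ratio(num, den, eps=1e-6):
    """Per-pixel band ratio; highlights materials with diagnostic
    reflectance differences between the two bands."""
    return num.astype(np.float64) / (den.astype(np.float64) + eps)

# Synthetic stand-ins for Landsat-8 bands (in practice, read from GeoTIFFs).
rng = np.random.default_rng(0)
b2, b4, b6, b7 = (rng.random((100, 100)) for _ in range(4))

# Ratios used for iron-bearing units: 4/2 (iron oxides) and 6/7 (clays).
iron_oxide = band_ratio(b4, b2)
clays = band_ratio(b6, b7)

# Flag probable iron-rich pixels as those in the top 5% of the 4/2 ratio.
threshold = np.percentile(iron_oxide, 95)
iron_mask = iron_oxide > threshold
print("candidate iron-oxide pixels:", int(iron_mask.sum()))
```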
Procedia PDF Downloads 142
1358 Persistent Homology of Convection Cycles in Network Flows
Authors: Minh Quang Le, Dane Taylor
Abstract:
Convection is a well-studied topic in fluid dynamics, yet it is less understood in the context of network flows. Here, we incorporate techniques from topological data analysis (namely, persistent homology) to automate the detection and characterization of convective/cyclic/chiral flows over networks, particularly those that arise for irreversible Markov chains (MCs). As two applications, we study convection cycles arising under the PageRank algorithm, and we investigate chiral edge flows for a stochastic model of a bi-monomer's configuration dynamics. Our experiments highlight how system parameters (e.g., the teleportation rate for PageRank and the transition rates of external and internal state changes for a monomer) can act as homology regularizers of convection, which we summarize with persistence barcodes and homological bifurcation diagrams. Our approach establishes a new connection between the study of convection cycles and homology, the branch of mathematics that formally studies cycles, which has diverse potential applications throughout the sciences and engineering.
Keywords: homology, persistent homology, Markov chains, convection cycles, filtration
Procedia PDF Downloads 135
1357 A Model Based Metaheuristic for Hybrid Hierarchical Community Structure in Social Networks
Authors: Radhia Toujani, Jalel Akaichi
Abstract:
In recent years, the study of community detection in social networks has received great attention. The hierarchical structure of the network leads to convergence to a locally optimal community structure. In this paper, we aim to avoid this local optimum with the introduced hybrid hierarchical method. To achieve this purpose, we present an objective function that incorporates a modularity value based on structural and semantic similarity, and a metaheuristic, namely a bee colony algorithm, to optimize our objective function at both the divisive and agglomerative hierarchical levels. In order to assess the efficiency and accuracy of the introduced hybrid bee colony model, we perform an extensive experimental evaluation on both synthetic and real networks.
Keywords: social network, community detection, agglomerative hierarchical clustering, divisive hierarchical clustering, similarity, modularity, metaheuristic, bee colony
Procedia PDF Downloads 378
1356 Prediction of Solidification Behavior of Al Alloy in a Cube Mold Cavity
Authors: N. P. Yadav, Deepti Verma
Abstract:
This paper focuses on mathematical modeling of the solidification of an Al alloy in a cube mould cavity to study the solidification behavior of the casting process. A parametric investigation of the solidification process inside the cavity was performed using a computational solidification/melting model coupled with a volume of fluid (VOF) model. An implicit filling algorithm is used in this study to understand the overall process, from the filling stage through solidification, in a model metal casting process. The model is validated against past studies under the same conditions. The solidification process is analyzed by including the effects of the pouring velocity and temperature of the liquid metal, the wall temperature and natural convection from the wall, and the geometry of the cavity. These studies show the possibility of various defects arising during the solidification process.
Keywords: buoyancy driven flow, natural convection driven flow, residual flow, secondary flow, volume of fluid
Procedia PDF Downloads 416
1355 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using artificial intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. Beyond the sustainability challenge, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector to be critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
Keywords: deep learning, risk assessment, neuro-fuzzy, pipelines
Procedia PDF Downloads 291
1354 Message Passing Neural Network (MPNN) Approach to Multiphase Diffusion in Reservoirs for Well Interconnection Assessments
Authors: Margarita Mayoral-Villa, J. Klapp, L. Di G. Sigalotti, J. E. V. Guzmán
Abstract:
Automated learning techniques are widely applied in the energy sector to address challenging problems from a practical point of view. To this end, we discuss the implementation of a message passing algorithm (MPNN) within a graph neural network (GNN) to leverage the neighborhood of a set of nodes during the aggregation process. This approach enables the characterization of multiphase diffusion processes in the reservoir, such that the flow paths underlying the interconnections between multiple wells may be inferred from previously available data on flow rates and bottomhole pressures. The results thus obtained compare favorably with the predictions produced by reduced-order capacitance-resistance models (CRM) and suggest the potential of MPNNs to enhance the robustness of the forecasts while improving computational efficiency.
Keywords: multiphase diffusion, message passing neural network, well interconnection, interwell connectivity, graph neural network, capacitance-resistance models
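To make the aggregation step concrete, here is a bare-bones NumPy sketch of one message-passing round on a toy well graph: each node mean-pools its neighbors' features and applies a linear map with a nonlinearity. The graph, features, and untrained weights are fabricated illustrations, not the paper's architecture or data.

```python
import numpy as np

def message_passing_layer(H, A, W):
    """One round of message passing: each node aggregates (mean-pools) its
    neighbors' features, then applies a learned linear map and a ReLU."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)   # avoid divide-by-zero
    messages = (A @ H) / deg                         # mean over neighbors
    return np.maximum(0.0, messages @ W)             # update with ReLU

# Toy well graph: 4 wells, edges where interconnectivity is suspected.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Node features: e.g., (flow rate, bottomhole pressure) per well.
H = np.array([[120.0, 3200.0],
              [ 95.0, 3100.0],
              [140.0, 2900.0],
              [ 60.0, 3400.0]])

rng = np.random.default_rng(0)
H1 = message_passing_layer(H, A, rng.standard_normal((2, 4)) * 0.01)
H2 = message_passing_layer(H1, A, rng.standard_normal((4, 4)) * 0.1)
print(H2.shape)  # (4, 4): one embedding per well after two hops
```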
Procedia PDF Downloads 147
1353 Location Detection of Vehicular Accident Using Global Navigation Satellite Systems/Inertial Measurement Units Navigator
Authors: Neda Navidi, Rene Jr. Landry
Abstract:
Vehicle tracking and accident recognition are of interest to many industries, such as insurance and vehicle rental companies. The main goal of this paper is to detect the location of a car accident by combining different methods. The methods considered in this paper are Global Navigation Satellite Systems/Inertial Measurement Units (GNSS/IMU)-based navigation and vehicle accident detection algorithms. They operate on a set of raw measurements obtained from a designed integrator black box using GNSS and inertial sensors. Another concern of this paper is the definition of an accident detection algorithm based on jerk, used to identify the position of the accident. In fact, the results convinced us that, even in GNSS blockage areas, the position of the accident can be detected by GNSS/INS integration with a 50% improvement compared to GNSS standalone.
Keywords: driver behavior monitoring, integration, IMU, GNSS, monitoring, tracking
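The core of a jerk-based detector is a threshold on the time derivative of acceleration; the sketch below shows this on a synthetic accelerometer trace. The sample rate, jerk threshold, and impact profile are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_accident(accel, fs, jerk_threshold=400.0):
    """Flag samples whose jerk (time derivative of acceleration, m/s^3)
    exceeds a collision-scale threshold; the threshold value here is an
    illustrative assumption."""
    jerk = np.gradient(accel, 1.0 / fs)
    hits = np.flatnonzero(np.abs(jerk) > jerk_threshold)
    return hits / fs                          # event times in seconds

# Synthetic 100 Hz accelerometer trace: gentle driving plus a sharp impact.
fs = 100
t = np.arange(0, 10, 1 / fs)
accel = 0.3 * np.sin(0.5 * t)                 # normal maneuvering
accel[520:523] += np.array([30.0, -25.0, 10.0])   # impact spike at t ~ 5.2 s

events = detect_accident(accel, fs)
if events.size:
    # The event time indexes into the GNSS/INS position log to geolocate it.
    print("possible accident at t = %.2f s" % events[0])
```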
Procedia PDF Downloads 232
1352 Packaging in the Design Synthesis of Novel Aircraft Configuration
Authors: Paul Okonkwo, Howard Smith
Abstract:
A study was conducted to estimate the size of the cabin and major aircraft components, as well as to detect and avoid interference between internally placed components and the external surface, during the conceptual design synthesis and optimisation exploring the design space of a blended wing body (BWB). Sizing of components follows the Bradley cabin sizing and rubber engine scaling procedures to size the cabin and engine, respectively. The interference detection and avoidance algorithm relies on the ability of the Class Shape Transformation (CST) parameterisation technique to generate polynomial functions of the surfaces of a BWB aircraft configuration from the sizes of the cabin and internal objects using few variables. Interference detection is essential in the packaging of a non-conventional configuration like the BWB because of the non-uniform airfoil-shaped sections and the resulting varying internal space. The unique configuration increases the need for a methodology to prevent objects from being placed in locations that do not sufficiently enclose them within the geometry.
Keywords: packaging, optimisation, BWB, parameterisation, aircraft conceptual design
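The sketch below shows the standard CST form (a class function times a Bernstein-polynomial shape function) and a naive point-in-section check of the kind an interference test builds on; the shape coefficients, exponents, and the test point are illustrative assumptions, and a real BWB check would compare full 3-D component envelopes against the surface.

```python
import numpy as np
from math import comb

def cst(psi, coeffs, n1=0.5, n2=1.0, dz_te=0.0):
    """Class Shape Transformation: class function psi^n1 * (1-psi)^n2
    times a Bernstein-polynomial shape function. psi is the chordwise
    position in [0, 1]; coeffs are the Bernstein weights."""
    n = len(coeffs) - 1
    cls = psi ** n1 * (1.0 - psi) ** n2
    shape = sum(c * comb(n, i) * psi ** i * (1.0 - psi) ** (n - i)
                for i, c in enumerate(coeffs))
    return cls * shape + psi * dz_te   # optional trailing-edge thickness

# Upper surface of an airfoil-like section with 5 shape coefficients.
coeffs = [0.17, 0.16, 0.15, 0.14, 0.12]
psi = np.linspace(0.0, 1.0, 101)
upper = cst(psi, coeffs)

# Interference check: does a cabin corner point poke through the surface?
x_c, z_c = 0.35, 0.05                  # hypothetical internal point
z_surf = cst(np.array([x_c]), coeffs)[0]
print("enclosed" if z_c < z_surf else "interference with upper surface")
```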
Procedia PDF Downloads 462
1351 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics
Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca
Abstract:
The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances, and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used; 108 of them were authentic honey, and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40 °C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70 °Brix) with distilled water. Adulterant solutions were also adjusted to 70 °Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pre-treatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface was used, under MATLAB version 5.3, to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining Potential Functions (PF) with Partial Least Squares Linear Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the validation samples were then studied with the calibrated model. The calibrated model combining potential function methods and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved on the training samples. By the use of Potential Functions (PF) and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey, with high sensitivity and power of discrimination.
Keywords: adulteration, multivariate analysis, potential functions, regression
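The SNV pre-treatment and PLS-DA classification steps can be sketched in a few lines with scikit-learn, as below; the synthetic spectra, the injected adulteration signature, and the number of latent variables are assumptions standing in for the real NIR data and the optimized model (the potential functions step is omitted).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Synthetic stand-in spectra: 200 samples x 500 wavenumbers, with adulterated
# samples carrying a small systematic band shift.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500)) * 0.05 + np.linspace(0, 1, 500)
y = np.repeat([0, 1], 100)                 # 0 = authentic, 1 = HFCS-adulterated
X[y == 1, 200:250] += 0.1                  # adulteration signature

Xs = snv(X)

# PLS-DA: PLS regression onto the 0/1 class label, thresholded at 0.5.
pls = PLSRegression(n_components=4)
pls.fit(Xs, y.astype(float))
pred = (pls.predict(Xs).ravel() > 0.5).astype(int)
print("correct classification rate: %.1f%%" % (100 * (pred == y).mean()))
```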
Procedia PDF Downloads 124
1350 Settlement Prediction for Tehran Subway Line-3 via FLAC3D and ANFIS
Authors: S. A. Naeini, A. Khalili
Abstract:
Nowadays, tunnels with different applications are being developed, most of them subway tunnels. The excavation of shallow tunnels that pass under municipal utilities is very important, and surface settlement control is an important factor in the design. This study sought to analyze the settlement and to find an appropriate model to predict the behavior of the tunnel on Tehran subway line-3. The displacement in these sections is determined using numerical analyses and numerical modeling. In addition, the Adaptive Neuro-Fuzzy Inference System (ANFIS) method is utilized with a hybrid training algorithm. The database pertinent to the optimum network was obtained from 46 subway tunnels in Iran and Turkey, which were constructed by the new Austrian tunneling method (NATM) with similar parameters based on their soil type. The surface settlement was measured, and the acquired results were compared to the predicted values. The results disclosed that computational intelligence is a good substitute for numerical modeling.
Keywords: settlement, subway line, FLAC3D, ANFIS method
Procedia PDF Downloads 231
1349 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption
Authors: Waziri Victor Onomza, John K. Alhassan, Idris Ismaila, Noel Dogonyaro Moses
Abstract:
This paper describes the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data; it thereby meets the aspiration of a computational encryption algorithmic model that can enhance the security of big data in terms of privacy, confidentiality, and availability for users. The cryptographic model applied to the computational processing of the encrypted data is the fully homomorphic encryption scheme. We contribute theoretical presentations of high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in cloud computing, with detailed theoretical mathematical concepts for the fully homomorphic encryption models. This contribution enhances the full implementation of big-data-analytics-based cryptographic security algorithms.
Keywords: big data analytics, security, privacy, bootstrapping, homomorphic, homomorphic encryption scheme
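To make the idea of computing on ciphertexts concrete, the sketch below uses the Paillier cryptosystem via the python-paillier (`phe`) package. Note that Paillier is only additively homomorphic, not fully homomorphic; it is a stand-in to illustrate the principle the paper builds on, not the FHE scheme itself.

```python
from phe import paillier  # pip install phe (python-paillier)

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# A cloud service receives only ciphertexts of the analytics inputs.
readings = [12.5, 7.25, 10.0]
enc = [public_key.encrypt(x) for x in readings]

# Homomorphic property: sums and scalar products happen on encrypted data,
# so the cloud never sees the plaintext values.
enc_total = enc[0] + enc[1] + enc[2]
enc_scaled = enc_total * 0.5

# Only the data owner, holding the private key, can decrypt the results.
print(private_key.decrypt(enc_total))    # 29.75
print(private_key.decrypt(enc_scaled))   # 14.875
```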
Procedia PDF Downloads 376
1348 Ensemble-Based SVM Classification Approach for miRNA Prediction
Authors: Sondos M. Hammad, Sherin M. ElGokhy, Mahmoud M. Fahmy, Elsayed A. Sallam
Abstract:
In this paper, an ensemble-based Support Vector Machine (SVM) classification approach is proposed for miRNA prediction. Three problems commonly associated with previous approaches are alleviated. These problems arise from imposing assumptions on the secondary structure of pre-miRNA, from the imbalance between the numbers of laboratory-verified miRNAs and pseudo-hairpins, and from using a training data set that does not cover the variety of samples in different species. We aggregate the predicted outputs of three well-known SVM classifiers, namely Triplet-SVM, Virgo, and Mirident, weighted by their variant features, without any structural assumptions. An additional SVM layer is used to aggregate the final output. The proposed approach is trained and then tested with balanced data sets. The results of the proposed approach outperform the three base classifiers. Improved metric values of 88.88% F-score, 92.73% accuracy, 90.64% precision, 96.64% specificity, and 87.2% sensitivity are achieved, and the area under the ROC curve is 0.91.
Keywords: miRNAs, SVM classification, ensemble algorithm, assumption problem, imbalanced data
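The stacking structure (several base SVMs aggregated by a further SVM layer) can be sketched with scikit-learn as below; the three kernels merely stand in for Triplet-SVM, Virgo, and Mirident, and the synthetic balanced data set stands in for the miRNA/pseudo-hairpin feature vectors.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.svm import SVC

# Synthetic balanced stand-in for miRNA vs pseudo-hairpin feature vectors.
X, y = make_classification(n_samples=600, n_features=30,
                           weights=[0.5, 0.5], random_state=0)

# Three base SVMs stand in for the published classifiers; an additional
# SVM aggregates their probability outputs, mirroring the extra SVM layer.
ensemble = StackingClassifier(
    estimators=[
        ("svm_rbf", SVC(kernel="rbf", probability=True)),
        ("svm_poly", SVC(kernel="poly", degree=3, probability=True)),
        ("svm_lin", SVC(kernel="linear", probability=True)),
    ],
    final_estimator=SVC(kernel="rbf"),
    stack_method="predict_proba",
)
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```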
Procedia PDF Downloads 347
1347 Arabic Light Stemmer for Better Search Accuracy
Authors: Sahar Khedr, Dina Sayed, Ayman Hanafy
Abstract:
Arabic is one of the most ancient and critical languages in the world. It has more than 250 million native speakers, and more than twenty countries have Arabic as one of their official languages. In the past decade, we have witnessed a rapid evolution in smart devices, social networks, and the technology sector, which has led to the need for tools and libraries that properly handle the Arabic language in different domains. Stemming is one of the most crucial linguistic fundamentals. It is used in many applications, especially in information extraction and text mining. The motivation behind this work is to enhance the Arabic light stemmer to serve the data mining industry and to make it available to the open source community. The presented implementation enhances the Arabic light stemmer by utilizing and extending an algorithm that provides a new set of rules and patterns, accompanied by an adjusted procedure. This study has proven a significant enhancement for better search accuracy, with an average 10% improvement in comparison with previous works.
Keywords: Arabic data mining, Arabic information extraction, Arabic light stemmer, Arabic stemmer
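For readers unfamiliar with light stemming, the sketch below strips a single common prefix and suffix, in the spirit of light stemmers such as Light10. The affix lists and minimum-stem rule here are simplified illustrations from general knowledge of the family of algorithms, not the paper's enhanced rule set, which is more extensive.

```python
# -*- coding: utf-8 -*-

# Common prefixes/suffixes used by Arabic light stemmers (simplified lists).
PREFIXES = ["ال", "وال", "بال", "كال", "فال", "لل", "و"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "يه", "ية", "ه", "ة", "ي"]

def light_stem(word, min_stem=2):
    """Strip at most one matching prefix and one matching suffix, trying the
    longest affixes first, while keeping at least `min_stem` characters."""
    for p in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            word = word[len(p):]
            break
    for s in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            word = word[:-len(s)]
            break
    return word

# Example: "المكتبات" (the libraries) -> strips "ال" and "ات".
print(light_stem("المكتبات"))  # -> مكتب
```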
Procedia PDF Downloads 306
1346 Artificial Steady-State-Based Nonlinear MPC for Wheeled Mobile Robot
Authors: M. H. Korayem, Sh. Ameri, N. Yousefi Lademakhi
Abstract:
To ensure the stability of closed-loop nonlinear model predictive control (NMPC) over a finite horizon, appropriately designed terminal ingredients are needed, which can be a time-consuming and challenging effort. Otherwise, in order to ensure the stability of the control system, it is necessary to consider an infinite prediction horizon; increasing the prediction horizon increases the computational demand and slows down the implementation of the method. In this study, a new technique is proposed to ensure system stability without terminal ingredients. This technique has been employed in the design of the NMPC algorithm, leading to a reduction in the complexity of designing terminal ingredients and in the computational burden. The studied system is a wheeled mobile robot (WMR) subject to non-holonomic constraints. Simulations have been investigated for two problems: trajectory tracking and adjustment mode.
Keywords: wheeled mobile robot, nonlinear model predictive control, stability, without terminal ingredients
Procedia PDF Downloads 89
1345 Multiscale Modelization of Multilayered Bi-Dimensional Soils
Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R Bennaceur
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows the assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes, with roughness behavior characterized by statistical parameters like the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor, due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each having its own spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the other is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles, using the bi-dimensional wavelet transform and the Mallat algorithm, to describe natural surfaces more correctly. We characterize the soil surface and sub-surface by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, thickness, a multiscale bi-dimensional surface roughness model (obtained using the wavelet transform and the Mallat algorithm), and volume scattering parameters. The lower layer is divided into three fictive layers separated by an assumed plane interface; these three layers are modeled by an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We have adopted a 2D multiscale three-layer small perturbation model (SPM) including, firstly, air pockets in the soil sub-structure, and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on the multiscale roughness and the new soil moisture has been performed. Later, we propose to change the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we propose to study the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets
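As a small illustration of the multiscale surface description, the sketch below decomposes a synthetic 2-D height map with the fast 2-D wavelet transform (Mallat's algorithm, via PyWavelets) and reports the detail energy per scale, a simple descriptor of multiscale roughness; the surface, wavelet choice, and level count are assumptions for demonstration.

```python
import numpy as np
import pywt

# Synthetic rough-surface height map standing in for a measured 2-D profile.
rng = np.random.default_rng(0)
surface = rng.standard_normal((128, 128)).cumsum(0).cumsum(1)

# Mallat's fast 2-D wavelet decomposition: one approximation band plus
# (horizontal, vertical, diagonal) detail bands per scale.
coeffs = pywt.wavedec2(surface, wavelet="db2", level=3)
approx, details = coeffs[0], coeffs[1:]

# Per-scale detail energy as a crude multiscale roughness descriptor.
for lvl, (cH, cV, cD) in enumerate(details, start=1):
    energy = sum((c ** 2).sum() for c in (cH, cV, cD))
    print(f"scale {lvl}: detail energy = {energy:.3e}")
```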
Procedia PDF Downloads 122
1344 A Survey on Lossless Compression of Bayer Color Filter Array Images
Authors: Alina Trifan, António J. R. Neves
Abstract:
Although most digital cameras acquire images in a raw format, based on a color filter array (CFA) that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors, which increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green, and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in prediction-based methods.
Keywords: Bayer image, CFA, lossless compression, image coding standards
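The channel-splitting pre-processing step is easy to demonstrate; the sketch below compares the zlib-compressed size of an RGGB mosaic against its four separated planes. The synthetic frame is an assumption standing in for camera raw data, and zlib is only a generic stand-in for the lossless coders surveyed (the gains reported above apply to prediction-based coders and will differ for other compressors).

```python
import zlib
import numpy as np

# Synthetic raw frame with an RGGB Bayer mosaic: a smooth scene whose
# channels differ by a gain, so neighboring pixels alternate in level.
y, x = np.mgrid[0:256, 0:256]
raw = (4 * x + 2 * y).astype(np.uint16)
raw[0::2, 0::2] += 300          # red sites
raw[1::2, 1::2] += 120          # blue sites

def zsize(arrays):
    """Total zlib-compressed size in bytes across the given planes."""
    return sum(len(zlib.compress(a.tobytes(), 9)) for a in arrays)

# Baseline: compress the mosaic as one image, color transitions included.
whole = zsize([raw])

# Pre-processing: split into R, G1, G2, B planes so that each compressed
# stream is free of cross-channel transitions.
planes = [raw[0::2, 0::2], raw[0::2, 1::2], raw[1::2, 0::2], raw[1::2, 1::2]]
split = zsize(planes)

print(f"whole mosaic: {whole} bytes, channel-split: {split} bytes")
```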
Procedia PDF Downloads 319
1343 Determining Earthquake Performances of Existing Reinforced Concrete Buildings by Using ANN
Authors: Musa H. Arslan, Murat Ceylan, Tayfun Koyuncu
Abstract:
In this study, an artificial-intelligence-based (ANN-based) analytical method has been developed for analyzing the earthquake performance of reinforced concrete (RC) buildings. 66 RC buildings with four to ten storeys were subjected to performance analysis according to parameters covering the existing material, loading, and geometrical characteristics of the buildings. The selected parameters are thought to be effective in the performance of RC buildings. In the performance analysis stage of the study, the level of performance these buildings could show in case of an earthquake was determined on the basis of the four-grade performance levels specified in the Turkish Earthquake Code 2007 (TEC-2007). After obtaining the four-grade performance levels, the 23 selected parameters of each building were matched with the corresponding performance level. In this stage, the ANN-based fast evaluation algorithm mentioned above made an economic and rapid evaluation of the four- to ten-storey RC buildings. According to the study, the prediction accuracy of the ANN was found to be about 74%.
Keywords: artificial intelligence, earthquake, performance, reinforced concrete
Procedia PDF Downloads 461