Search results for: block matching algorithm
430 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming.
Procedia PDF Downloads 10
429 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high efficiency compact hydraulic propeller turbine for low head. For the preliminary design of the machine, classic design criteria, based on the use of statistical correlations for the definition of the fundamental geometric parameters and the blade shapes are used. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, since from the initial steps, on the correct conformation of the meridional channel and on the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using a CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial approach that solves the turbulent Navier Stokes equations (RANS) by exploiting the axial-symmetric geometry of the machine. The geometries generated within the database are therefore calculated in order to determine the corresponding overall performance. In order to speed up the optimization calculation, an artificial neural network (ANN) based on the use of an objective function is employed. The procedure was applied for the specific case of a propeller turbine with an innovative design of a modular type, specific for applications characterized by very low heads. The procedure is tested in order to verify its validity and the ability to automatically obtain the targeted net head and the maximum for the total to total internal efficiency.Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
Procedia PDF Downloads 149
428 Multi-Objective Multi-Period Allocation of Temporary Earthquake Disaster Response Facilities with Multi-Commodities
Authors: Abolghasem Yousefi-Babadi, Ali Bozorgi-Amiri, Aida Kazempour, Reza Tavakkoli-Moghaddam, Maryam Irani
Abstract:
All over the world, natural disasters (e.g., earthquakes, floods, volcanoes and hurricanes) cause a great number of deaths. Earthquakes in particular are catastrophic events, triggered by unusual phenomena, that lead to heavy losses around the world; such disasters demand substantial long-term help and relief, which can be hard to manage. Providing supplies and facilities is a major challenge after any earthquake: they must be prepared for the disaster regions in order to satisfy the demands of the people suffering from the earthquake. This paper proposes a disaster response facility allocation problem for disaster relief operations as a mathematical programming model. Earthquake victims need not only consumable commodities (e.g., food and water) but also non-consumable commodities (e.g., clothes) to protect themselves. Therefore, paying attention to disaster points and to people's demands is essential. To deal with this objective, both consumable and non-consumable commodities are considered in the presented model. The paper presents a multi-objective multi-period mathematical programming model that simultaneously minimizes the average of the weighted response times and the total operational cost together with the penalty costs of unmet demand and unused commodities. Furthermore, a Chebycheff multi-objective solution procedure is applied as a powerful algorithm to solve the proposed model. Finally, to illustrate the model's applicability, a case study of the Tehran earthquake is presented, and a sensitivity analysis is carried out to validate the model. Keywords: facility location, multi-objective model, disaster response, commodity
Procedia PDF Downloads 257
427 Effect of Thermal Treatment on Mechanical Properties of Reduced Activation Ferritic/Martensitic Eurofer Steel Grade
Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma
Abstract:
Reduced activation ferritic/martensitic (RAFM) steels like EUROFER97 are primary candidate structural materials for first wall application in the future demonstration (DEMO) fusion reactor. Existing steels of this type obtain their functional properties by a two-stage heat treatment, which consists of an annealing stage at 980°C for thirty minutes followed by quenching and an additional tempering stage at 750°C for two hours. This thermal quench and temper (Q&T) treatment creates a microstructure of tempered martensite with, as main precipitates, M23C6 carbides, with M = Fe, Cr and carbonitrides of MX type, e.g. TaC and VN. The resulting microstructure determines the mechanical properties of the steel. The ductility is largely determined by the tempered martensite matrix, while the resistance to mechanical degradation, determined by the spatial and size distribution of precipitates and the martensite crystals, plays a key role in the high temperature properties of the steel. Unfortunately, the high temperature response of EUROFER97 is currently insufficient for long term use in fusion reactors, due to instability of the matrix phase and coarsening of the precipitates at prolonged high temperature exposure. The objective of this study is to induce grain refinement by appropriate modifications of the processing route in order to increase the high temperature strength of a lab-cast EUROFER RAFM steel grade. The goal of the work is to obtain improved mechanical behavior at elevated temperatures with respect to conventionally heat treated EUROFER97. A dilatometric study was conducted to study the effect of the annealing temperature on the mechanical properties after a Q&T treatment. The microstructural features were investigated with scanning electron microscopy (SEM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM). Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the mechanical properties of the furnace-heated lab-cast EUROFER RAFM steel grade. A significant prior austenite grain (PAG) refinement was obtained by lowering the annealing temperature of the conventionally used Q&T treatment for EUROFER97. The reduction of the PAG results in finer martensitic constituents upon quenching, which offers more nucleation sites for carbide and carbonitride formation upon tempering. The ductile-to-brittle transition temperature (DBTT) was found to decrease with decreasing martensitic block size. Additionally, an increased resistance against high temperature degradation was accomplished in the fine grained martensitic materials with smallest precipitates obtained by tailoring the annealing temperature of the Q&T treatment. It is concluded that the microstructural refinement has a pronounced effect on the DBTT without significant loss of strength and ductility. Further investigation into the optimization of the processing route is recommended to improve the mechanical behavior of RAFM steels at elevated temperatures.Keywords: ductile-to-brittle transition temperature (DBTT), EUROFER, reduced activation ferritic/martensitic (RAFM) steels, thermal treatments
Procedia PDF Downloads 299
426 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically-principled, tensor decomposition procedure to modulate the number of hypercolumn features and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves the state of the art in segmentation fidelity with high resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines. Keywords: computer vision, object segmentation, interactive segmentation, model compression
Procedia PDF Downloads 119
425 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training
Authors: Biki Sarmah, Priyanko Raj Mudiar
Abstract:
In the modern world of remotely interactive virtual reality-based learning and teaching, including professional skill-building training and acquisition practices, as well as data acquisition and robotic systems, the revolutionary application or implementation of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques into 3D holographic audio-visual platforms have been a coveted dream of many scholars, professionals, scientists, and students. Integration of 'kinaesthetic vibrotactile haptic perception' along with an actuated step magnitude contact profiloscopy in augmented reality-based learning platforms and professional training can be implemented by using an extremely calculated and well-coordinated image telemetry including remote data mining and control technique. A real-time, computer-aided (PLC-SCADA) field calibration based algorithm must be designed for the purpose. But most importantly, in order to actually realise, as well as to 'interact' with some 3D holographic models displayed over a remote screen using remote laser image telemetry and control, all spatio-physical parameters like cardinal alignment, gyroscopic compensation, as well as surface profile and thermal compositions, must be implemented using zero-order type 1 actuators (or transducers) because they provide zero hystereses, zero backlashes, low deadtime as well as providing a linear, absolutely controllable, intrinsically observable and smooth performance with the least amount of error compensation while ensuring the best ergonomic comfort ever possible for the users.Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator
Procedia PDF Downloads 164
424 Experimental Investigation of Beams Having Spring Mass Resonators
Authors: Somya R. Patro, Arnab Banerjee, G. V. Ramana
Abstract:
A flexural beam carrying elastically mounted concentrated masses, such as engines, motors, oscillators, or vibration absorbers, is often encountered in mechanical, civil, and aeronautical engineering domains. To prevent resonance conditions, the designers must predict the natural frequencies of such a constrained beam system. This paper investigates experimental and analytical studies on vibration suppression in a cantilever beam with a tip mass with the help of spring-mass to achieve local resonance conditions. The system consists of a 3D printed polylactic acid (PLA) beam screwed at the base plate of the shaker system. The top of the free end is connected by an accelerometer which also acts as a tip mass. A spring and a mass are attached at the bottom to replicate the mechanism of the spring-mass resonator. The Fast Fourier Transform (FFT) algorithm converts time acceleration plots into frequency amplitude plots from which transmittance is calculated as a function of the excitation frequency. The mathematical formulation is based on the transfer matrix method, and the governing differential equations are based on Euler Bernoulli's beam theory. The experimental results are successfully validated with the analytical results, providing us essential confidence in our proposed methodology. The beam spring-mass system is then converted to an equivalent two-degree of freedom system, from which frequency response function is obtained. The H2 optimization technique is also used to obtain the closed-form expression of optimum spring stiffness, which shows the influence of spring stiffness on the system's natural frequency and vibration response.Keywords: euler bernoulli beam theory, fast fourier transform, natural frequencies, polylactic acid, transmittance, vibration absorbers
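As a rough illustration of the transmittance calculation described in this abstract, the short Python sketch below converts time-domain base and tip accelerations to the frequency domain with an FFT and takes their magnitude ratio as a function of excitation frequency. The sampling rate and synthetic signals are placeholders invented for the example, not data from the study.

```python
import numpy as np

def transmittance(base_acc, tip_acc, fs):
    """Estimate transmittance |A_tip(f)| / |A_base(f)| from time-domain accelerations."""
    n = len(base_acc)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    base_spec = np.abs(np.fft.rfft(base_acc))
    tip_spec = np.abs(np.fft.rfft(tip_acc))
    # Avoid division by zero at frequencies where the base spectrum is negligible
    ratio = np.divide(tip_spec, base_spec,
                      out=np.zeros_like(tip_spec), where=base_spec > 1e-12)
    return freqs, ratio

# Hypothetical usage with synthetic signals sampled at 1 kHz
fs = 1000.0
t = np.arange(0, 5.0, 1.0 / fs)
base = np.sin(2 * np.pi * 20 * t)               # excitation at 20 Hz
tip = 0.3 * np.sin(2 * np.pi * 20 * t + 0.5)    # attenuated, phase-shifted response
freqs, tr = transmittance(base, tip, fs)
print(freqs[np.argmax(tr[1:]) + 1], tr.max())   # frequency of peak transmittance
```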
Procedia PDF Downloads 102
423 Monitoring of Cannabis Cultivation with High-Resolution Images
Authors: Levent Basayigit, Sinan Demir, Burhan Kara, Yusuf Ucar
Abstract:
Cannabis is mostly used for drug production. In some countries, an excessive amount of illegal cannabis is cultivated and sold. Most illegal cannabis cultivation occurs on lands far from settlements. In farmlands, it is cultivated together with other crops: cannabis is surrounded by tall plants like corn and sunflower, or grown with tall crops as a mixed culture. The common method for determining illegal cultivation areas is to investigate information obtained from people, but this method is not sufficient for detecting illegal cultivation in remote areas. For this reason, more effective methods are needed. Remote sensing is one of the most important technologies for monitoring plant growth on the land. The aim of this study is to monitor cannabis cultivation areas using satellite imagery, and its main purpose was to develop an applicable method for monitoring cannabis cultivation. For this purpose, cannabis was grown in plots either alone or surrounded by corn and sunflower. The morphological characteristics of cannabis were recorded twice per month during the vegetation period. A spectral signature library was created with a spectroradiometer. The parcels were monitored with high-resolution satellite imagery, and by processing the satellite imagery, the cultivation areas of cannabis were classified. To separate the cannabis plots from the other plants, the multiresolution segmentation algorithm was found to be the most successful for classification. The WorldView Improved Vegetative Index (WV-VI) classification was the most accurate method for monitoring plant density. As a result, an object-based classification method and vegetation indices were sufficient for monitoring cannabis cultivation in multi-temporal WorldView images. Keywords: Cannabis, drug, remote sensing, object-based classification
Procedia PDF Downloads 270
422 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser
Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua
Abstract:
In the present scenario of energy crises, energy conservation in the electrical machines is very important in the industries. In order to conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ. The instruments available for this purpose are very meager and very expensive. This paper deals with the design and development of induction motor performance analyser on-line, on-site, and in-situ. The system measures only few electrical input parameters like input voltage, line current, power factor, frequency, powers, and motor shaft speed. These measured data are coupled to name plate details and compute the operating efficiency of induction motor. This system employs the method of computing motor losses with the help of equivalent circuit parameters. The equivalent circuit parameters of the concerned motor are estimated using the developed algorithm at any load conditions and stored in the system memory. The developed instrument is a reliable, accurate, compact, rugged, and cost-effective one. This portable instrument could be used as a handy tool to study the performance of both slip ring and cage induction motors. During the analysis, the data can be stored in SD Memory card and one can perform various analyses like load vs. efficiency, torque vs. speed characteristics, etc. With the help of the developed instrument, one can operate the motor around its Best Operating Point (BOP). Continuous monitoring of the motor efficiency could lead to Life Cycle Assessment (LCA) of motors. LCA helps in taking decisions on motor replacement or retaining or refurbishment.Keywords: energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis
Procedia PDF Downloads 379
421 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems
Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer
Abstract:
This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should fast reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To authors best knowledge, such a problem has not been studied in multi-objective settings so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm), and the dominancy concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade control, but rather a set of optimal cascade controllers (called Pareto set) which represent the optimal trade-offs among the selected design criteria. The function evaluation of the Pareto set is called the Pareto front. The solution set is introduced to the decision-maker who can choose any point to implement. The simulation results in terms of Pareto front and time responses to external signals show the competing nature among the design objectives. The presented study may become the basis for multi-objective optimal design of multi-loop control systems.Keywords: cascade control, multi-Loop control systems, multiobjective optimization, optimal control
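The dominance concept mentioned above can be illustrated with a minimal sketch. The following Python snippet, an assumption-laden illustration rather than the authors' NSGA-II implementation, extracts the non-dominated (Pareto) points from a small set of hypothetical objective vectors, with all objectives to be minimized.

```python
import numpy as np

def dominates(a, b):
    """True if solution a dominates b (all objectives to be minimized)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.all(a <= b) and np.any(a < b)

def pareto_set(objectives):
    """Return indices of the non-dominated points of an (n_solutions x n_objectives) array."""
    objectives = np.asarray(objectives)
    keep = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
            keep.append(i)
    return keep

# Hypothetical objective values for candidate cascade-controller gains:
# [inner-loop settling time, disturbance-rejection index, noise sensitivity, control energy]
F = [[0.8, 0.30, 0.12, 5.1],
     [0.6, 0.35, 0.20, 4.8],
     [0.9, 0.28, 0.10, 6.0],
     [1.1, 0.40, 0.25, 6.5]]   # the last point is dominated by the first
print(pareto_set(F))           # -> [0, 1, 2]
```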
Procedia PDF Downloads 152
420 Modeling and Temperature Control of Water-cooled PEMFC System Using Intelligent Algorithm
Authors: Chen Jun-Hong, He Pu, Tao Wen-Quan
Abstract:
Proton exchange membrane fuel cell (PEMFC) is the most promising future energy source owing to its low operating temperature, high energy efficiency, high power density, and environmental friendliness. In this paper, a comprehensive control-oriented model of a PEMFC system is developed in the Matlab/Simulink environment, which includes the hydrogen supply subsystem, air supply subsystem, and thermal management subsystem. In addition, an Improved Artificial Bee Colony (IABC) algorithm is used for the parameter identification of the PEMFC semi-empirical equations, making the maximum relative error between simulation data and experimental data less than 0.4%. Operating temperature is essential for a PEMFC; both excessively high and excessively low temperatures are disadvantageous. In the thermal management subsystem, the water pump and fan are both controlled with PID controllers to maintain the appropriate operating temperature of the PEMFC, as required for safe and efficient operation. To further improve the control performance, fuzzy control is introduced to optimize the PID controller of the pump, and a Radial Basis Function (RBF) neural network is introduced to optimize the PID controller of the fan. The results demonstrate that Fuzzy-PID and RBF-PID achieve a better control effect, with a 22.66% decrease in the Integral Absolute Error criterion (IAE) of T_st (temperature of the PEMFC) and a 77.56% decrease in the IAE of T_in (temperature of the inlet cooling water) compared with traditional PID. Finally, a novel thermal management structure is proposed, which uses the cooling air passing through the main radiator to continue cooling the secondary radiator. In this thermal management structure, the parasitic power dissipation can be reduced by 69.94%, and the control effect can be improved with a 52.88% decrease in the IAE of T_in under the same controller. Keywords: PEMFC system, parameter identification, temperature control, Fuzzy-PID, RBF-PID, parasitic power
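A minimal sketch of the Integral Absolute Error criterion used above is given below; the temperature traces are synthetic placeholders, not the study's simulation results, and the printed percentage is purely illustrative.

```python
import numpy as np

def iae(setpoint, measured, dt):
    """Integral Absolute Error: integral of |setpoint - measured| over the run."""
    return np.sum(np.abs(np.asarray(setpoint) - np.asarray(measured))) * dt

# Hypothetical stack-temperature traces (degrees C) from two controllers, sampled every 0.1 s
dt = 0.1
t = np.arange(0, 100, dt)
ref = np.full_like(t, 70.0)                                    # desired T_st
pid_trace = 70.0 + 3.0 * np.exp(-t / 20.0) * np.cos(0.3 * t)   # slower settling
fuzzy_pid_trace = 70.0 + 2.0 * np.exp(-t / 10.0) * np.cos(0.3 * t)

iae_pid = iae(ref, pid_trace, dt)
iae_fuzzy = iae(ref, fuzzy_pid_trace, dt)
print(f"IAE reduction: {100 * (iae_pid - iae_fuzzy) / iae_pid:.1f}%")
```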
Procedia PDF Downloads 83
419 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially the data were coded in thermometer code (numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique, in terms of false positives, was surpassed by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is shown in the conclusion of this study.Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine Learning, support vector machines
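As a hedged sketch of the one-against-all comparison described above, the following Python/scikit-learn snippet trains MLP, logistic regression and SVM classifiers and reports accuracy plus the confusion matrix from which false/true positives and negatives can be read (scikit-learn has no direct ANN-RBF estimator, so that model is omitted). The data set here is random stand-in data, not the bank's records.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical stand-in for the bank data set: 15 coded attributes, 3 classes
# (0 = non-defaulter, 1 = defaulter, 2 = temporarily defaulter)
rng = np.random.default_rng(0)
X = rng.normal(size=(5432, 15))
y = rng.integers(0, 3, size=5432)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=500),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    clf = OneVsRestClassifier(model).fit(X_train, y_train)   # one-against-all scheme
    y_pred = clf.predict(X_test)
    print(name, accuracy_score(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))                  # rows: true class, cols: predicted
```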
Procedia PDF Downloads 103
418 Rating Agreement: Machine Learning for Environmental, Social, and Governance Disclosure
Authors: Nico Rosamilia
Abstract:
The study evaluates the importance of non-financial disclosure practices for regulators, investors, businesses, and markets. It aims to create a sector-specific set of indicators for environmental, social, and governance (ESG) performances alternative to the ratings of the agencies. The existing literature extensively studies the implementation of ESG rating systems. Conversely, this study has a twofold outcome. Firstly, it should generalize incentive systems and governance policies for ESG and sustainable principles. Therefore, it should contribute to the EU Sustainable Finance Disclosure Regulation. Secondly, it concerns the market and the investors by highlighting successful sustainable investing. Indeed, the study contemplates the effect of ESG adoption practices on corporate value. The research explores the asset pricing angle in order to shed light on the fragmented argument on the finance of ESG. Investors may be misguided about the positive or negative effects of ESG on performances. The paper proposes a different method to evaluate ESG performances. By comparing the results of a traditional econometric approach (Lasso) with a machine learning algorithm (Random Forest), the study establishes a set of indicators for ESG performance. Therefore, the research also empirically contributes to the theoretical strands of literature regarding model selection and variable importance in a finance framework. The algorithms will spit out sector-specific indicators. This set of indicators defines an alternative to the compounded scores of ESG rating agencies and avoids the possible offsetting effect of scores. With this approach, the paper defines a sector-specific set of indicators to standardize ESG disclosure. Additionally, it tries to shed light on the absence of a clear understanding of the direction of the ESG effect on corporate value (the problem of endogeneity).Keywords: ESG ratings, non-financial information, value of firms, sustainable finance
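A minimal sketch of the model-comparison idea, under the assumption of a generic tabular set-up: Lasso selects indicators by shrinking coefficients to zero, while a Random Forest ranks them by impurity-based importance. The firm data, indicator count and target variable below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

# Hypothetical panel: rows are firms, columns are candidate ESG indicators,
# target is a firm-value proxy. All names and values are placeholders.
rng = np.random.default_rng(42)
n_firms, n_indicators = 500, 30
X = rng.normal(size=(n_firms, n_indicators))
y = X[:, 0] * 0.8 - X[:, 3] * 0.5 + rng.normal(scale=0.5, size=n_firms)

lasso = LassoCV(cv=5).fit(X, y)
selected_by_lasso = np.flatnonzero(lasso.coef_ != 0)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
top_by_forest = np.argsort(forest.feature_importances_)[::-1][:5]

print("Lasso keeps indicators:", selected_by_lasso)
print("Random Forest top-5 indicators:", top_by_forest)
```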
Procedia PDF Downloads 82
417 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa
Authors: Adesuyi Ayodeji Steve, Zahn Munch
Abstract:
This study investigates the use of MODIS NDVI to identify agricultural land cover change areas on an annual time step (2007 - 2012) and characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class producing 3 class groups namely: agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards with the aid of ancillary, pictometry and field sample data. The NDVI signature curve and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples two classification models were built in WEKA using decision tree classifier (J48) algorithm; Model 1 included ISODATA classification and Model 2 without, both having accuracies of 90.7% and 88.3% respectively. The two models were used to classify the whole study area, thus producing two land cover maps with Model 1 and 2 having classification accuracies of 77% and 80% respectively. Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged) were observed in the agricultural classes and crop practices over the years as predicted by the land cover classification. 41% of the catchment comprises of cereals with 35% possibly following a crop rotation system. Vineyard largely remained constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be as a result of misclassification and crop rotation system.Keywords: change detection, land cover, modis, NDVI
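The decision-tree step can be sketched as follows; scikit-learn's DecisionTreeClassifier with entropy splitting is used here as a stand-in for WEKA's J48 (C4.5), and the NDVI series and class labels are randomly generated placeholders rather than the study's training samples.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training set: each sample is an annual MODIS NDVI signature
# (e.g., 23 sixteen-day composites) labelled from field and pictometry data.
# Class codes are placeholders: 0 = cereals, 1 = vineyard, 2 = semi-natural.
rng = np.random.default_rng(1)
ndvi_series = rng.uniform(0.1, 0.9, size=(600, 23))
labels = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(
    ndvi_series, labels, test_size=0.3, random_state=1)
tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5)  # entropy split, as in C4.5/J48
tree.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```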
Procedia PDF Downloads 400
416 Expert System: Debugging Using MD5 Process Firewall
Authors: C. U. Om Kumar, S. Kishore, A. Geetha
Abstract:
An Operating system (OS) is software that manages computer hardware and software resources by providing services to computer programs. One of the important user expectations of the operating system is to provide the practice of defending information from unauthorized access, disclosure, modification, inspection, recording or destruction. Operating system is always vulnerable to the attacks of malwares such as computer virus, worm, Trojan horse, backdoors, ransomware, spyware, adware, scareware and more. And so the anti-virus software were created for ensuring security against the prominent computer viruses by applying a dictionary based approach. The anti-virus programs are not always guaranteed to provide security against the new viruses proliferating every day. To clarify this issue and to secure the computer system, our proposed expert system concentrates on authorizing the processes as wanted and unwanted by the administrator for execution. The Expert system maintains a database which consists of hash code of the processes which are to be allowed. These hash codes are generated using MD5 message-digest algorithm which is a widely used cryptographic hash function. The administrator approves the wanted processes that are to be executed in the client in a Local Area Network by implementing Client-Server architecture and only the processes that match with the processes in the database table will be executed by which many malicious processes are restricted from infecting the operating system. The add-on advantage of this proposed Expert system is that it limits CPU usage and minimizes resource utilization. Thus data and information security is ensured by our system along with increased performance of the operating system.Keywords: virus, worm, Trojan horse, back doors, Ransomware, Spyware, Adware, Scareware, sticky software, process table, MD5, CPU usage and resource utilization
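A minimal sketch of the hash-whitelist idea described above: compute the MD5 digest of a candidate executable and allow it to run only if the digest appears in the administrator-approved table. The paths and hash values below are hypothetical placeholders.

```python
import hashlib

def md5_of_file(path, chunk_size=8192):
    """Return the MD5 digest of an executable, read in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical whitelist approved by the administrator (hashes are placeholders)
approved_hashes = {
    "3b4e7c9f0a1d2e5b6c7d8e9f0a1b2c3d",   # e.g., an authorised editor
    "9f8e7d6c5b4a39281706f5e4d3c2b1a0",   # e.g., an authorised compiler
}

def is_allowed(executable_path):
    """Only processes whose digest matches an approved entry may execute."""
    return md5_of_file(executable_path) in approved_hashes

# Example (illustrative): launch the process only if its hash matches the table
# if is_allowed("/usr/bin/some_tool"): subprocess.run(["/usr/bin/some_tool"])
```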
Procedia PDF Downloads 427
415 Triangular Hesitant Fuzzy TOPSIS Approach in Investment Projects Management
Authors: Irina Khutsishvili
Abstract:
The presented study develops a decision support methodology for multi-criteria group decision-making problem. The proposed methodology is based on the TOPSIS (Technique for Order Performance by Similarity to Ideal Solution) approach in the hesitant fuzzy environment. The main idea of decision-making problem is a selection of one best alternative or several ranking alternatives among a set of feasible alternatives. Typically, the process of decision-making is based on an evaluation of certain criteria. In many MCDM problems (such as medical diagnosis, project management, business and financial management, etc.), the process of decision-making involves experts' assessments. These assessments frequently are expressed in fuzzy numbers, confidence intervals, intuitionistic fuzzy values, hesitant fuzzy elements and so on. However, a more realistic approach is using linguistic expert assessments (linguistic variables). In the proposed methodology both the values and weights of the criteria take the form of linguistic variables, given by all decision makers. Then, these assessments are expressed in triangular fuzzy numbers. Consequently, proposed approach is based on triangular hesitant fuzzy TOPSIS decision-making model. Following the TOPSIS algorithm, first, the fuzzy positive ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS) are defined. Then the ranking of alternatives is performed in accordance with the proximity of their distances to the both FPIS and FNIS. Based on proposed approach the software package has been developed, which was used to rank investment projects in the real investment decision-making problem. The application and testing of the software were carried out based on the data provided by the ‘Bank of Georgia’.Keywords: fuzzy TOPSIS approach, investment project, linguistic variable, multi-criteria decision making, triangular hesitant fuzzy set
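The TOPSIS ranking steps can be sketched in a few lines; the snippet below uses crisp scores for brevity, whereas the paper's method replaces them with triangular hesitant fuzzy assessments, and the project scores and weights are invented placeholders.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Crisp TOPSIS ranking; the paper's variant replaces crisp scores with
    triangular hesitant fuzzy numbers but follows the same overall steps."""
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1. Vector normalisation and weighting
    V = w * X / np.linalg.norm(X, axis=0)
    # 2. Ideal (FPIS-like) and anti-ideal (FNIS-like) solutions per criterion
    pis = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    nis = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    # 3. Distances to the ideal points and relative closeness
    d_plus = np.linalg.norm(V - pis, axis=1)
    d_minus = np.linalg.norm(V - nis, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical scores of 4 investment projects on 3 criteria (the last is a cost criterion)
scores = [[7, 9, 4], [8, 7, 6], [9, 6, 5], [6, 8, 3]]
closeness = topsis(scores, weights=[0.5, 0.3, 0.2], benefit_mask=[True, True, False])
print(np.argsort(closeness)[::-1])   # project indices, best first
```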
Procedia PDF Downloads 427
414 Electrical Machine Winding Temperature Estimation Using Stateful Long Short-Term Memory Networks (LSTM) and Truncated Backpropagation Through Time (TBPTT)
Authors: Yujiang Wu
Abstract:
As electrical machine (e-machine) power density requirements become more stringent in vehicle electrification, mounting a temperature sensor for e-machine stator windings becomes increasingly difficult. This can lead to higher manufacturing costs, complicated harnesses, and reduced reliability. In this paper, we propose a deep-learning method for predicting electric machine winding temperature, which can either replace the sensor entirely or serve as a backup to the existing sensor. We compare the performance of our method, stateful long short-term memory networks (LSTM) with truncated backpropagation through time (TBPTT), with that of linear regression, as well as stateless LSTM with and without residual connections. Our results demonstrate the strength of combining stateful LSTM and TBPTT in tackling nonlinear time series prediction problems with long sequence lengths. Additionally, in industrial applications, prediction accuracy in the high-temperature region is more important, because winding temperature sensing is typically used for derating machine power when the temperature is high. To evaluate the performance of our algorithm, we developed a temperature-stratified MSE. We also propose a simple but effective data preprocessing trick to improve the high-temperature region prediction accuracy. Our experimental results demonstrate the effectiveness of the proposed method in accurately predicting winding temperature, particularly in high-temperature regions, while also reducing manufacturing costs and improving reliability. Keywords: deep learning, electrical machine, functional safety, long short-term memory networks (LSTM), thermal management, time series prediction
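A hedged PyTorch sketch of the stateful-LSTM-with-TBPTT idea is shown below: the hidden state is carried across successive chunks of a long drive cycle (the "stateful" part), but the gradient graph is cut at each chunk boundary (the truncation). The network size, truncation window and random data are assumptions for illustration only, not the paper's configuration.

```python
import torch
import torch.nn as nn

class WindingTempLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)
        return self.head(out), state

# Hypothetical drive-cycle data: inputs (currents, speed, coolant temperature, ...) and winding temperature
n_features, seq_len, k = 6, 10_000, 200          # k = truncation window for TBPTT
inputs = torch.randn(1, seq_len, n_features)
target = torch.randn(1, seq_len, 1)

model = WindingTempLSTM(n_features)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

state = None
for start in range(0, seq_len, k):
    chunk_x = inputs[:, start:start + k]
    chunk_y = target[:, start:start + k]
    pred, state = model(chunk_x, state)
    loss = loss_fn(pred, chunk_y)
    optim.zero_grad()
    loss.backward()
    optim.step()
    # Keep the hidden state across chunks ("stateful"), but cut the gradient graph (TBPTT)
    state = tuple(s.detach() for s in state)
```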
Procedia PDF Downloads 98
413 The Inherent Flaw in the NBA Playoff Structure
Authors: Larry Turkish
Abstract:
Introduction: The NBA is an example of mediocrity and this will be evident in the following paper. The study examines and evaluates the characteristics of the NBA champions. As divisions and playoff teams increase, there is an increase in the probability that the champion originates from the mediocre category. Since it’s inception in 1947, the league has been mediocre and continues to this day. Why does a professional league allow any team with a less than 50% winning percentage into the playoffs? As long as the finances flow into the league, owners will not change the current algorithm. The objective of this paper is to determine if the regular season has meaning in finding an NBA champion. Statistical Analysis: The data originates from the NBA website. The following variables are part of the statistical analysis: Rank, the rank of a team relative to other teams in the league based on the regular season win-loss record; Winning Percentage of a team based on the regular season; Divisions, the number of divisions within the league and Playoff Teams, the number of playoff teams relative to a particular season. The following statistical applications are applied to the data: Pearson Product-Moment Correlation, Analysis of Variance, Factor and Regression analysis. Conclusion: The results indicate that the divisional structure and number of playoff teams results in a negative effect on the winning percentage of playoff teams. It also prevents teams with higher winning percentages from accessing the playoffs. Recommendations: 1. Teams that have a winning percentage greater than 1 standard deviation from the mean from the regular season will have access to playoffs. (Eliminates mediocre teams.) 2. Eliminate Divisions (Eliminates weaker teams from access to playoffs.) 3. Eliminate Conferences (Eliminates weaker teams from access to the playoffs.) 4. Have a balanced regular season schedule, (Reduces the number of regular season games, creates equilibrium, reduces bias) that will reduce the need for load management.Keywords: alignment, mediocrity, regression, z-score
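Recommendation 1 can be expressed as a one-standard-deviation cut-off on regular-season winning percentage; a minimal Python sketch with a hypothetical ten-team league follows.

```python
import numpy as np

def playoff_teams(win_pct, threshold_sd=1.0):
    """Teams whose regular-season winning percentage is more than
    `threshold_sd` standard deviations above the league mean qualify."""
    win_pct = np.asarray(win_pct, dtype=float)
    z = (win_pct - win_pct.mean()) / win_pct.std()
    return np.flatnonzero(z > threshold_sd)

# Hypothetical 10-team league
pct = [0.72, 0.66, 0.60, 0.55, 0.52, 0.49, 0.45, 0.41, 0.35, 0.30]
print(playoff_teams(pct))   # only teams well above the mean reach the playoffs
```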
Procedia PDF Downloads 129
412 Quality Analysis of Vegetables Through Image Processing
Authors: Abdul Khalique Baloch, Ali Okatan
Abstract:
The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify gaps in it, and propose an improved approach: we design the algorithm and develop software to measure quality from images, where the image-based accuracy shows better results, and we compare the results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. We focus on sorting food and vegetables from images; the application sorts and grades the produce after processing the images, which can produce fewer errors than manual, human-based sorting. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a central role in day-to-day life, their quality is essential when evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This work addresses quality detection of fruits and vegetables using images. Many customers suffer because suppliers deliver unhealthy produce, and no proper quality measurement procedure is followed by hotel managements. We therefore developed software that measures the quality of fruits and vegetables from images and indicates whether the produce is fresh or rotten. The algorithms reviewed in this work include digital image processing, ResNet, VGG16, CNN and transfer-learning-based grading and feature extraction. The application uses an open-source image dataset, is written in Python, and provides a framework for the system. Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria
Procedia PDF Downloads 68
411 Development of a Regression Based Model to Predict Subjective Perception of Squeak and Rattle Noise
Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke
Abstract:
Advancements in electric vehicles have significantly reduced the powertrain noise and moving components of vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices are followed to identify the severity of noises based on subjective ratings, but it can be a tedious process to identify the severity of each development sample and make changes to reduce it. Additionally, the severity rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was identified to evaluate squeak and rattle noise issue. Physical tests were carried out for random and sine excitation profiles. Aim was to subjectively assess the noise using jury rating method and objectively evaluate the same by measuring the noise. Suitable jury evaluation method was selected for the said activity, and recorded sounds were replayed for jury rating. Objective data sound quality metrics viz., loudness, sharpness, roughness, fluctuation strength and overall Sound Pressure Level (SPL) were measured. Based on this, correlation co-efficients was established to identify the most relevant sound quality metrics that are contributing to particular identified noise issue. Regression analysis was then performed to establish the correlation between subjective and objective data. Mathematical model was prepared using artificial intelligence and machine learning algorithm. The developed model was able to predict the subjective rating with good accuracy.Keywords: BSR, noise, correlation, regression
Procedia PDF Downloads 78
410 Direct Approach in Modeling Particle Breakage Using Discrete Element Method
Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang
Abstract:
Current study is aimed to develop an available in-house discrete element method (DEM) code and link it with direct breakage event. So, it became possible to determine the particle breakage and then its fragments size distribution, simultaneous with DEM simulation. It directly applies the particle breakage inside the DEM computation algorithm and if any breakage happens the original particle is replaced with daughters. In this way, the calculation will be followed based on a new updated particles list which is very similar to the real grinding environment. To validate developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)–powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations. Fine particles in a particle bed increase mechanical energy loss, and reduce and distribute interparticle forces thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. Such phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.Keywords: particle bed, breakage models, breakage kinetic, discrete element method
Procedia PDF Downloads 197
409 Land Use Dynamics of Ikere Forest Reserve, Nigeria Using Geographic Information System
Authors: Akintunde Alo
Abstract:
The incessant encroachments into the forest ecosystem by the farmers and local contractors constitute a major threat to the conservation of genetic resources and biodiversity in Nigeria. To propose a viable monitoring system, this study employed Geographic Information System (GIS) technology to assess the changes that occurred for a period of five years (between 2011 and 2016) in Ikere forest reserve. Landsat imagery of the forest reserve was obtained. For the purpose of geo-referencing the acquired satellite imagery, ground-truth coordinates of some benchmark places within the forest reserve was relied on. Supervised classification algorithm, image processing, vectorization and map production were realized using ArcGIS. Various land use systems within the forest ecosystem were digitized into polygons of different types and colours for 2011 and 2016, roads were represented with lines of different thickness and colours. Of the six land-use delineated, the grassland increased from 26.50 % in 2011 to 45.53% in 2016 of the total land area with a percentage change of 71.81 %. Plantations of Gmelina arborea and Tectona grandis on the other hand reduced from 62.16 % in 2011 to 27.41% in 2016. The farmland and degraded land recorded percentage change of about 176.80 % and 8.70 % respectively from 2011 to 2016. Overall, the rate of deforestation in the study area is on the increase and becoming severe. About 72.59% of the total land area has been converted to non-forestry uses while the remnant 27.41% is occupied by plantations of Gmelina arborea and Tectona grandis. Interestingly, over 55 % of the plantation area in 2011 has changed to grassland, or converted to farmland and degraded land in 2016. The rate of change over time was about 9.79 % annually. Based on the results, rapid actions to prevail on the encroachers to stop deforestation and encouraged re-afforestation in the study area are recommended.Keywords: land use change, forest reserve, satellite imagery, geographical information system
Procedia PDF Downloads 355
408 Comparative Study of Pixel and Object-Based Image Classification Techniques for Extraction of Land Use/Land Cover Information
Authors: Mahesh Kumar Jat, Manisha Choudhary
Abstract:
Rapid population and economic growth resulted in changes in large-scale land use land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and its impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships or modelling earth surface - atmospheric interaction through modelling techniques like surface energy balance (SEB) have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model i.e., Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from India Meteorological Department (IMD) have been correlated with changes in surface heat fluxes to understand the relative contributions of LULC change in changing these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, which are aligned with respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year. The maximum and minimum temperature decreases and increases at 0.007 ℃ /year and 0.02 ℃ /year, respectively. The average temperature increases at 0.009 ℃ /year. Changes in latent heat flux and sensible heat flux positively correlate with precipitation and temperature, respectively. Variation in surface heat fluxes influences the climate parameters and is an adequate reason for climate change. So, SEB modelling is helpful to understand the LULC change and its impact on climate.Keywords: remote sensing, GIS, object based, classification
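The surface-energy-balance closure that SEBAL relies on can be written as Rn = G + H + LE, so the latent heat flux is recovered per pixel as a residual. The sketch below applies this to small hypothetical flux arrays; the numbers are illustrative, not values for the Mahi Bajaj Sagar catchment.

```python
import numpy as np

# SEBAL closes the surface energy balance per pixel: Rn = G + H + LE,
# so the latent heat flux is obtained as the residual LE = Rn - G - H.
# The arrays below are hypothetical per-pixel fluxes in W/m^2.
net_radiation = np.array([[620.0, 580.0], [640.0, 600.0]])    # Rn
soil_heat_flux = np.array([[ 90.0,  85.0], [ 95.0,  88.0]])   # G
sensible_heat = np.array([[210.0, 260.0], [180.0, 300.0]])    # H

latent_heat = net_radiation - soil_heat_flux - sensible_heat  # LE
evaporative_fraction = latent_heat / (net_radiation - soil_heat_flux)
print(latent_heat)
print(evaporative_fraction)   # higher values indicate wetter, cooler land cover
```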
Procedia PDF Downloads 128
407 Day Ahead and Intraday Electricity Demand Forecasting in Himachal Region using Machine Learning
Authors: Milan Joshi, Harsh Agrawal, Pallaw Mishra, Sanand Sule
Abstract:
Predicting electricity usage is a crucial aspect of organizing and controlling sustainable energy systems. The task of forecasting electricity load is intricate and requires a lot of effort due to the combined impact of social, economic, technical, environmental, and cultural factors on power consumption in communities. As a result, it is important to create strong models that can handle the significantly non-linear and complex nature of the task. The objective of this study is to create and compare three machine learning techniques for predicting electricity load, both day ahead and intraday, taking into account various factors such as meteorological data and social events, including holidays and festivals. The proposed methods include LightGBM, FBProphet, and a combination of FBProphet and LightGBM for day-ahead forecasting, and a motif-based approach (Stumpy), built on Mueen's algorithm for similarity search, for intraday forecasting. We utilize these techniques to predict electricity usage during normal days and social events in the Himachal Region. We then assess their performance by measuring the MSE, RMSE, and MAPE values. The outcomes demonstrate that the combination of FBProphet and LightGBM is the most accurate method for day-ahead forecasting and the motif-based method the most accurate for intraday forecasting of electricity usage, surpassing other models in terms of MAPE, RMSE, and MSE. Moreover, the FBProphet-LightGBM approach proves to be highly effective in forecasting electricity load during social events, exhibiting precise day-ahead predictions. In summary, our proposed electricity forecasting techniques display excellent performance in predicting electricity usage during normal days and special events in the Himachal Region. Keywords: feature engineering, FBProphet, LightGBM, MASS, Motifs, MAPE
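One plausible way to combine FBProphet and LightGBM, sketched below under stated assumptions, is to let Prophet capture trend and seasonality and LightGBM model the residual from exogenous features, then evaluate with MAPE. The package is imported as prophet (published earlier as fbprophet); the hourly load, temperature and holiday columns are synthetic placeholders, and this is not necessarily the exact hybrid used in the study.

```python
import numpy as np
import pandas as pd
from prophet import Prophet
import lightgbm as lgb

# Hypothetical hourly load history with a temperature regressor and a holiday flag.
# Column names are placeholders; the real feature set would include festival calendars etc.
n = 24 * 90
history = pd.DataFrame({
    "ds": pd.date_range("2022-01-01", periods=n, freq="H"),
    "y": np.random.default_rng(0).normal(300, 40, n),
    "temp": np.random.default_rng(1).normal(15, 8, n),
    "is_holiday": 0,
})

# Step 1: Prophet captures trend and seasonalities.
m = Prophet(daily_seasonality=True, weekly_seasonality=True)
m.fit(history[["ds", "y"]])
base = m.predict(history[["ds"]])["yhat"].to_numpy()

# Step 2: LightGBM models the residual using exogenous features.
residual = history["y"].to_numpy() - base
features = history[["temp", "is_holiday"]].assign(hour=history["ds"].dt.hour)
booster = lgb.LGBMRegressor(n_estimators=200).fit(features, residual)

# Combined forecast (shown on the training range for illustration only)
combined = base + booster.predict(features)

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

print("MAPE:", mape(history["y"].to_numpy(), combined))
```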
Procedia PDF Downloads 70
406 Monitoring the Change of Padma River Bank at Faridpur, Bangladesh Using Remote Sensing Approach
Authors: Ilme Faridatul, Bo Wu
Abstract:
Bangladesh is often called the motherland of rivers. It contains about 700 rivers, and among these the Padma River is one of the largest rivers of Bangladesh. The change of river banks and erosion have become a common environmental natural hazard in Bangladesh. The river banks are under intense pressure from natural processes such as erosion and accretion as well as anthropogenic processes such as urban growth and pollution. The Padma River flows along ten districts of Bangladesh, and among these the Faridpur district is the most vulnerable to river bank erosion. The severity of the river erosion is so high that each year thousands of people become homeless and lose their agricultural lands. Though the Faridpur district is the most vulnerable to river bank erosion, no specific research has been conducted to identify the changing pattern of the river bank along this district. The outcome of the research may serve as guidance for preparing a river bank monitoring and management program. This research has utilized integrated techniques of remote sensing and geographic information systems to monitor the changes from 1995 to 2015 in the Faridpur district. To discriminate the land-water interface, the Modified Normalized Difference Water Index (MNDWI) algorithm is applied, and an on-screen digitization approach is used over the MNDWI images of 1995, 2002 and 2015 for river bank line extraction. The extent of changes in the river bank along the Faridpur district is estimated by overlaying the digitized maps of all three years. The river bank lines are highlighted to infer the erosion and accretion, and the changes are calculated. The result shows that the middle of the river is gaining land through sedimentation and that both river banks are shifting, causing severe erosion that consequently results in the loss of farmland and homesteads. Over the study period from 1995 to 2015, the river witnessed huge erosion and accretion that played an active role in the changes of the river bank. Keywords: river bank, erosion and accretion, change monitoring, remote sensing
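The MNDWI used above for land-water discrimination is simply (Green − SWIR)/(Green + SWIR); the sketch below applies it to a pair of tiny hypothetical reflectance tiles and thresholds at zero to obtain a water mask.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: (Green - SWIR) / (Green + SWIR)."""
    green = green.astype(float)
    swir = swir.astype(float)
    return np.divide(green - swir, green + swir,
                     out=np.zeros_like(green), where=(green + swir) != 0)

# Hypothetical reflectance tiles for the green and shortwave-infrared bands
green_band = np.array([[0.10, 0.08], [0.21, 0.19]])
swir_band = np.array([[0.18, 0.16], [0.05, 0.06]])

index = mndwi(green_band, swir_band)
water_mask = index > 0        # positive MNDWI values generally indicate open water
print(index)
print(water_mask)
```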
Procedia PDF Downloads 323
405 Implementation of Research Papers and Industry Related Experiments by Undergraduate Students in the Field of Automation
Authors: Veena N. Hegde, S. R. Desai
Abstract:
Motivating a heterogeneous group of students towards engagement in research related activities is a challenging task in engineering education. An effort is being made at the Department of Electronics and Instrumentation Engineering, where two courses are taken up on a pilot basis to kindle research interests in students at the undergraduate level. The courses, namely algorithm and system design (ASD) and automation in process control (APC), are selected for experimentation purposes. The task is being accomplished by providing scope for implementation of research papers and proposing solutions for current industrial problems by the student teams. The course instructors have proposed an alternative assessment tool, involving activities beyond the curriculum, to evaluate the undergraduate students. The method was tested for the aforementioned two courses in a particular academic year, and as per the observations, there is a considerable improvement in student engagement in research in the subsequent years of their undergraduate course. The student groups from third-year engineering were made to read and implement research papers, and they were also instructed to develop simulation modules for certain processes aimed at automation. The target audience, being students, was common for both courses, and the student strength was 30. Around 50% of the successful students were given continued tasks in the subsequent two semesters, and the 15 students who continued from the sixth semester were able to follow the research methodology well in the seventh and eighth semesters. Further, around 30% of these 15 students ended up carrying out project work with a research component involved and were successful in producing four conference papers. The methodology adopted is justified using a sample data set, and the outcomes are highlighted. The quantitative and qualitative results obtained through this study prove that such practices will enhance learning experiences substantially at the undergraduate level. Keywords: industrial problems, learning experiences, research related activities, student engagement
Procedia PDF Downloads 163
404 Pneumoperitoneum Creation Assisted with Optical Coherence Tomography and Automatic Identification
Authors: Eric Yi-Hsiu Huang, Meng-Chun Kao, Wen-Chuan Kuo
Abstract:
For every laparoscopic surgery, safe pneumoperitoneum creation (gaining access to the peritoneal cavity) is the first and essential step. However, closed pneumoperitoneum is usually obtained by blind insertion of a Veress needle into the peritoneal cavity, which may carry potential risks such as bowel and vascular injury. Until now, there remains no definite measure to visually confirm the position of the needle tip inside the peritoneal cavity. Therefore, this study established an image-guided Veress needle method by combining a fiber probe with optical coherence tomography (OCT). An algorithm was also proposed for determining the exact location of the needle tip through the acquisition of OCT images. Our method not only generates a series of “live” two-dimensional (2D) images during the needle puncture toward the peritoneal cavity but also eliminates operator variation in image judgment, thus improving peritoneal access safety. This study was approved by the Ethics Committee of Taipei Veterans General Hospital (Taipei VGH IACUC 2020-144). A total of 2400 in vivo OCT images, independent of each other, were acquired from experiments of forty peritoneal punctures on two piglets. Characteristic OCT image patterns could be observed during the puncturing process. The ROC curve demonstrates the discrimination capability of the quantitative image features used by the classifier, showing that the accuracy of the classifier for determining the inside vs. outside of the peritoneal cavity was 98% (AUC = 0.98). In summary, the present study demonstrates the ability of the combination of our proposed automatic identification method and OCT imaging to automatically and objectively identify the location of the needle tip. OCT images translate the blind closed technique of peritoneal access into a visualized procedure, thus improving peritoneal access safety. Keywords: pneumoperitoneum, optical coherence tomography, automatic identification, veress needle
Procedia PDF Downloads 133403 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high liver accumulation by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus liver-only image). Time subtraction of the liver was feasible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, myocardial uptake overlapped by high liver accumulation could not be assessed. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
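As a minimal sketch of the time-subtraction arithmetic described above, the following Python snippet treats the per-minute reconstructions as NumPy arrays; the array shapes, variable names, and the clipping of negative voxels are illustrative assumptions, not details from the study.

```python
# Sketch of the described two-step subtraction on reconstructed SPECT volumes.
import numpy as np

def time_subtraction(img_early, img_late):
    """img_early: 0.5-2.5 min reconstruction (liver-dominated),
    img_late: 5-10 min reconstruction (liver + myocardium)."""
    # Approximate liver-only image from the early frames.
    liver_only = np.clip(img_early - img_late, 0.0, None)
    # Remove the liver contribution from the late image.
    corrected = np.clip(img_late - liver_only, 0.0, None)
    return corrected

# Example with synthetic 64x64x64 count volumes.
rng = np.random.default_rng(1)
early = rng.poisson(5.0, size=(64, 64, 64)).astype(float)
late = rng.poisson(4.0, size=(64, 64, 64)).astype(float)
print(time_subtraction(early, late).shape)
```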
Procedia PDF Downloads 334402 Phytochemical Analysis and in vitro Biological Activities of an Ethyl Acetate Extract from the Peel of Punica granatum L. var. Dente di Cavallo
Authors: Silvia Di Giacomo, Marcello Locatelli, Simone Carradori, Francesco Cacciagrano, Chiara Toniolo, Gabriela Mazzanti, Luisa Mannina, Stefania Cesa, Antonella Di Sotto
Abstract:
Hyperglycemia represents the main pathogenic factor in the development of diabetes complications and has been found to be associated with mitochondrial dysfunction and oxidative stress, which in turn increase cell dysfunction. Therefore, counteracting oxidative species appears to be a suitable strategy for preventing hyperglycemia-induced cell damage and supporting the pharmacotherapy of diabetes and metabolic diseases. The antidiabetic potential of many food sources has been linked to the presence of polyphenolic metabolites, particularly flavonoids such as quercetin and its glycosylated form rutin. In line with this evidence, in the present study, we assayed the potential anti-hyperglycemic activity of an ethyl acetate extract from the peel of Punica granatum L. var. Dente di Cavallo (PGE), a fruit well known in traditional medicine for the beneficial properties of its edible juice. The effect of the extract on glucidic metabolism was evaluated by assessing its ability to inhibit α-amylase and α-glucosidase, two digestive enzymes responsible for the hydrolysis of dietary carbohydrates: their inhibition can delay carbohydrate digestion and reduce glucose absorption, thus representing an important strategy for the management of hyperglycemia. The ability of PGE to block the release of advanced glycation end-products (AGEs), whose accumulation is known to be responsible for diabetic vascular complications, was also studied. The iron-reducing and chelating activities, which are the primary mechanisms by which AGE inhibitors stop their metal-catalyzed formation, were evaluated as possible antioxidant mechanisms. Finally, the phenolic content of PGE was characterized by chromatographic and spectrophotometric methods. Our results showed the ability of PGE to inhibit the α-amylase enzyme with a potency similar to that of the positive control: the IC₅₀ values were 52.2 (CL 27.7-101.2) µg/ml and 35.6 (CL 22.8-55.5) µg/ml for acarbose and PGE, respectively. PGE also inhibited the α-glucosidase enzyme with about 25-fold higher potency than the positive controls acarbose and quercetin. Furthermore, the extract exhibited ferrous and ferric ion chelating ability, with maximum effects of 82.1% and 80.6% at a concentration of 250 µg/ml, respectively, and reducing properties, reaching a maximum effect of 80.5% at a concentration of 10 µg/ml. PGE was also found to inhibit AGE production (maximum inhibition of 82.2% at a concentration of 1000 µg/ml), although with lower potency than the positive control rutin. The phytochemical analysis of PGE showed high levels of total polyphenols, tannins, and flavonoids, among which ellagic acid, gallic acid, and catechin were identified. Altogether, these data highlight the ability of PGE to control carbohydrate metabolism at different levels, both by inhibiting the metabolic enzymes and by affecting AGE formation, likely through chelating mechanisms. It is also noteworthy that pomegranate peel, although a waste product of juice production, can be regarded as a nutraceutical source. In conclusion, the present results suggest a possible role for PGE as a remedy for preventing hyperglycemia complications and encourage further in vivo studies.Keywords: anti-hyperglycemic activity, antioxidant properties, nutraceuticals, polyphenols, pomegranate
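IC₅₀ values such as those reported above are commonly estimated by fitting a dose-response model to percent-inhibition data. The sketch below shows one conventional way to do this with a four-parameter logistic (Hill) fit; the concentrations and inhibition values are invented for illustration and are not the study's data.

```python
# Illustrative only: estimate an IC50 from percent-inhibition measurements by
# fitting a four-parameter logistic on log10(concentration).
import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, bottom, top, log_ic50, slope):
    # Four-parameter logistic (Hill) model on log10(concentration).
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ic50 - log_conc) * slope))

conc = np.array([5, 10, 25, 50, 100, 250], dtype=float)        # µg/ml (hypothetical)
inhibition = np.array([8, 18, 38, 55, 72, 85], dtype=float)    # percent (hypothetical)

popt, _ = curve_fit(hill, np.log10(conc), inhibition,
                    p0=[0.0, 100.0, np.log10(50.0), 1.0])
print(f"Estimated IC50: {10 ** popt[2]:.1f} µg/ml")
```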
Procedia PDF Downloads 184401 Data Mining Spatial: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies. This information is often presented through geographic information systems (GIS) and stored in spatial databases (BDS). Classical data mining has shown weaknesses in extracting knowledge from such enormous amounts of data, owing to the particularity of spatial entities, which are characterized by interdependence (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process, monothematic and thematic approaches can be distinguished. Geo-clustering, which belongs to the monothematic methods, is one of the main tasks of spatial data mining. It groups similar geo-spatial entities into the same class and assigns dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing, which applies algorithms designed for the direct treatment of spatial data, and pre-processing, which applies classical clustering algorithms to pre-processed data (into which spatial relationships have been integrated). The pre-processing approach is quite complex in many cases, so the search for approximate solutions involves the use of approximation algorithms; among these, we are interested in dedicated approaches (partitioning and density-based clustering methods) and the bees algorithm (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms for automatically detecting geo-spatial neighborhoods in order to implement geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geo-spatial field.Keywords: mining, GIS, geo-clustering, neighborhood
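As an illustration of the density-based branch mentioned above (the bees algorithm itself is not shown), the following sketch applies scikit-learn's DBSCAN with a haversine metric to geographic point coordinates; the coordinates, eps, and min_samples values are illustrative assumptions.

```python
# Minimal sketch of density-based geo-clustering on (latitude, longitude) points.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical (latitude, longitude) pairs in degrees.
points_deg = np.array([
    [36.75, 3.06], [36.76, 3.05], [36.74, 3.07],   # one dense group
    [35.69, -0.63], [35.70, -0.64],                # another dense group
    [31.61, -2.22],                                # an isolated point (likely noise)
])

# The haversine metric expects radians; eps is an angular distance on the sphere.
points_rad = np.radians(points_deg)
earth_radius_km = 6371.0
eps_km = 5.0

db = DBSCAN(eps=eps_km / earth_radius_km, min_samples=2,
            metric="haversine", algorithm="ball_tree").fit(points_rad)
print(db.labels_)   # cluster ids; -1 marks points classified as noise
```

The spatial neighborhood is defined here implicitly by the eps radius; a pre-processing variant would instead compute neighborhood relations first and feed them to a classical clustering algorithm.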
Procedia PDF Downloads 374