Search results for: dynamic algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7055

1295 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem

Authors: Bidzina Matsaberidze

Abstract:

It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when experts and their knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model in which expert assessments of humanitarian aid distribution centers (HADC) are represented by q-rung orthopair fuzzy numbers and the data structure is described within the theory of bodies of evidence. Based on the construction of focal probabilities and the experts’ evaluations, an objective function, a ranking index for the selection of distribution centers, is constructed. Our approach to solving the resulting bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs among the service centers; several constraints are also taken into consideration while generating this matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings, i.e., allocations of the HADCs to the centers, that correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
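
The second phase described above reduces to screening candidate allocations for Pareto optimality over two criteria. The snippet below is a minimal Python sketch of that screening step only; the candidate partitionings, their two criterion values and the labels are hypothetical, and the paper's covering-matrix generation and exact ranking index are not reproduced.

```python
def pareto_optimal(candidates):
    """Return the non-dominated candidates for two minimised criteria.

    candidates: list of (label, f1, f2) tuples.
    """
    front = []
    for label, f1, f2 in candidates:
        dominated = any(
            (g1 <= f1 and g2 <= f2) and (g1 < f1 or g2 < f2)
            for _, g1, g2 in candidates
        )
        if not dominated:
            front.append((label, f1, f2))
    return front


# Hypothetical allocations of HADCs to service centers with two criterion values
allocations = [("A", 4.0, 7.0), ("B", 3.0, 9.0), ("C", 5.0, 5.0), ("D", 6.0, 8.0)]
print(pareto_optimal(allocations))   # allocation D is dominated (by A) and dropped
```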

Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions

Procedia PDF Downloads 75
1294 Identification of Groundwater Potential Zones Using Geographic Information System and Multi-Criteria Decision Analysis: A Case Study in Bagmati River Basin

Authors: Hritik Bhattarai, Vivek Dumre, Ananya Neupane, Poonam Koirala, Anjali Singh

Abstract:

The availability of clean and reliable groundwater is essential for sustaining human and environmental health. Groundwater is a crucial resource that contributes significantly to the total annual water supply. However, over-exploitation has depleted groundwater availability considerably and has led to some land subsidence. Determining groundwater potential zones is vital for protecting water quality and managing groundwater systems. Groundwater potential zones are delineated with the assistance of Geographic Information System (GIS) techniques. In this study, a standard methodology was proposed to determine groundwater potential using an integration of GIS and AHP techniques. To delineate prospective groundwater zones, thematic data layers were prepared for parameters such as geology, slope, soil, temperature, rainfall, drainage density, and lineament density. However, identifying and mapping potential groundwater zones remains challenging due to the complex and dynamic nature of aquifer systems. A weighted overlay was then performed in ArcGIS, with appropriate ranks assigned to each parameter class. Through data analysis, MCDA was applied to weight and prioritize the different parameters based on their relative impact on groundwater potential. Three groundwater potential zones were identified: low potential, moderate potential, and high potential. Our analysis showed that the central and lower parts of the Bagmati River Basin have the highest potential, i.e., 7.20% of the total area, whereas the northern and eastern parts have lower potential. The identified potential zones can be used to guide future groundwater exploration and management strategies in the region.
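
As a rough illustration of the GIS-AHP workflow summarized above, the sketch below derives criterion weights from a pairwise comparison matrix and applies them in a weighted overlay; the comparison values, the three example layers and the zone thresholds are assumptions, not the study's data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise comparison matrix (principal eigenvector)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Hypothetical 3-criterion comparison (e.g. geology vs. slope vs. drainage density)
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1 / 3, 1.0, 2.0],
                     [1 / 5, 1 / 2, 1.0]])
w = ahp_weights(pairwise)

# Hypothetical ranked rasters (1 = low suitability ... 5 = high), one per criterion
layers = np.stack([np.random.randint(1, 6, (4, 4)) for _ in range(3)])
potential = np.tensordot(w, layers, axes=1)         # weighted overlay
zones = np.digitize(potential, bins=[2.5, 3.5])     # 0 = low, 1 = moderate, 2 = high
print(w.round(3))
print(zones)
```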

Keywords: groundwater, geographic information system, analytic hierarchy processes, multi-criteria decision analysis, Bagmati

Procedia PDF Downloads 86
1293 Comparison Study of Capital Protection Risk Management Strategies: Constant Proportion Portfolio Insurance versus Volatility Target Based Investment Strategy with a Guarantee

Authors: Olga Biedova, Victoria Steblovskaya, Kai Wallbaum

Abstract:

In the current capital market environment, investors constantly face the challenge of finding a successful and stable investment mechanism. Highly volatile equity markets and extremely low bond returns bring about the demand for sophisticated yet reliable risk management strategies. Investors are looking for risk management solutions to efficiently protect their investments. This study compares a classic Constant Proportion Portfolio Insurance (CPPI) strategy to a Volatility Target Portfolio Insurance (VTPI) strategy. VTPI is an extension of the well-known Option Based Portfolio Insurance (OBPI) to the case where the embedded option is linked not to a pure risky asset, such as the S&P 500, but to a Volatility Target (VolTarget) portfolio. The VolTarget strategy is a recently developed rule-based dynamic asset allocation mechanism in which the portfolio’s volatility is kept under control. As a result, a typical VTPI strategy allows higher participation rates in the market due to reduced embedded option prices. In addition, controlled volatility levels eliminate the volatility spread in option pricing, one of the frequently cited reasons for the OBPI strategy falling behind CPPI. The strategies are compared within the framework of stochastic dominance theory based on numerical simulations, rather than on the restrictive assumption of Black-Scholes type dynamics of the underlying asset. An extended comparative quantitative analysis of the performance of the above investment strategies in various market scenarios and within a range of input parameter values is presented.
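
For readers unfamiliar with the CPPI baseline compared above, the following is a minimal sketch of the classic CPPI allocation rule on a hypothetical risky-asset path; the paper's VTPI construction and the stochastic-dominance comparison are not reproduced.

```python
import numpy as np

def cppi_path(risky_returns, floor_ratio=0.8, multiplier=4.0, rf=0.0):
    """Portfolio value under the CPPI rule for a sequence of risky returns."""
    value, floor = 1.0, floor_ratio
    values = [value]
    for r in risky_returns:
        cushion = max(value - floor, 0.0)
        exposure = min(multiplier * cushion, value)   # risky allocation, no leverage
        value = exposure * (1.0 + r) + (value - exposure) * (1.0 + rf)
        floor *= 1.0 + rf
        values.append(value)
    return np.array(values)

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=250)          # hypothetical daily returns
print(round(cppi_path(returns)[-1], 4))               # terminal value, capital protected at 0.8
```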

Keywords: CPPI, portfolio insurance, stochastic dominance, volatility target

Procedia PDF Downloads 149
1292 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria

Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov

Abstract:

This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.
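
The final ML step can be illustrated with a short sketch: synthetic features stand in for the SCM-validated admission attributes, and KNN and SVM classifiers are trained and scored; the Benpoly data and the reported 92% accuracy are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder features standing in for the SCM-validated admission attributes
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)), ("SVM", SVC(kernel="rbf"))]:
    model.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```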

Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model

Procedia PDF Downloads 36
1289 Barnard Feature Point Detector for Low-Contrast Periapical Radiography Image

Authors: Chih-Yi Ho, Tzu-Fang Chang, Chih-Chia Huang, Chia-Yen Lee

Abstract:

In dental clinics, dentists use periapical radiography images to assess the effectiveness of endodontic treatment of teeth with chronic apical periodontitis. Periapical radiography images are taken at different times to assess alveolar bone variation before and after root canal treatment and, furthermore, to judge whether the treatment was successful. Current clinical assessment of apical tissue recovery relies only on the dentist's personal experience. It is difficult to obtain standardized and objective interpretations because they depend on the dentist's or radiologist's personal background and knowledge. If periapical radiography images taken at different times could be registered well, the endodontic treatment could be evaluated objectively. In image registration, it is necessary to assign representative control points to the transformation model to obtain good registration results. However, detection of representative control points (feature points) on periapical radiography images is generally very difficult. Regardless of which traditional detection methods are applied, sufficient feature points may not be detected due to the low-contrast characteristics of the X-ray image. The Barnard detector is a feature point detection algorithm based on grayscale gradients, which can obtain sufficient feature points even when the gray-scale contrast is not obvious. However, the Barnard detector detects too many feature points, and they tend to be too clustered. This study uses the local extrema of clustered feature points and a suppression radius to overcome this problem and compares different feature point detection methods. In the preliminary results, the feature points detected by the proposed method could serve as representative control points.
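
The thinning of clustered detections mentioned above can be sketched as a simple suppression-radius rule: keep the strongest responses and discard any weaker point within a given radius. The point coordinates, response values and radius below are hypothetical.

```python
import numpy as np

def suppress(points, responses, radius):
    """points: (N, 2) array of (x, y); responses: (N,) detector responses."""
    order = np.argsort(responses)[::-1]          # strongest responses first
    kept = []
    for i in order:
        if all(np.hypot(*(points[i] - points[j])) > radius for j in kept):
            kept.append(i)
    return points[kept]

pts = np.array([[10, 10], [12, 11], [40, 42], [41, 40], [80, 15]], float)
resp = np.array([0.9, 0.7, 0.8, 0.95, 0.6])
print(suppress(pts, resp, radius=5.0))           # clustered neighbours are removed
```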

Keywords: feature detection, Barnard detector, registration, periapical radiography image, endodontic treatment

Procedia PDF Downloads 432
1290 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters

Authors: Iveta Bryjova

Abstract:

Recording of viscoelastic strain-vs-time curves with the aid of the suction method, and a follow-up analysis resulting in the evaluation of standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma) and reconstructive surgery. For elimination of the skin thickness contribution, usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to relative ones (i.e. those expressed as a ratio of two dimensional parameters), like gross elasticity, net elasticity, biological elasticity or Qu’s area parameters, in the literature and practice conventionally referred to as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of parameters R2 and Q1, the remaining ones depend substantially on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental observation that the inflection time comes about 0.1 s after the suction switch-on/off, which depreciates the credibility of the parameters thus obtained. Although Qu’s US 7,556,605 patent suggests a method of improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky–Golay filtering is presented. The method allows computation of the derivatives of the smoothed strain-vs-time curve, more exact location of the inflection point and, consequently, more reliable values of the aforementioned viscoelastic parameters. The improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study under the new method and by comparing its results with those provided by the methods that have been used so far.
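
A minimal sketch of the proposed idea, assuming a synthetic curve with a known inflection: the Savitzky–Golay filter returns smoothed first and second derivatives directly, from which the inflection time is located. Window length and polynomial order are illustrative choices, not the values used in the study.

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 0.01
t = np.arange(0.0, 1.0, dt)
# Synthetic smooth curve with a known inflection at t = 0.5 s, plus mild noise
strain = 1.0 / (1.0 + np.exp(-(t - 0.5) / 0.08))
strain += np.random.default_rng(1).normal(0.0, 0.001, t.size)

# Savitzky-Golay filtering yields the derivatives of the smoothed curve directly
d1 = savgol_filter(strain, window_length=21, polyorder=3, deriv=1, delta=dt)
d2 = savgol_filter(strain, window_length=21, polyorder=3, deriv=2, delta=dt)

# For this curve the second derivative changes sign where the slope peaks
i = np.argmax(d1)
print("inflection time ~", round(t[i], 2), "s, d2 at inflection ~", round(d2[i], 3))
```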

Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity

Procedia PDF Downloads 284
1289 Analysing Trends in Rice Cropping Intensity and Seasonality across the Philippines Using 14 Years of Moderate Resolution Remote Sensing Imagery

Authors: Bhogendra Mishra, Andy Nelson, Mirco Boschetti, Lorenzo Busetto, Alice Laborte

Abstract:

Rice is grown on over 100 million hectares in almost every country of Asia. It is the most important staple crop for food security and has high economic and cultural importance in Asian societies. The combination of genetic diversity and management options, coupled with the large geographic extent, means that there is a large variation in seasonality (when it is grown) and cropping intensity (how often it is grown per year on the same plot of land), even over relatively small distances. Seasonality and intensity can and do change over time depending on climatic, environmental and economic factors. Detecting where and when these changes happen can provide information to better understand trends in regional and even global rice production. Remote sensing offers a unique opportunity to estimate these trends. We apply the recently published PhenoRice algorithm to 14 years of moderate resolution remote sensing (MODIS) data (utilizing 250 m resolution 16-day composites from Terra and Aqua) to estimate seasonality and cropping intensity per year and their changes over time. We compare the results to survey data collected by the International Rice Research Institute (IRRI). The study results in a unique and validated dataset on rice extent, seasonality and cropping intensity, and the changes in each, between 2003 and 2016 for the Philippines. Observed trends and their implications for food security and trade policies are also discussed.
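
One building block of such an analysis, estimating per-pixel cropping intensity from a smoothed vegetation-index time series, can be sketched as peak counting; the synthetic EVI values, the peak-height threshold and the minimum peak spacing below are assumptions, not PhenoRice's actual rules.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical one-year series of 16-day composites (23 observations) for one pixel
doy = np.arange(0, 368, 16)
evi = (0.2 + 0.35 * np.exp(-((doy - 110) / 30.0) ** 2)
           + 0.40 * np.exp(-((doy - 280) / 35.0) ** 2))

# Each sufficiently high, well-separated peak is counted as one rice season
peaks, _ = find_peaks(evi, height=0.45, distance=4)   # >= 64 days between seasons
print("cropping intensity:", len(peaks), "seasons, around day-of-year", doy[peaks])
```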

Keywords: rice, cropping intensity, moderate resolution remote sensing (MODIS), phenology, seasonality

Procedia PDF Downloads 282
1288 Use of the Occupational Repetitive Action Method in Different Productive Sectors: A Literature Review 2007-2018

Authors: Aanh Eduardo Dimate-Garcia, Diana Carolina Rodriguez-Romero, Edna Yuliana Gonzalez Rincon, Diana Marcela Pardo Lopez, Yessica Garibello Cubillos

Abstract:

Musculoskeletal disorders (MD) are the new epidemic of chronic diseases; they are multifactorial and affect the different productive sectors. Although there are multiple instruments to evaluate static and dynamic load, the Occupational Repetitive Action (OCRA) method seems to be an attractive option. Objective: to analyze the use of the OCRA method and the prevalence of MD in workers of various productive sectors according to the literature (2007-2018). Materials and Methods: a literature review (following the PRISMA statement) of studies aimed at assessing the level of biomechanical risk (OCRA) and the prevalence of MD was carried out in the Scielo, Science Direct, Scopus, ProQuest, Gale, PubMed, Lilacs and Ebsco databases; 7 studies met the selection criteria, the majority of them quantitative (cross-sectional). Results: the review showed that, among gardeners and flower growers, 79% of task-related conditions impose physical demands and involve repetitive movements. In addition, a high occurrence of MD was reported in the upper and lower back and in the upper and lower extremities, produced by the frequency of the activities carried out (footwear production). Likewise, there was evidence of 'very high risk' of developing MD (salmon industry) and a medium OCRA index for repetitive movements that require special care (U-assembly line). Conclusions: the review showed the limited use of the OCRA method for the detection of MD in workers from different sectors, and this method can be used for the detection of biomechanical risk and the appearance of MD.

Keywords: checklist, cumulative trauma disorders, musculoskeletal diseases, repetitive movements

Procedia PDF Downloads 158
1287 Long-Term Durability of Roller-Compacted Concrete Pavement

Authors: Jun Hee Lee, Young Kyu Kim, Seong Jae Hong, Chamroeun Chhorn, Seung Woo Lee

Abstract:

Roller-compacted concrete pavement (RCCP), an environmentally friendly pavement whose load-carrying capacity benefits from both cement hydration and the aggregate interlock produced by roller compaction, demonstrates superb structural performance with relatively small water and cement contents. Even though excellent structural performance can be secured, roller-compacted concrete (RCC) needs to be investigated under environmental loading and for its long-term durability under critical conditions. In order to secure long-term durability, an appropriate internal air-void structure is required for this concrete. In this study, a method for improving the long-term durability of RCCP is suggested by analyzing the internal air-void structure and corresponding durability of RCC. The method involves measurements of air content, air voids, and air-spacing factors in RCC mixtures in which the type and dosage of air-entraining agent are varied. The tests are conducted according to the criteria in ASTM C 457, ASTM C 672, and KS F 2456. It was found that the freezing-thawing and scaling resistances of RCC without any chemical admixture were quite low. Interestingly, an improvement of freezing-thawing and scaling resistances was observed for RCC with an appropriate air-entraining (AE) agent content; the relative dynamic elastic modulus was found to be more than 80% for those mixtures. In the RCC mixtures with AE agent, a large amount of air was distributed within a range of 2% to 3%, and an air-void spacing factor between 200 and 300 μm (close to the 250 μm recommended by PCA) was secured. The long-term durability of RCC has a direct relationship with the air-void spacing factor, and thus it can only be secured by ensuring an adequate air-void spacing factor through the inclusion of an AE agent in the mixture.

Keywords: durability, RCCP, air spacing factor, surface scaling resistance test, freezing and thawing resistance test

Procedia PDF Downloads 235
1286 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures

Authors: Milad Abbasi

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their lightweight and specific energy absorption capabilities. Since it is impossible to predict composite mechanical properties directly using theoretical methods, various studies have been conducted in the literature on accurate simulation of the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing high-cost experimental efforts. The predicted results have been compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably condensed the numerical and experimental efforts in the simulation and calibration of CFRP composite tubes subjected to compressive loading.
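
A minimal sketch of a neural-network surrogate of this kind, trained on synthetic stand-ins for the tube inputs and the measured energy absorption; the input variables, network size and data are assumptions, not the paper's dataset or architecture.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Hypothetical inputs: wall thickness (mm), tube diameter (mm), ply angle (deg)
X = rng.uniform([1.0, 20.0, 0.0], [3.0, 60.0, 90.0], size=(300, 3))
energy = 5.0 * X[:, 0] + 0.1 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 0.3, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, energy, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out tubes:", round(r2_score(y_te, model.predict(X_te)), 3))
```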

Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network

Procedia PDF Downloads 125
1285 Suppressing Vibration in a Three-axis Flexible Satellite: An Approach with Composite Control

Authors: Jalal Eddine Benmansour, Khouane Boulanoir, Nacera Bekhadda, Elhassen Benfriha

Abstract:

This paper introduces a novel composite control approach that addresses the challenge of stabilizing the three-axis attitude of a flexible satellite in the presence of vibrations caused by flexible appendages. The key contribution of this research lies in the development of a disturbance observer, which effectively observes and estimates the unwanted torques induced by the vibrations. By utilizing the estimated disturbance, the proposed approach enables efficient compensation for the detrimental effects of vibrations on the satellite system. To govern the attitude angles of the spacecraft, a proportional derivative controller (PD) is specifically designed and proposed. The PD controller ensures precise control over all attitude angles, facilitating stable and accurate spacecraft maneuvering. In order to demonstrate the global stability of the system, the Lyapunov method, a well-established technique in control theory, is employed. Through rigorous analysis, the Lyapunov method verifies the convergence of system dynamics, providing strong evidence of system stability. To evaluate the performance and efficacy of the proposed control algorithm, extensive simulations are conducted. The simulation results validate the effectiveness of the combined approach, showcasing significant improvements in the stabilization and control of the satellite's attitude, even in the presence of disruptive vibrations from flexible appendages. This novel composite control approach presented in this paper contributes to the advancement of satellite attitude control techniques, offering a promising solution for achieving enhanced stability and precision in challenging operational environments.
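
A single-axis sketch of the composite idea, assuming hypothetical inertia, gains and disturbance: a PD attitude controller is combined with a first-order disturbance observer whose estimate is fed back for compensation.

```python
import numpy as np

J, Kp, Kd, L = 10.0, 4.0, 8.0, 5.0        # inertia, PD gains, observer gain (hypothetical)
dt, T = 0.01, 20.0
theta, omega, z = 0.2, 0.0, 0.0           # initial attitude error (rad), rate, observer state

for k in range(int(T / dt)):
    t = k * dt
    d = 0.05 * np.sin(2.0 * t)            # unknown vibration torque from the appendages
    d_hat = z + L * J * omega             # disturbance estimate
    u = -Kp * theta - Kd * omega - d_hat  # PD control plus disturbance compensation
    z += (-L * z - L * (L * J * omega + u)) * dt   # first-order observer update
    omega += (u + d) / J * dt             # rigid single-axis dynamics: J*omega_dot = u + d
    theta += omega * dt

print("final attitude error (rad):", round(theta, 5))
```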

Keywords: attitude control, flexible satellite, vibration control, disturbance observer

Procedia PDF Downloads 66
1284 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency

Authors: M. Mohan, J. P. Roise, G. P. Catts

Abstract:

Despite living in an era where conservation zones are de facto the central element in any sustainable wildlife management strategy, we still find ourselves grappling with several Pareto-optimal situations regarding resource allocation and area distribution for the same. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether or not we can establish mutual relationships between these contradicting objectives. For our study, we considered a Red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The Red-cockaded woodpecker (RCW) is a non-migratory territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a ‘cluster’, and for our model we use the number of clusters to be established as a measure of the size of the conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimises the Red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety and Net Present Value (NPV) of the forest. We studied the variation of the individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS and derive a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives, and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.

Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker

Procedia PDF Downloads 320
1283 Harmonic Assessment and Mitigation in Medical Diagnosis Equipment

Authors: S. S. Adamu, H. S. Muhammad, D. S. Shuaibu

Abstract:

Poor power quality in electrical power systems can cause medical equipment at healthcare centres to malfunction and present wrong medical diagnoses. Equipment such as X-ray machines, computerized axial tomography scanners, etc. can pollute the system due to their high level of harmonics production, which may cause a number of undesirable effects like heating, equipment damage and electromagnetic interference. The conventional approach to mitigation uses passive inductor/capacitor (LC) filters, which have some drawbacks such as large size, resonance problems and fixed compensation behaviour. Current solutions generally employ active power filters using suitable control algorithms. This work focuses on assessing the level of Total Harmonic Distortion (THD) in medical facilities and various ways of mitigating it, using the radiology unit of an existing hospital as a case study. The measurement of the harmonics is conducted with a power quality analyzer at the point of common coupling (PCC). The levels of measured THD are found to be higher than the IEEE 519-1992 standard limits. The system is then modelled as a harmonic current source using MATLAB/SIMULINK. To mitigate the unwanted harmonic currents, a shunt active filter is developed using a synchronous detection algorithm to extract the fundamental component of the source currents. A fuzzy logic controller is then developed to control the filter. The THD values without the active power filter are validated using the measured values. The THD values with the developed filter show that the harmonics are now within the recommended limits.
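
The assessment step can be illustrated by computing THD from the FFT of a sampled current waveform, as sketched below; the waveform (a 50 Hz fundamental plus 5th and 7th harmonics) and the sampling settings are hypothetical.

```python
import numpy as np

fs, f0 = 10_000, 50                      # sampling rate (Hz), fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)            # ten fundamental cycles
i_load = (10.0 * np.sin(2 * np.pi * f0 * t)
          + 2.0 * np.sin(2 * np.pi * 5 * f0 * t)
          + 1.0 * np.sin(2 * np.pi * 7 * f0 * t))

spectrum = np.abs(np.fft.rfft(i_load)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)
fund = spectrum[np.argmin(np.abs(freqs - f0))]
harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 41)]
thd = np.sqrt(np.sum(np.square(harmonics))) / fund
print(f"THD = {thd * 100:.1f} %")        # about 22.4 % for this waveform
```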

Keywords: power quality, total harmonics distortion, shunt active filters, fuzzy logic

Procedia PDF Downloads 459
1282 An EBSD Investigation of Ti-6Al-4Nb Alloy Processed by Plane Strain Compression Test

Authors: Anna Jastrzebska, K. S. Suresh, T. Kitashima, Y. Yamabe-Mitarai, Z. Pakiela

Abstract:

Near-α titanium alloys are important materials for aerospace applications, especially in high temperature applications such as jet engines. The mechanical properties of Ti alloys strongly depend on their processing route, so it is very important to understand how the microstructure changes with different processing. In our previous study, Nb was found to improve the oxidation resistance of Ti alloys. In this study, the microstructure evolution of a Ti-6Al-4Nb (wt%) alloy was investigated after plane strain compression tests at hot working temperatures in the α and β phase regions. High-resolution EBSD was successfully used for precise phase and texture characterization of this alloy. A 1.1 kg Ti-6Al-4Nb ingot was prepared using cold crucible levitation melting. The ingot was subsequently homogenized at 1050 deg. C for 1 h, followed by cooling in air. Plate-like specimens measuring 10×20×50 mm³ were cut from the ingot by electrical discharge machining (EDM). The plane strain compression tests, using an anvil 10×35 mm in size, were performed at three different strain rates (0.1 s⁻¹, 1 s⁻¹ and 10 s⁻¹) at 700 deg. C and 1050 deg. C to obtain 75% deformation. The microstructure was investigated by scanning electron microscopy (SEM) equipped with an electron backscatter diffraction (EBSD) detector. The α/β phase ratio and phase morphology, as well as the crystallographic texture, subgrain size, misorientation angles and misorientation gradients corresponding to each phase, were determined over the middle and the edge areas of the samples. The deformation mechanism at each working temperature is discussed. The evolution of texture changes with strain rate was investigated. The microstructure obtained by the plane strain compression tests was heterogeneous, with a wide range of grain sizes, because deformation and dynamic recrystallization occurred during deformation at temperatures in the α and β phase regions; this was strongly influenced by the strain rate.

Keywords: EBSD, plane strain compression test, Ti alloys

Procedia PDF Downloads 367
1281 Research on Level Adjusting Mechanism System of Large Space Environment Simulator

Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng

Abstract:

A space environment simulator is a device for spacecraft testing. The KM8 large space environment simulator built in Tianjin Space City is the largest as well as the most advanced space environment simulator in China. A large deviation of the spacecraft level will lead to abnormal operation of the thermal control devices in the spacecraft during the thermal vacuum test. In order to avoid thermal vacuum test failures, a level adjusting mechanism system was developed for the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model is established. By collecting data from level instruments and displacement sensors, stepping motors controlled by a PLC drive the four supporting legs in simultaneous movement. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which work for long periods under the cold, dark vacuum environment in the KM8 large space environment simulator during thermal vacuum tests. Based on the above methods, the data acquisition and processing, analysis and calculation, real-time adjustment and fault alarming of the level adjusting mechanism system are implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Debugging showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of the new generation of spacecraft. The performance and technical indicators of the level adjusting mechanism system, which provides important support for the development of spacecraft in China, are ahead of those of similar equipment in the world.

Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism

Procedia PDF Downloads 230
1280 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. These data come from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. They make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but are mostly in an unreadable format which needs to be processed to provide information and business intelligence. These data are not always current; they are mostly historical. The data are not subject to the consistency and redundancy measures that most other data usually are. Most important to the users is that the data be pre-processed into a readable format when they are entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers make use of various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One of the techniques generally used is to pull data from the database server, process it and push it back to the database server in one single step. Since the processing of the data usually takes some time, it keeps the database busy and locked for the period of time that the processing takes place. Because of this, it decreases the overall performance of the database server and therefore the system’s performance. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU performance, storage and processing time.
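
A minimal sketch of the three-step pull-process-push technique discussed above, using sqlite3 and an in-memory list as the staging structure; the table and column names are hypothetical and the single-step variant is omitted.

```python
import sqlite3

def pull_process_push(conn):
    # Step 1: pull the raw rows into an in-memory list and release the database quickly
    rows = conn.execute("SELECT id, raw_payload FROM tracker_data").fetchall()

    # Step 2: process in memory (decode the raw GPRS/GPS payload); the database stays free
    processed = [(payload.strip().upper(), row_id) for row_id, payload in rows]

    # Step 3: push the readable values back in one short write transaction
    with conn:
        conn.executemany("UPDATE tracker_data SET decoded = ? WHERE id = ?", processed)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracker_data (id INTEGER PRIMARY KEY, raw_payload TEXT, decoded TEXT)")
conn.executemany("INSERT INTO tracker_data (raw_payload) VALUES (?)",
                 [(" gps:12.9 ",), (" gps:13.1 ",)])
pull_process_push(conn)
print(conn.execute("SELECT decoded FROM tracker_data").fetchall())
```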

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 227
1279 Decision Support System Based On GIS and MCDM to Identify Land Suitability for Agriculture

Authors: Abdelkader Mendas

Abstract:

The integration of MultiCriteria Decision Making (MCDM) approaches in a Geographical Information System (GIS) provides a powerful spatial decision support system which offers the opportunity to efficiently produce land suitability maps for agriculture. Indeed, GIS is a powerful tool for analyzing spatial data and establishing a process for decision support. Because of their spatial aggregation functions, MCDM methods can facilitate decision making in situations where several solutions are available, various criteria have to be taken into account and decision-makers are in conflict. The parameters and the classification system used in this work are inspired by the FAO (Food and Agriculture Organization) approach dedicated to sustainable agriculture. A spatial decision support system has been developed for establishing the land suitability map for agriculture. It incorporates the multicriteria analysis method ELECTRE Tri (ELimination Et Choix Traduisant la REalité) within a GIS environment. The main purpose of this research is to propose a conceptual and methodological framework for the combination of GIS and multicriteria methods in a single coherent system that takes into account the whole process, from the acquisition of spatially referenced data to decision-making. In this context, a spatial decision support system for developing land suitability maps for agriculture has been developed. The algorithm of ELECTRE Tri is incorporated into a GIS environment and added to the other analysis functions of the GIS. This approach has been tested on an area in Algeria, and a land suitability map for durum wheat has been produced. Through the obtained results, it appears that the ELECTRE Tri method, integrated into a GIS, is well suited to the problem of land suitability for agriculture. The coherence of the obtained maps confirms the system's effectiveness.

Keywords: multicriteria decision analysis, decision support system, geographical information system, land suitability for agriculture

Procedia PDF Downloads 613
1278 Collapse Analysis of Planar Composite Frame under Impact Loads

Authors: Lian Song, Shao-Bo Kang, Bo Yang

Abstract:

Concrete-filled steel tubular (CFST) structures have been widely used in construction practice due to their superior performance under various loading conditions. However, limited studies are available on this type of structure subjected to impact or explosive loads. Current methods in the relevant design codes are not specific to preventing progressive collapse of CFST structures. Therefore, it is necessary to carry out numerical simulations of CFST structures under impact loads. In this study, finite element analyses are conducted on the mechanical behaviour of composite frames composed of CFST columns and steel beams subjected to impact loading. CFST columns are simulated using the finite element software ABAQUS. The model is verified against test results of solid and hollow CFST columns under lateral impacts, and reasonably good agreement is obtained through comparisons. Thereafter, a multi-scale finite element modelling technique is developed to evaluate the behaviour of a five-storey, three-span planar composite frame. The alternate path method and the direct simulation method are adopted to obtain the dynamic response of the frame when a supporting column is removed suddenly. In the former method, the cause of the column removal is not considered and only the remaining frame is simulated, whereas in the latter, a specific impact load is applied to the frame to take account of the column failure induced by vehicle impact. Comparisons are made between these two methods in terms of displacement history and internal force redistribution, and design recommendations are provided for the design of CFST structures under impact loads.

Keywords: planar composite frame, collapse analysis, impact loading, direct simulation method, alternate path method

Procedia PDF Downloads 500
1277 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory

Authors: Hiba El Assibi

Abstract:

This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with Alpha-Beta Pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project will explore additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights on performance and computational efficiency. We also discuss the scalability of our approach to the game, considering different board sizes (number of pits and stones) and rules (different variations) and studying how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
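
The core search routine, minimax with alpha-beta pruning, can be sketched generically as below; the Kalah move generation, extra-turn rule and evaluation function are not reproduced, and the toy game tree is only for demonstration.

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in moves:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:          # beta cut-off: the opponent will avoid this branch
                break
        return best
    best = float("inf")
    for child in moves:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:              # alpha cut-off
            break
    return best

# Toy game tree: an inner node is a list of children, a leaf is its payoff
toy = [[3, 5], [2, [9, 1]], [0, -1]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: 0 if isinstance(s, list) else s
print(alphabeta(toy, 3, float("-inf"), float("inf"), True, children, evaluate))   # 3
```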

Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory

Procedia PDF Downloads 33
1276 Land Use Change Detection Using Satellite Images for Najran City, Kingdom of Saudi Arabia (KSA)

Authors: Ismail Elkhrachy

Abstract:

Determination of land use change is an important component of regional planning, for applications ranging from urban fringe change detection to monitoring of land use change. These data are very useful for natural resource management. On the other hand, the technologies and methods of change detection have also evolved dramatically during the past 20 years, and it has been well recognized that change detection has become one of the best methods for studying the dynamic change of land use from multi-temporal remotely sensed data. The objective of this paper is to assess, evaluate and monitor land use change surrounding the area of Najran city, Kingdom of Saudi Arabia (KSA), using Landsat images from June 23, 2009 and an ETM+ image from June 21, 2014. The post-classification change detection technique was applied. Two subset images of Najran city are compared on a pixel-by-pixel basis using the post-classification comparison method, the from-to change matrix is produced, and the land use change information is obtained. Three classes were obtained (urban, bare land and agricultural land) from an unsupervised classification using Erdas Imagine and ArcGIS software. An accuracy assessment of the classification was performed before calculating the change detection for the study area; the obtained accuracy is between 61% and 87% for all the classes. Change detection analysis shows that between 2009 and 2014 the urban area increased by 73.2%, the agricultural area decreased by 10.5% and the barren area was reduced by 7%. The quantitative study indicated that for the urban class 58.2 km² remained unchanged, 70.3 km² was gained and 16 km² was lost. For the bare land class, 586.4 km² remained unchanged, 53.2 km² was gained and 101.5 km² was lost, while for the agricultural class, 20.2 km² remained unchanged, 31.2 km² was gained and 37.2 km² was lost.
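
The post-classification comparison step can be sketched as tallying a from-to change matrix from two classified rasters; the tiny example maps, the class coding and the pixel area below are hypothetical, not the Najran data.

```python
import numpy as np

classes = ["urban", "bare land", "agriculture"]
map_2009 = np.array([[0, 1, 1], [2, 2, 1], [1, 1, 0]])
map_2014 = np.array([[0, 0, 1], [2, 0, 1], [1, 2, 0]])

n = len(classes)
change = np.zeros((n, n), dtype=int)
for before, after in zip(map_2009.ravel(), map_2014.ravel()):
    change[before, after] += 1           # rows: 2009 class, columns: 2014 class

print(change)                            # diagonal = unchanged pixels
pixel_area_km2 = 0.0009                  # 30 m Landsat pixel (hypothetical)
print("urban gained (km^2):", (change[:, 0].sum() - change[0, 0]) * pixel_area_km2)
```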

Keywords: land use, remote sensing, change detection, satellite images, image classification

Procedia PDF Downloads 506
1275 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study for a range of closed-loop gain values, and the corresponding impact angle error and miss distance values are generated. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of the two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is discussed in this paper.

Keywords: Marv guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 409
1274 Clinical Parameters Response to Low Level Laser Versus Monochromatic Near Infrared Photo Energy in Diabetic Patient with Peripheral Neuropathy

Authors: Abeer Ahmed Abdehameed

Abstract:

Background: Diabetic sensorimotor polyneuropathy (DSP) is one of the most common microvascular complications of type 2 diabetes. Loss of sensation is thought to contribute to a lack of static and dynamic stability and an increased risk of falling. Purpose: The purpose of this study was to compare the effects of low level laser (LLL) and monochromatic near infrared photo energy (MIRE) on pain, cutaneous sensation, static stability and an index of lower limb blood flow in patients with diabetic peripheral neuropathy. Methods: Forty subjects with diabetic peripheral neuropathy were recruited for the study. They were divided into two groups: the MIRE group, which included 20 patients, and the LLL group, which included 20 patients. All patients in the study were subjected to various physical assessment procedures, including pain, cutaneous sensation, Doppler flow meter and static stability assessments. The baseline measurements were followed by treatment sessions conducted twice a week for 6 successive weeks. Results: The statistical analysis of the data revealed a significant improvement in pain in both groups, with significant improvement in cutaneous sensation and static balance in the MIRE group compared to the LLL group; on the other hand, the results showed no significant differences in lower limb blood flow in either group. Conclusion: Low level laser and monochromatic near infrared therapy can improve painful symptoms in patients with diabetic neuropathy. On the other hand, MIRE is also useful in improving cutaneous sensation and static stability in patients with diabetic neuropathy.

Keywords: diabetic neuropathy, doppler flow meter, low level laser, monochromatic near infrared photo energy

Procedia PDF Downloads 300
1273 Comparison of Various Landfill Ground Improvement Techniques for Redevelopment of Closed Landfills to Cater Transport Infrastructure

Authors: Michael D. Vinod, Hadi Khabbaz

Abstract:

Construction of infrastructure above or adjacent to landfills is becoming more common to capitalize on the limited space available within urban areas. However, development above landfills is a challenging task due to large voids, the presence of organic matter, heterogeneous nature of waste and ambiguity surrounding landfill settlement prediction. Prior to construction of infrastructure above landfills, ground improvement techniques are being employed to improve the geotechnical properties of landfill material. Although the ground improvement techniques have little impact on long term biodegradation and creep related landfill settlement, they have shown some notable short term success with a variety of techniques, including methods for verifying the level of effectiveness of ground improvement techniques. This paper provides geotechnical and landfill engineers a guideline for selection of landfill ground improvement techniques and their suitability to project-specific sites. Ground improvement methods assessed and compared in this paper include concrete injected columns (CIC), dynamic compaction, rapid impact compaction (RIC), preloading, high energy impact compaction (HEIC), vibro compaction, vibro replacement, chemical stabilization and the inclusion of geosynthetics such as geocells. For each ground improvement technique a summary of the existing theory, benefits, limitations, suitable modern ground improvement monitoring methods, the applicability of ground improvement techniques for landfills and supporting case studies are provided. The authors highlight the importance of implementing cost-effective monitoring techniques to allow observation and necessary remediation of the subsidence effects associated with long term landfill settlement. These ground improvement techniques are primarily for the purpose of construction above closed landfills to cater for transport infrastructure loading.

Keywords: closed landfills, ground improvement, monitoring, settlement, transport infrastructure

Procedia PDF Downloads 201
1272 Design and Development of a Lead-Free BiFeO₃-BaTiO₃ Quenched Ceramics for High Piezoelectric Strain Performance

Authors: Muhammad Habib, Lin Tang, Guoliang Xue, Attaur Rahman, Myong-Ho Kim, Soonil Lee, Xuefan Zhou, Yan Zhang, Dou Zhang

Abstract:

Designing high-performance, lead-free ceramics has become a cutting-edge research topic due to growing concerns about the toxic nature of lead-based materials. In this work, a convenient strategy of compositional design and domain engineering is applied to lead-free BiFeO₃-BaTiO₃ ceramics, which provides a flexible polarization-free-energy profile for domain switching. Here, a simultaneously enhanced dynamic piezoelectric constant (d33* = 772 pm/V) and good thermal stability (a d33* variation of 26% over the temperature range of 20-180 ᵒC) are achieved with a high Curie temperature (TC) of 432 ᵒC. This high piezoelectric strain performance is collectively attributed to multiple effects, such as thermal quenching, suppression of defect charges by donor doping, chemically induced local structural heterogeneity, and electric field-induced phase transition. Furthermore, the addition of BT content decreased octahedral tilting, reduced the anisotropy for domain switching and increased the tetragonality (cₜ/aₜ), providing a wider polar length for B-site cation displacement, leading to high piezoelectric strain performance. Atomic-resolution transmission electron microscopy and piezoelectric force microscopy, combined with X-ray diffraction results, strongly support the origin of the high piezoelectricity. The high and temperature-stable piezoelectric strain response of this work is superior to those of other lead-free ceramics. The synergistic approach of composition design and the concept presented here for the origin of the high strain response provide a paradigm for the development of materials for high-temperature piezoelectric actuator applications.

Keywords: piezoelectric, BiFeO₃-BaTiO₃, quenching, temperature-insensitive

Procedia PDF Downloads 57
1271 Development of a Geomechanical Risk Assessment Model for Underground Openings

Authors: Ali Mortazavi

Abstract:

The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed within the underground mining industry. Controlling geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of rock masses in situ, the complicated boundary conditions and the operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major design component of all underground mines and essentially dominates the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of the mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the in-situ rock mass conditions. The focus of this research is on developing a methodology which enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., in-situ rock properties, design method, governing boundary conditions such as in-situ stress and groundwater, etc.).

Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering

Procedia PDF Downloads 126
1270 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks

Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin

Abstract:

Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques used for mammogram analysis focus on a single view (mediolateral or craniocaudal view), while in clinical practice, radiologists consider multiple views of mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes mammographic images from three views (tri-view): bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images – a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of paying attention to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted using the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the approach can provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help to alleviate the burden on radiologists and serve as an additional layer of verification.
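
A minimal sketch of a tri-view model in this spirit: one shared convolutional encoder is applied to the main view, the ipsilateral view and the contralateral view, and the embeddings are fused for a single malignancy logit. The layer sizes and input resolution are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TriViewSiamese(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # weights shared across all three views
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim), nn.ReLU(),
        )
        self.head = nn.Linear(3 * embed_dim, 1)       # single malignancy logit

    def forward(self, main, ipsilateral, contralateral):
        feats = [self.encoder(v) for v in (main, ipsilateral, contralateral)]
        return self.head(torch.cat(feats, dim=1))

model = TriViewSiamese()
views = [torch.randn(2, 1, 128, 128) for _ in range(3)]   # batch of two grayscale mammograms
print(model(*views).shape)                                 # torch.Size([2, 1])
```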

Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network

Procedia PDF Downloads 116
1269 Symbolic Status of Architectural Identity: Example of Famagusta Walled City

Authors: Rafooneh Mokhtarshahi Sani

Abstract:

This study explores how the residents of a conserved urban area have used goods and ideas as resources to maintain an enviable architectural identity. Whereas conserved urban quarters are seen as role models for maintaining architectural identity, the article describes how their residents try to give a contemporary modern image to their homes. It is argued that, despite the efforts of authorities and decision makers to keep and preserve the traditional architectural identity in conserved urban areas, people have already moved on and have adjusted their homes to their preferred architectural taste. This conflict of interests has put the future of architectural identity in such places at risk. The thesis is that, on the one hand, such a struggle over a desirable symbolic status in identity formation is taking place, and, on the other, it is continuously widening the gap between the real and the ideal identity in the built environment. The study then analytically connects the concept of symbolic status to current identity debates. As empirical research, this study uses systematic social and physical observation methods to describe and categorize the characteristics of settlements in the Walled City of Famagusta which symbolically represent modern houses. The Walled City is a cultural heritage site, most of whose urban context has been conserved. Traditional houses in this area demonstrate the identity of North Cyprus architecture. The conserved residential buildings, however, have either been abandoned or have been changed by their users to present the ideal image of contemporary life. In the concluding section, the article discusses the differences between the symbolic status of people and authorities in defining a culturally valuable contemporary home. It also raises the question of whether we can talk at all about architectural identity in terms of conserving the traditional style, and how we may do so on the basis of the dynamic nature of identity and the necessity of its acceptance by the users.

Keywords: symbolic status, architectural identity, conservation, facades, Famagusta walled city

Procedia PDF Downloads 336
1268 Vibration Analysis of Stepped Nanoarches with Defects

Authors: Jaan Lellep, Shahid Mubasshar

Abstract:

A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen’s non-local theory of elasticity. The physical and thermal properties are sensitive to changes of dimensions at the nano level, and the classical theory of elasticity is unable to describe such changes in material properties, because molecular-scale effects were left out of consideration during its development. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, it is assumed that the stress state of the body at a given point depends on the stress state at every point of the structure, whereas within the classical theory of elasticity the stress state depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations with boundary and intermediate conditions. The system of equations is solved by using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations whose non-trivial solution exists if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is described with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and presented graphically.

Keywords: crack, nanoarches, natural frequency, step

Procedia PDF Downloads 114
1267 Strategic Alliances and Creative Synergy within European Union: A Theoretical Perspective

Authors: Maha Tichetti, Barzi Redouane, Selim Kanat

Abstract:

In the European Union (EU), where economic, political, and cultural ties converge, strategic alliances play a pivotal role in shaping the collaborative landscape. This paper embarks on a journey into the EuroSphere, offering a comprehensive analysis review that unravels the dynamics of these alliances within the European context. The focus is specifically directed towards understanding their profound impact on creative synergy and innovation among teams. In our analysis, we provide theoretical explanations for key terms such as "creative synergy" and "strategic alliances." We outline various types of competitive strategies, delve into the motivations prompting the formation of strategic alliances, and critically examine the success and failure factors in these kinds of collaboration. Additionally, we explore the goals achievable through strategic alliances, especially in the context of external growth. A central focus of this paper is how strategic alliances can significantly impact creative synergy within the European landscape. Through a theoretical lens, we explore the interplay between collaborative strategies and the enhancement of creative thinking within teams engaged in strategic alliances. The article goes beyond theoretical frameworks to present a tangible example of a strategic alliance emerging in the European market. This case study illuminates how such alliances have empowered European companies to enhance their competitive positions on the global stage while concurrently fostering creative synergy among their teams. This comprehensive review not only contributes to the theoretical understanding of strategic alliances and creative synergy but also offers practical insights for businesses navigating the collaborative landscape within the EuroSphere. As we unravel the complexities of these alliances, we uncover valuable lessons and opportunities for future research, providing a roadmap for those seeking to harness the full potential of strategic collaborations in the dynamic European context.

Keywords: European Union, strategic alliances, creative synergy, competitiveness

Procedia PDF Downloads 40
1266 Fatigue Life Prediction under Variable Loading Based on a Non-Linear Energy Model

Authors: Aid Abdelkrim

Abstract:

A method of fatigue damage accumulation based upon the application of energy parameters of the fatigue process is proposed in the paper. The model is simple to use: it has no parameters to be determined and requires only knowledge of the W–N curve (W: strain energy density; N: number of cycles at failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear models in the estimation of fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy has been studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, especially for the case where random loading is considered. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the ‘damaged stress interaction damage rule’ proposed here allows a better fatigue damage prediction than the widely used Palmgren–Miner rule, and that a formula derived for random fatigue can be used to predict the fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the fatigue damage model most widely used in fatigue (Miner’s model). The comparison shows that the proposed model presents a good estimation of the experimental results. Moreover, the error is minimized in comparison to Miner’s model.
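
For reference, the Palmgren–Miner baseline against which the proposed energy model is compared can be sketched as below; the power-law W–N curve and the load blocks are hypothetical, not the 6082-T6 data.

```python
import numpy as np

def cycles_to_failure(W, W_ref=2.0, N_ref=1e6, k=3.0):
    """Hypothetical power-law W-N curve: N = N_ref * (W_ref / W) ** k."""
    return N_ref * (W_ref / W) ** k

# Load blocks: (strain energy density per cycle, applied cycles) -- hypothetical values
blocks = [(2.5, 50_000), (3.5, 20_000), (1.8, 200_000)]

damage = sum(n / cycles_to_failure(W) for W, n in blocks)   # Miner: D = sum(n_i / N_i)
print("accumulated damage D =", round(damage, 3),
      "-> failure predicted" if damage >= 1 else "-> survives")
```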

Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading

Procedia PDF Downloads 379