Search results for: algorithm techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9867

7377 Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule

Authors: Ming-Jong Yao, Chin-Sum Shui, Chih-Han Wang

Abstract:

This paper is developed from a real-world decision scenario in which an industrial gas company applies the Vendor-Managed Inventory model and supplies liquid oxygen with a self-operated heterogeneous vehicle fleet to hospitals in nearby cities. We name this problem the Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule and formulate it as a non-linear mixed-integer programming problem that simultaneously determines the length of the planning cycle (PC), the length of the replenishment cycle, the dates of replenishment for each customer, and the vehicle routes of each day within the PC, such that the average daily operating cost within the PC, including inventory holding cost, setup cost, transportation cost, and overtime labor cost, is minimized. A solution method based on a genetic algorithm, embedded with an encoding and decoding mechanism and local search operators, is then proposed, and a hash function is adopted to avoid repeated fitness evaluation of identical solutions. Numerical experiments demonstrate that the proposed solution method can effectively solve the problem under different lengths of PC and numbers of customers. The method is also shown to be effective in determining whether the company should expand the storage capacity of a customer whose demand increases. Sensitivity analysis of the vehicle fleet composition shows that deploying a mixed fleet can reduce the daily operating cost.
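
As a concrete illustration of the duplicate-avoidance idea, the sketch below caches fitness values under a hash of the chromosome, so identical solutions are decoded and evaluated only once. It is a minimal sketch in Python; the chromosome layout and the stand-in fitness function are assumptions, not the paper's actual encoding.

```python
import hashlib

fitness_cache = {}

def evaluate(chromosome):
    # Hypothetical stand-in: the paper's method would decode the chromosome
    # into replenishment dates and vehicle routes and return the average
    # daily operating cost.
    return float(sum(chromosome))

def cached_fitness(chromosome):
    # Hash the chromosome (a list of small integers here) so identical
    # solutions skip the costly decode-and-evaluate step.
    key = hashlib.sha1(bytes(chromosome)).hexdigest()
    if key not in fitness_cache:
        fitness_cache[key] = evaluate(chromosome)
    return fitness_cache[key]

print(cached_fitness([3, 1, 2]))  # evaluated
print(cached_fitness([3, 1, 2]))  # served from the cache
```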

Keywords: cyclic inventory routing problem, joint replenishment, heterogeneous vehicle, genetic algorithm

Procedia PDF Downloads 87
7376 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection

Authors: Rubin Dan, Xingcai Wang, Ziyang Chen

Abstract:

A chaotic time series noise reduction method based on the fusion of the local projection method, the wavelet transform, and the particle swarm algorithm (referred to as the LW-PSO method) is proposed to address the problem of false recognition due to noise when recognizing chaotic time series containing noise. The method first uses phase space reconstruction to recover the original dynamical system characteristics and removes the noise subspace by selecting the neighborhood radius; it then uses the wavelet transform to remove the D1-D3 high-frequency components so as to maximize the retention of signal information, while least-squares optimization is performed by the particle swarm algorithm. A Lorenz system containing 30% Gaussian white noise is simulated for verification, and the phase space, SNR value, RMSE value, and K value of the 0-1 test method before and after noise reduction are compared and analyzed for the Schreiber method, the local projection method, the wavelet transform method, and the LW-PSO method, which proves that the LW-PSO method has a better noise reduction effect than the other three common methods. The method is also applied to a classical system to evaluate the noise reduction effect of the four methods and the original system identification effect, which further verifies the superiority of the LW-PSO method. Finally, it is applied to the Chengdu rainfall chaotic sequence, and the results prove that the LW-PSO method can effectively reduce the noise and improve the chaos recognition rate.
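
The D1-D3 removal step can be pictured with a few lines of Python; this is a minimal sketch assuming the PyWavelets package and a Daubechies-4 wavelet at five decomposition levels, both of which are our assumptions rather than the paper's stated settings.

```python
import numpy as np
import pywt

def remove_d1_d3(signal, wavelet="db4", level=5):
    # Decompose; coeffs = [cA_level, cD_level, ..., cD2, cD1], so the last
    # three entries are the finest high-frequency detail bands D3, D2, D1.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    for i in (1, 2, 3):
        coeffs[-i] = np.zeros_like(coeffs[-i])
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 8 * np.pi, 1024)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
cleaned = remove_d1_d3(noisy)[: t.size]
print(np.std(noisy - np.sin(t)), np.std(cleaned - np.sin(t)))  # noise shrinks
```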

Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising

Procedia PDF Downloads 199
7375 Dual-Actuated Vibration Isolation Technology for a Rotary System’s Position Control on a Vibrating Frame: Disturbance Rejection and Active Damping

Authors: Kamand Bagherian, Nariman Niknejad

Abstract:

A vibration isolation technology for precise position control of a rotary system powered by two permanent magnet DC (PMDC) motors is proposed, where the system is mounted on an oscillatory frame. To achieve vibration isolation for this system, an active damping and disturbance rejection (ADDR) technology is presented, which introduces the cooperation of a main and an auxiliary PMDC motor, controlled by discrete-time sliding mode control (DTSMC) based schemes. The controller of the main actuator tracks a desired position, and the auxiliary actuator simultaneously isolates the induced vibration, as its controller follows a torque trend. To determine this torque trend, a combination of two algorithms is introduced by the ADDR technology. The first torque-trend-producing algorithm rejects the disturbance by counteracting the perturbation, estimated using a model-based observer. The second torque trend applies active variable damping to minimize the oscillation of the output shaft. In practice, the presented technology is implemented on a rotary system with a pendulum attached, mounted on a linear actuator simulating an oscillation-transmitting structure. The obtained results illustrate the functionality of the proposed technology.
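
For readers unfamiliar with discrete-time sliding mode control, the fragment below sketches one common formulation (a Gao-style reaching law) for a tracking controller; the sliding surface, gains, and reaching law are generic textbook choices, not the controller design of this paper.

```python
import numpy as np

def dtsmc_step(x, x_ref, lam=5.0, eta=0.8, eps=0.05, dt=1e-3):
    # x = [position, velocity]; sliding variable s = e_dot + lam * e.
    e = x[0] - x_ref[0]
    e_dot = x[1] - x_ref[1]
    s = e_dot + lam * e
    # Discrete reaching law: s_{k+1} = (1 - eta*dt)*s_k - eps*dt*sign(s_k).
    s_next = (1.0 - eta * dt) * s - eps * dt * np.sign(s)
    # Return the commanded change in the sliding variable per step, which a
    # full design would map to a torque command through the plant model.
    return (s_next - s) / dt

print(dtsmc_step(x=[0.10, 0.0], x_ref=[0.0, 0.0]))
```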

Keywords: active damping, discrete-time nonlinear controller, disturbance tracking algorithm, oscillation transmitting support, position control, stability robustness, vibration isolation

Procedia PDF Downloads 104
7374 Removal of Textile Dye from Industrial Wastewater by Natural and Modified Diatomite

Authors: Hakim Aguedal, Abdelkader Iddou, Abdallah Aziz, Djillali Reda Merouani, Ferhat Bensaleh, Saleh Bensadek

Abstract:

The textile industry produces a large amount of colored effluent each year. The management or treatment of these discharges depends on the techniques applied. Adsorption is one of the wastewater treatment techniques intended to treat this kind of pollution, and its performance and efficiency depend predominantly on the nature of the adsorbent used. Therefore, scientific research is directed towards the development of new materials using different physical and chemical treatments to improve their adsorption capacities. In the same perspective, we examined the effect of heat treatment on the effectiveness of diatomite, which is found in abundance in Algeria. The textile dye Orange Bezaktiv (SRL-150), which is used as the organic pollutant in this study, was provided by the textile company SOITEXHAM in Oran city (western Algeria). The effect of different physicochemical parameters on the adsorption of SRL-150 on natural and modified diatomite was studied, and the kinetics and adsorption isotherm results were modeled.
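
Isotherm modeling of the kind mentioned above is commonly done by nonlinear least squares; the sketch below fits the classical Langmuir and Freundlich models with SciPy on invented equilibrium data (Ce, qe), since the paper's measurements are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # qe = KF * Ce^(1/n)
    return KF * Ce ** (1.0 / n)

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # hypothetical mg/L
qe = np.array([8.1, 13.2, 19.5, 25.0, 28.3])   # hypothetical mg/g

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[3.0, 2.0])
print(f"Langmuir: qmax={qmax:.1f}, KL={KL:.3f}; Freundlich: KF={KF:.2f}, n={n:.2f}")
```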

Keywords: wastewater treatment, diatomite, adsorption, dye pollution, kinetic, isotherm

Procedia PDF Downloads 280
7373 Penetration of Social Media in Primary Education to Nurture Learning Habits in Toddlers during Covid-19

Authors: Priyadarshini Kiran, Gulshan Kumar

Abstract:

Social media are becoming the most important tools for interaction among learners, pedagogues, and parents, where everybody can share, exchange, comment, discuss, and create information and knowledge in a collaborative way. The present case study attempts to highlight the role of social media (WhatsApp) in nurturing learning habits in toddlers with the help of parents in primary education. The case study is based on primary data collected from a primary school situated in a small town in the northern state of Uttar Pradesh, India. For the research methodology, a survey and structured interviews were used as tools, collected from parents and pedagogues. The findings suggest: (i) to nurture learning habits in toddlers, parents and pedagogues use the social media site WhatsApp in real time, which is convenient and handy; (ii) pedagogues enhance their skills by employing innovative teaching-learning techniques; and (iii) social media sites serve as a social connectivity tool to ward off negativity and monotony for parents and pedagogues in the wake of COVID-19.

Keywords: innovative teaching-learning techniques, pedagogues, social media, nurture, toddlers

Procedia PDF Downloads 175
7372 Role of Feedbacks in Simulation-Based Learning

Authors: Usman Ghani

Abstract:

Feedback is a vital element for improving student learning in simulation-based training, as it guides and refines learning through scaffolding. A number of studies in the literature have shown that students' learning is enhanced when feedback is provided with personalized tutoring that offers specific guidance and adapts feedback to the learner in a one-to-one environment. Thus, emulating these adaptive aspects of human tutoring in simulation provides an effective methodology to train individuals. This paper presents the results of a study that investigated the effectiveness of automating different types of feedback techniques, such as Knowledge-of-Correct-Response (KCR) and Answer-Until-Correct (AUC), in software simulation for learning basic information technology concepts. For the purpose of comparison, techniques like simulation with zero or no feedback (NFB) and traditional hands-on (HON) learning environments are also examined. The paper presents a summary of findings based on quantitative analyses, which reveal that the simulation-based instructional strategies are at least as effective as hands-on teaching methodologies for learning IT concepts. The paper also compares the results of the study with earlier studies and recommends strategies for using feedback mechanisms to improve students' learning in designing simulation-based IT training.
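
The two automated feedback styles compared in the study can be pictured as follows; the question content and prompts in this sketch are invented for illustration.

```python
def kcr_feedback(correct, answer):
    # Knowledge-of-Correct-Response: one attempt, then reveal the answer.
    if answer == correct:
        return "Correct."
    return f"Incorrect. The correct answer is: {correct}"

def auc_feedback(correct, get_answer, max_tries=3):
    # Answer-Until-Correct: keep prompting until right or tries run out.
    for attempt in range(1, max_tries + 1):
        if get_answer() == correct:
            return f"Correct on attempt {attempt}."
    return f"Out of tries. The correct answer is: {correct}"

attempts = iter(["switch", "hub"])
print(kcr_feedback("hub", "switch"))                 # KCR reveals immediately
print(auc_feedback("hub", lambda: next(attempts)))   # AUC succeeds on try 2
```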

Keywords: simulation, feedback, training, hands-on, labs

Procedia PDF Downloads 377
7371 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
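
A minimal rendering of the weighted smoothing-spline idea is given below; the iterative down-weighting rule and thresholds are illustrative assumptions, not the published filter.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def ground_mask(x, y, z, n_iter=5):
    # Fit a smoothing spline, then down-weight points sitting well above the
    # fitted surface (likely canopy returns) and refit; repeat a few times.
    w = np.ones_like(z)
    for _ in range(n_iter):
        surface = SmoothBivariateSpline(x, y, z, w=w)
        residual = z - surface.ev(x, y)
        sigma = residual.std()
        w = np.where(residual > 0.5 * sigma, 0.01, 1.0)
    return residual < 0.5 * sigma  # True for likely ground returns
```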

Keywords: airborne laser scanning, digital terrain models, filtering, forested areas

Procedia PDF Downloads 139
7370 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse field of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and the photogrammetry method have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for the construction of DTMs. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. A 3D point cloud is created with LiDAR technology by obtaining numerous point data. Recently, however, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation methods. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25, and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud data sets to the 50% density level while still maintaining the quality of the DTM.
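
The reduction step itself is simple to express; below is a minimal Python sketch of random subsampling and an RMSE comparison between DTM grids, with variable names of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_reduce(points, fraction):
    # points: N x 3 array of (x, y, z); keep a random subset without replacement.
    idx = rng.choice(len(points), size=int(len(points) * fraction), replace=False)
    return points[idx]

def rmse(dtm_ref, dtm_test):
    # Elevation RMSE between two DTM grids of identical shape.
    return float(np.sqrt(np.mean((dtm_ref - dtm_test) ** 2)))

# Hypothetical usage mirroring the study's 75/50/25/5% levels:
# for f in (0.75, 0.50, 0.25, 0.05):
#     subset = random_reduce(cloud, f)   # then grid by kriging and call rmse()
```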

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 156
7369 Optimizing Sustainable Graphene Production: Extraction of Graphite from Spent Primary and Secondary Batteries for Advanced Material Synthesis

Authors: Pratima Kumari, Sukha Ranjan Samadder

Abstract:

This research aims to contribute to the sustainable production of graphene materials by exploring the extraction of graphite from spent primary and secondary batteries. The increasing demand for graphene materials, a versatile and high-performance material, necessitates environmentally friendly methods for its synthesis. The process involves a well-planned methodology, beginning with the gathering and categorization of batteries, followed by the disassembly and careful removal of graphite from anode structures. The use of environmentally friendly solvents and mechanical techniques ensures an efficient and eco-friendly extraction of graphite. Advanced approaches such as the modified Hummers' method and chemical reduction process are utilized for the synthesis of graphene materials, with a focus on optimizing parameters. Various analytical techniques such as Fourier-transform infrared spectroscopy, X-ray diffraction, scanning electron microscopy, thermogravimetric analysis, and Raman spectroscopy were employed to validate the quality and structure of the produced graphene materials. The major findings of this study reveal the successful implementation of the methodology, leading to the production of high-quality graphene materials suitable for advanced material applications. Thorough characterization using various advanced techniques validates the structural integrity and purity of the graphene. The economic viability of the process is demonstrated through a comprehensive economic analysis, highlighting the potential for large-scale production. This research contributes to the field of sustainable production of graphene materials by offering a systematic methodology that efficiently transforms spent batteries into valuable graphene resources. Furthermore, the findings not only showcase the potential for upcycling electronic waste but also address the pressing need for environmentally conscious processes in advanced material synthesis.

Keywords: spent primary batteries, spent secondary batteries, graphite extraction, advanced material synthesis, circular economy approach

Procedia PDF Downloads 54
7368 Factors Affecting the Results of in vitro Gas Production Technique

Authors: O. Kahraman, M. S. Alatas, O. B. Citil

Abstract:

In determining the value of feeds used in ruminant nutrition, different methods are applied, such as in vivo, in vitro, in situ, or in sacco. Generally, the most reliable results are obtained from in vivo studies. But because of disadvantages such as being hard, laborious, and expensive, time-consuming, making it difficult to keep the experimental conditions under control, and requiring too many samples, in vitro techniques are preferred. The most widely used in vitro techniques are the two-stage digestion technique and the gas production technique. The in vitro gas production technique is based on the measurement of the CO2 released as a result of microbial fermentation of the feeds. In this review, the factors affecting the results obtained from the in vitro gas production technique (Hohenheim Feed Test) are discussed. Some factors must be taken into consideration when interpreting the findings obtained in these studies and when comparing the findings reported by different researchers for the same feeds. These factors are discussed in 3 groups: factors related to the animal, factors related to the feeds, and factors related to differences in the application of the method. These factors and their effects on the results are explained. It can be concluded that the routine use of the in vitro gas production technique in feed evaluation can contribute to comprehensive feed evaluation, but standardization is needed in this technique to attain more reliable results.

Keywords: in vitro, gas production technique, Hohenheim feed test, standardization

Procedia PDF Downloads 599
7367 A New Approach in a Problem of a Supersonic Panel Flutter

Authors: M. V. Belubekyan, S. R. Martirosyan

Abstract:

On the example of an elastic rectangular plate streamlined by a supersonic gas flow, we have investigated the phenomena of divergence and of panel flutter under the overrunning of the gas flow at a free edge, under the assumption of the presence of concentrated inertial masses and moments at the free edge. We applied a new approach to finding the solution of these problems, developed on the basis of an algorithm for finding an analytical solution. This algorithm is easy to use in theoretical studies for a wide circle of nonconservative problems of linear elastic stability. We have established the relation between the characteristics of the natural vibrations of the plate and the velocity of the streamlining gas flow, which enables one to draw conclusions on the stability of the disturbed motion of the plate depending on the parameters of the plate-flow system. The solution shows that either divergence, or localized divergence, or flutter instability is possible. The regions of stability and instability in the space of the parameters of the problem are identified. We have investigated the dynamic behavior of the disturbed motion of the panel near the boundaries of the region of stability. The safe and dangerous boundaries of the region of stability are found. A transition through a safe boundary of the region of stability leads to divergence or localized divergence arising in the vicinity of the free edge of the rectangular plate. A transition through a dangerous boundary of the region of stability leads to panel flutter. The deformations arising at flutter are the more dangerous to the skin of modern aircraft and rockets, resulting in a loss of strength and the appearance of fatigue cracks.

Keywords: stability, elastic plate, divergence, localized divergence, supersonic panel flutter

Procedia PDF Downloads 461
7366 Formulation and Evaluation of Silver Nanoparticles as Drug Carrier for Cancer Therapy

Authors: Abdelhadi Adam Salih Denei

Abstract:

Silver nanoparticles (AgNPs) have been used in cancer therapy, and the area of nanomedicine has made unheard-of strides in recent years. A thorough summary of the development and assessment of AgNPs for their possible use in the fight against cancer is the goal of this review. Targeted delivery methods have been designed to optimise therapeutic efficacy by using AgNPs' distinct physicochemical features, such as their size, shape, and surface chemistry. Firstly, the study provides an overview of the several synthesis routes, both chemical and green, that are used to create AgNPs. Natural extracts and biomolecules are used in green synthesis techniques, which are becoming more and more popular since they are biocompatible and environmentally benign. Next, it is described how synthesis factors affect the physicochemical properties of AgNPs, emphasising how crucial it is to modify these parameters for particular therapeutic uses. An extensive analysis is conducted on the anticancer potential of AgNPs, emphasising their capacity to trigger apoptosis, impede angiogenesis, and alter cellular signalling pathways. The review also investigates the potential benefits of combining AgNPs with currently used cancer treatment techniques, including radiation and chemotherapy. AgNPs' safety profile for use in clinical settings is clarified by a comprehensive evaluation of their cytotoxicity and biocompatibility.

Keywords: silver nanoparticles, cancer, nanocarrier system, targeted delivery

Procedia PDF Downloads 66
7365 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar

Authors: Shaolin Allen Liao, Hual-Te Chien

Abstract:

Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) developed recently by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, as compared to the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and the capture of its Doppler signature. The working principle of the THz-MIDR is similar to that of the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator, while the other split beam is reflected by a reflection mirror; finally, both the modulated reference beam and the reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for 3 Frequency-Banded (FB) signals: 1) the DC Low-Frequency-Banded (LFB) signal; 2) the Fundamental Frequency-Banded (FFB) signal; and 3) the Harmonic Frequency-Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of 2 or all 3 of these FB signals, together with efficient algorithms such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulations have also been performed in Matlab with simulated THz-MIDR interferometric signals of various Signal-to-Noise Ratios (SNR) to verify the algorithms.
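
To make the fitting step concrete, the sketch below recovers a phase from a simulated single-detector intensity signal with SciPy's Levenberg-Marquardt solver. The simple two-beam cosine model and the modulation frequency are our illustrative assumptions; the paper's banded LFB/FFB/HFB formulas are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def intensity_model(params, t, omega_m):
    # Hypothetical interferometric intensity: offset plus a fringe at the
    # mirror modulation frequency, with phi the phase to be extracted.
    a0, a1, phi = params
    return a0 + a1 * np.cos(omega_m * t + phi)

def residuals(params, t, y, omega_m):
    return intensity_model(params, t, omega_m) - y

omega_m = 2 * np.pi * 10e3          # assumed 10 kHz modulation
t = np.linspace(0.0, 1e-3, 500)
rng = np.random.default_rng(0)
y = intensity_model([1.0, 0.5, 0.7], t, omega_m)
y += 0.05 * rng.standard_normal(t.size)  # finite SNR, as in the simulations

fit = least_squares(residuals, x0=[0.8, 0.4, 0.1], args=(t, y, omega_m), method="lm")
print(f"recovered phase: {fit.x[2]:.3f} rad (true 0.700)")
```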

Keywords: algorithm, modulation, THz phase, THz interferometry doppler radar

Procedia PDF Downloads 345
7364 The Optimal Irrigation in the Mitidja Plain

Authors: Gherbi Khadidja

Abstract:

In the Mediterranean region, water resources are limited and very unevenly distributed in space and time. The main objective of this project is the development of a wireless network for the management of water resources in northern Algeria, in the Mitidja plain, which helps farmers irrigate in the most optimized way and solves the problem of water shortage in the region. We therefore develop an aid tool that can modernize and replace some traditional techniques, according to the real needs of the crops and according to the soil and climatic conditions (soil moisture, precipitation, characteristics of the unsaturated zone). These data are collected in real time by sensors, analyzed by an algorithm, and displayed on a mobile application and a website. The results are essential information and alerts, with recommendations for action, delivered to farmers to ensure the sustainability of the agricultural sector under water shortage conditions. In the first part, we set up a wireless sensor network for the precise management of water resources, using equipment that measures the water content of the soil, such as the Watermark probe, connected through an acquisition card to an Arduino Uno, which collects the captured data and transmits it via a GSM module to a website, where it is stored in a database for later study. In the second part, we display the results on a website or a mobile application using the database, to remotely manage our smart irrigation system. This allows the farmer to use this technology and offers growers the possibility of remotely accessing, via wireless communication, the field conditions and the irrigation operation, whether at home or at the office. The tool to be developed will also be based on satellite imagery as regards land use and soil moisture. These tools will make it possible to follow the evolution of crop needs over time and to predict the impact on water resources. According to the references consulted, if such a tool is used, it can reduce irrigation volumes by up to 40%, which represents more than 100 million m3 of savings per year for the Mitidja. This volume is equivalent to a medium-sized dam.
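
The decision logic at the heart of such a tool can be as simple as comparing sensed soil water tension against thresholds; the sketch below is a server-side illustration in Python, and the centibar thresholds are assumptions for illustration, not calibrated values from the project.

```python
# Watermark-style probes report soil water tension in centibars (cb):
# low values mean wet soil, high values mean dry soil.
FIELD_CAPACITY_CB = 10   # assumed "wet enough" threshold
REFILL_POINT_CB = 40     # assumed "too dry" threshold

def irrigation_advice(tension_cb, rain_forecast_mm):
    if tension_cb <= FIELD_CAPACITY_CB:
        return "Soil near field capacity: skip irrigation."
    if tension_cb >= REFILL_POINT_CB and rain_forecast_mm < 5:
        return "Soil dry and little rain expected: irrigate now."
    return "Intermediate moisture: keep monitoring."

print(irrigation_advice(tension_cb=45, rain_forecast_mm=1))
```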

Keywords: optimal irrigation, soil moisture, smart irrigation, water management

Procedia PDF Downloads 109
7363 Efficient Fuzzy Classified Cryptographic Model for Intelligent Encryption Technique towards E-Banking XML Transactions

Authors: Maher Aburrous, Adel Khelifi, Manar Abu Talib

Abstract:

Transactions performed by financial institutions on a daily basis require XML encryption on a large scale. Fully encrypting a large volume of messages will result in both performance and resource issues. In this paper, a novel approach is presented for securing financial XML transactions using classification data mining (DM) algorithms. Our strategy defines the complete process of classifying XML transactions by using a set of classification algorithms, with the classified XML documents processed at a later stage using element-wise encryption. Classification algorithms were used to identify the XML transaction rules and factors in order to classify the message content and fetch the important elements within it. We have implemented four classification algorithms to fetch the importance level value within each XML document. Classified content is processed using element-wise encryption for the selected parts with "High", "Medium", or "Low" importance level values. Element-wise encryption is performed using the AES symmetric encryption algorithm and a proposed modified AES algorithm to overcome the problem of computational overhead, in which the substitute-byte and shift-row steps remain as in the original AES, while the mix-column operation is replaced by a 128-permutation operation followed by the add-round-key operation. An implementation has been conducted using a data set fetched from an e-banking service to present the system's functionality and efficiency. Results from our implementation showed a clear improvement in processing time when encrypting XML documents.
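
A minimal sketch of the element-wise idea is shown below: only XML elements whose importance level is high enough get encrypted. It uses standard AES-GCM from the Python `cryptography` package; the importance attribute, the sample document, and the use of unmodified AES (rather than the paper's modified round structure) are all assumptions of this sketch.

```python
import base64
import os
import xml.etree.ElementTree as ET
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

def encrypt_important(root, levels=("High", "Medium")):
    # Encrypt the text of elements classified at the given importance levels.
    for elem in root.iter():
        if elem.get("importance") in levels and elem.text:
            nonce = os.urandom(12)
            ciphertext = aesgcm.encrypt(nonce, elem.text.encode(), None)
            elem.text = base64.b64encode(nonce + ciphertext).decode()

doc = ET.fromstring(
    '<tx><amount importance="High">5000</amount>'
    '<memo importance="Low">lunch</memo></tx>'
)
encrypt_important(doc)
print(ET.tostring(doc).decode())  # amount is ciphertext, memo stays readable
```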

Keywords: XML transaction, encryption, Advanced Encryption Standard (AES), XML classification, e-banking security, fuzzy classification, cryptography, intelligent encryption

Procedia PDF Downloads 411
7362 A Comparative Study of Natural Language Processing Models for Detecting Obfuscated Text

Authors: Rubén Valcarce-Álvarez, Francisco Jáñez-Martino, Rocío Alaiz-Rodríguez

Abstract:

Cybersecurity challenges, including scams, drug sales, the distribution of child sexual abuse material, fake news, and hate speech on both the surface and deep web, have significantly increased over the past decade. Users who post such content often employ strategies to evade detection by automated filters. Among these tactics, text obfuscation plays an essential role in deceiving detection systems. This approach involves modifying words to make them more difficult for automated systems to interpret while remaining sufficiently readable for human users. In this work, we aim to spot obfuscated words and the techniques employed, such as leetspeak, word inversion, punctuation changes, and mixed techniques. We benchmark Named Entity Recognition (NER) using models from the BERT family as well as two large language models (LLMs), Llama and Mistral, on the XX_NER_WordCamouflage dataset. Our experiments evaluate these models by comparing their precision, recall, F1 scores, and accuracy, both overall and for each individual class.
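
As a flavor of what such obfuscation looks like computationally, here is a toy leetspeak normalizer and a cheap pre-filter for suspicious tokens; the character map is an illustrative assumption and is far simpler than the camouflage families the benchmarked models must handle.

```python
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize_leet(token: str) -> str:
    return token.lower().translate(LEET_MAP)

def looks_obfuscated(token: str) -> bool:
    # Flag tokens that change under normalization and still contain letters:
    # a cheap screen before running a full NER model over the text.
    return normalize_leet(token) != token.lower() and any(c.isalpha() for c in token)

print(looks_obfuscated("v1agr4"))  # True: normalizes to "viagra"
print(looks_obfuscated("2024"))    # False: no letters at all
```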

Keywords: natural language processing (NLP), text obfuscation, named entity recognition (NER), deep learning

Procedia PDF Downloads 2
7361 The Effect of Ethnomathematics on School Mathematics in Kano State Junior Secondary Schools

Authors: Surajo Isa

Abstract:

Inasmuch as mathematics is important to national development, it is regrettable to note that in Nigeria students' academic achievement, especially in public examinations, remains poor. Among the several factors responsible for such poor performance is the failure to bring cultural elements into conventional school mathematics. The design of this study is triangulation in nature, set to examine 800 students from 20 schools (40 each from male and female schools). Ten (10) male and ten (10) female schools, consisting of 400 male and 400 female students, formed the experimental and control groups, with a further sub-grouping of samples to represent urban and rural settings for both male and female groups. While the experimental groups were taught using ethnomathematics techniques, the control groups were taught using conventional techniques. The results of a t-test for independent samples at the p = 0.05 level of significance with t-critical = 1.968 showed that (a) boys performed significantly better than girls; (b) there is no significant difference in performance between urban and rural girls; (c) a significant difference in academic performance was obtained between urban and rural boys. Generally, it was observed that teaching mathematics with ethnomathematics techniques would lead to greater achievement in mathematics.
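
The reported comparison corresponds to a standard independent-samples t-test; the snippet below reproduces the mechanics on invented score samples (the study's raw scores are not available here).

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
experimental = rng.normal(62, 10, 400)  # hypothetical ethnomathematics group
control = rng.normal(55, 10, 400)       # hypothetical conventional group

t_stat, p_value = ttest_ind(experimental, control)
# With df = 798, the two-tailed critical value at p = 0.05 is about 1.96,
# matching the t-critical = 1.968 quoted in the abstract.
print(f"t = {t_stat:.2f}, p = {p_value:.2e}, significant: {p_value < 0.05}")
```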

Keywords: ethnomathematics, achievement, gender, settlement

Procedia PDF Downloads 223
7360 Study of Behavior Tribological Cutting Tools Based on Coating

Authors: A. Achour, L. Chekour, A. Mekroud

Abstract:

Tribology, the science of lubrication, friction, and wear, plays an important "crossroads" role in science, initiated by recent developments in industry. Its multidisciplinary nature reinforces its scientific interest. It covers all the sciences that deal with the contact between two loaded solids in relative motion. It is thus one of the many intersections of more clearly established disciplines, such as solid and fluid mechanics, rheology, thermal science, materials science, and chemistry. As for its experimental approach, it is based on physics and the processing of signals and images. The optimization of the operating conditions of a cutting tool must contribute significantly to the development and productivity of advanced automated machining techniques, because their implementation requires sufficient knowledge of how the process behaves, in particular the evolution of tool wear. In addition, technological advances have developed the use of very hard, refractory, difficult-to-machine materials, requiring highly resistant tool materials. In this study, we present the wear behavior of a machining tool during the roughing operation according to the cutting parameters. The interpretation of the experimental results is based mainly on observations and analyses of the tool's cutting edges using the latest techniques: scanning electron microscopy (SEM) and optical laser-beam rugosimetry.

Keywords: friction, wear, tool, cutting

Procedia PDF Downloads 331
7359 Reconstruction Post-mastectomy: A Literature Review on Its Indications and Techniques

Authors: Layaly Ayoub, Mariana Ribeiro

Abstract:

Introduction: Breast cancer is currently considered the leading cause of cancer-related deaths among women in Brazil. Mastectomy, essential in this treatment, often necessitates subsequent breast reconstruction to restore physical appearance and aid in the emotional and psychological recovery of patients. The choice between immediate or delayed reconstruction is influenced by factors such as the type and stage of cancer, as well as the patient's overall health. The decision between autologous breast reconstruction or implant-based reconstruction requires a detailed analysis of individual conditions and needs. Objectives: This study analyzes the techniques and indications used in post-mastectomy breast reconstruction. Methodology: Literature review conducted in the PubMed and SciELO databases, focusing on articles that met the inclusion and exclusion criteria and descriptors. Results: After mastectomy, breast reconstruction is commonly performed. It is necessary to determine the type of technique to be used in each case depending on the specific characteristics of each patient. The tissue expander technique is indicated for patients with sufficient skin and tissue post-mastectomy, who do not require additional radiotherapy, and who opt for a less complex surgery with a shorter recovery time. This procedure promotes the gradual expansion of soft tissues where the definitive implant will be placed. Both temporary and permanent expanders offer flexibility, allowing for adjustment in the expander size until the desired volume is reached, enabling the skin and tissues to adapt to the breast implant area. Conversely, autologous reconstruction is indicated for patients who will undergo radiotherapy, have insufficient tissue, and prefer a more natural solution. This technique uses the transverse rectus abdominis muscle (TRAM) flap, the latissimus dorsi muscle flap, the gluteal flap, and local muscle flaps to shape a new breast, potentially combined with a breast implant. Conclusion: In this context, it is essential to conduct a thorough evaluation regarding the technique to be applied, as both have their benefits and challenges.

Keywords: indications, post-mastectomy, breast reconstruction, techniques

Procedia PDF Downloads 29
7358 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing, which consists of mixing, component preparation, building, and curing, the mixing process is an essential and important step, because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role as required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required may differ between operations due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this kind of feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed-integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate that evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend our current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
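
To illustrate the encoding idea, the sketch below runs a global-best PSO over random keys for a deliberately simplified version of the problem: one machine, random processing times, and sequence-dependent setups. The paper's particle additionally carries machine-allocation information; everything numeric here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_jobs = 6
proc = rng.uniform(2.0, 8.0, n_jobs)              # processing time per job
setup = rng.uniform(0.5, 2.0, (n_jobs, n_jobs))   # sequence-dependent setups

def decode(keys):
    # Random-key encoding: sorting the continuous keys yields a job sequence.
    return np.argsort(keys)

def makespan(keys):
    total, prev = 0.0, None
    for j in decode(keys):
        total += (setup[prev, j] if prev is not None else 0.0) + proc[j]
        prev = j
    return total

n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
X = rng.random((n_particles, n_jobs))
V = np.zeros_like(X)
P = X.copy()                                       # personal bests
Pf = np.array([makespan(x) for x in X])
g = P[Pf.argmin()].copy()                          # global best
for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
    X = X + V
    F = np.array([makespan(x) for x in X])
    improved = F < Pf
    P[improved], Pf[improved] = X[improved], F[improved]
    g = P[Pf.argmin()].copy()
print("best sequence:", decode(g), "makespan:", round(Pf.min(), 2))
```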

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 265
7357 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and oil-immersed transformers are widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults in transformer oil in order to facilitate prompt maintenance, cost reduction, and error minimization. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique was employed, using gradient descent algorithms and a Support Vector Machine (SVM), to predict incipient fault diagnoses of transformers. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach is basically in two phases: a training and a testing phase. The gradient descent algorithm is trained with a training dataset, while the learned algorithm is applied to a set of new data. These two datasets are used to prove the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent algorithms has been presented, with satisfactory diagnostic capability and a higher success percentage in predicting incipient transformer faults than existing diagnostic methods.
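
The core training loop can be written in a few lines: a linear SVM fitted by sub-gradient descent on the regularized hinge loss. The features below are random stand-ins for dissolved-gas measurements; the loss form and learning rates are generic choices, not the paper's exact setup.

```python
import numpy as np

def svm_gradient_descent(X, y, lam=0.01, lr=0.1, epochs=200):
    # Minimize lam*||w||^2 + mean(max(0, 1 - y*(Xw + b))), with y in {-1, +1}.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                      # margin-violating samples
        grad_w = 2 * lam * w - (y[active] @ X[active]) / n
        grad_b = -np.sum(y[active]) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                     # stand-in gas features
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)  # stand-in fault labels
w, b = svm_gradient_descent(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```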

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 323
7356 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving, creating an impact on our everyday lives and on the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits to using web-based methods to provide healthcare help. Nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform for users to receive self-care help and information using wearable sensors; in addition, it offers researchers developing a similar project a solid foundation as a reference. This study provides descriptions and analyses of the software and hardware architecture. It exhibits and explains a dynamic and efficient heart rate algorithm that continuously calculates the desired sensor values, and it presents diagrams that illustrate the website deployment process and the web server's means of handling the sensor data. The goal is to create a working project using Arduino-compatible hardware. Heart rate sensors send their data values to an online platform. A microcontroller board uses an algorithm to calculate the heart rate values and outputs them to a web server. The platform visualizes the sensor data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication between the hardware and software. The web server displays the conveyed heart rate sensor data on the online platform, presenting observations and evaluations.
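
One way the continuous heart rate computation can work is a rolling mean over recent inter-beat intervals; the sketch below shows that logic server-side in Python, with the window size and alert bounds as assumptions (the paper's exact algorithm is not reproduced).

```python
from collections import deque

class HeartRateMonitor:
    def __init__(self, window=8, low=50.0, high=120.0):
        self.intervals = deque(maxlen=window)  # last few inter-beat intervals
        self.last_beat = None
        self.low, self.high = low, high

    def on_beat(self, t_seconds):
        # Called once per detected pulse; returns (bpm, alert) or (None, False).
        if self.last_beat is not None:
            self.intervals.append(t_seconds - self.last_beat)
        self.last_beat = t_seconds
        if not self.intervals:
            return None, False
        bpm = 60.0 / (sum(self.intervals) / len(self.intervals))
        return bpm, (bpm < self.low or bpm > self.high)

monitor = HeartRateMonitor()
for t in (0.0, 0.8, 1.6, 2.4):       # a beat every 0.8 s is 75 BPM
    bpm, alert = monitor.on_beat(t)
print(f"BPM: {bpm:.0f}, alert: {alert}")
```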

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 126
7355 Diagnosis of Choledocholithiasis with Endosonography

Authors: A. Kachmazova, A. Shadiev, Y. Teterin, P. Yartcev

Abstract:

Introduction: Biliary calculi disease (LCS) still occupies the leading position among urgent diseases of the abdominal cavity, manifesting itself in forms ranging from an asymptomatic course to life-threatening states. Nowadays, the arsenal of diagnostic methods for choledocholithiasis is quite wide: ultrasound, hepatobiliary scintigraphy (HBSG), magnetic resonance imaging (MRI), and endoscopic retrograde cholangiography (ERCP). Among them, transabdominal ultrasound (TA ultrasound) is the most accessible and routine diagnostic method. Nowadays, ERCP is the "gold" standard in the diagnosis and one-stage treatment of biliary tract obstruction. However, transpapillary techniques are accompanied by serious postoperative complications (post-manipulation pancreatitis (3-5%), endoscopic papillosphincterotomy bleeding (2%), cholangitis (1%)), with a lethality of 0.4%. HBSG and MRI are also quite informative methods in the diagnosis of choledocholithiasis. The small size of concrements and their localization in the intrapancreatic and retroduodenal parts of the common bile duct significantly reduce the informativeness of all the diagnostic methods described above, which demands further study of this problem. Materials and Methods: 890 patients with a diagnosis of cholelithiasis (calculous cholecystitis) were admitted to the Sklifosovsky Scientific Research Institute of Hospital Medicine in the period from August 2020 to June 2021, among them 115 people with mechanical jaundice caused by concrements in the bile ducts. Results: A final EUS diagnosis was made in all patients (100.0%). In all patients in whom the diagnosis of choledocholithiasis was revealed or confirmed after EUS, ERCP was performed urgently (within two days from the moment of detection) once the X-ray operating room was available; it confirmed the presence of concrements. All stones were removed by lithoextraction using a Dormia basket. The postoperative period in these patients had no complications. Conclusions: EUS is the most informative and safe diagnostic method, allowing choledocholithiasis to be detected in the shortest time in patients with discrepancies between clinical-laboratory and instrumental diagnostic methods, which in turn helps to decide promptly on further patient treatment tactics. We consider it reasonable to include EUS in the diagnostic algorithm for choledocholithiasis. Disclosure: Nothing to disclose.

Keywords: endoscopic ultrasonography, choledocholithiasis, common bile duct, concrement, ERCP

Procedia PDF Downloads 85
7354 Scheduling Method for Electric Heater in HEMS considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

The Home Energy Management System (HEMS), which enables residential consumers to contribute to demand response, has been attracting attention in recent years. An aim of HEMS is to minimize consumers' electricity cost by controlling the use of their appliances according to the electricity price. The use of appliances in HEMS may be affected by conditions such as external temperature and electricity price. Therefore, the user's usage pattern of appliances should be modeled according to the external conditions, and the resultant usage pattern relates to the user's comfort in using each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the appropriate user comfort under the external conditions of the next day. Within the usage range, optimal scheduling of appliances is conducted to minimize the electricity cost while considering user comfort. Among home appliances, the electric heater (EH) is a representative appliance affected by the external temperature. In this paper, an optimal scheduling algorithm for the electric heater (EH) is addressed based on the branch-and-bound method. As a result, scenarios for EH usage are obtained according to user comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH, and it also represents the impact of the comfort level on the scheduling result.
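
A toy version of the branch-and-bound step is sketched below: hourly on/off decisions for the heater, minimizing energy cost while guaranteeing a minimum number of heating hours as a stand-in comfort constraint. Prices, power rating, and the comfort rule are invented; the paper's model is richer than this.

```python
import heapq

prices = [0.12, 0.10, 0.08, 0.15, 0.20, 0.18]  # assumed $/kWh per hour
POWER_KW, MIN_ON_HOURS = 2.0, 3                # comfort: heat >= 3 hours

def branch_and_bound():
    best_cost, best_plan = float("inf"), None
    heap = [(0.0, 0, 0.0, 0, [])]  # (lower bound, hour, cost, on-hours, plan)
    while heap:
        bound, t, cost, on, plan = heapq.heappop(heap)
        if bound >= best_cost:
            continue                      # prune: cannot beat the incumbent
        if t == len(prices):
            if on >= MIN_ON_HOURS:
                best_cost, best_plan = cost, plan
            continue
        for decision in (0, 1):
            c = cost + decision * POWER_KW * prices[t]
            need = MIN_ON_HOURS - (on + decision)
            if need > len(prices) - t - 1:
                continue                  # comfort can no longer be satisfied
            # Optimistic bound: cheapest remaining hours cover what is needed.
            future = sorted(prices[t + 1:])[:max(need, 0)]
            lb = c + POWER_KW * sum(future)
            heapq.heappush(heap, (lb, t + 1, c, on + decision, plan + [decision]))
    return best_plan, best_cost

plan, cost = branch_and_bound()
print("on/off plan:", plan, "cost:", round(cost, 2))
```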

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 585
7353 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models

Authors: Ravi Ande, Mousumi Hazari

Abstract:

One of the common geophysical techniques for mapping subsurface geo-electrical structures, extensive hydro-geological research, and engineering and environmental geophysics applications is the use of time-domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large transmitter loop for energizing the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop, make up a large loop TEM system. In general, one can acquire data using one of several configurations with a large loop source, namely, with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for mapping the thickness of the permafrost layer. Because a proper analytical expression for the TEM response over a layered earth model does not exist for the large loop TEM system, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. The forward computation is initially carried out in the frequency domain using the EMLCLLER algorithm, which modifies the forward calculation scheme in NLSTCI to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
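
The frequency-to-time conversion named above can be prototyped with straightforward quadrature; the sketch below applies a Fourier cosine transform to a frequency-domain response, checked against a case with a known answer. The sampling choices are illustrative; production codes typically use digital-filter implementations of this transform.

```python
import numpy as np
from scipy.integrate import trapezoid

def freq_to_time_cosine(H, freqs, times):
    # h(t) = (2/pi) * Integral_0^inf Re[H(w)] cos(w t) dw, evaluated by
    # trapezoidal quadrature on a dense logarithmic frequency grid.
    omegas = 2.0 * np.pi * freqs
    real_part = np.real(H)
    return np.array([
        (2.0 / np.pi) * trapezoid(real_part * np.cos(omegas * t), omegas)
        for t in times
    ])

# Sanity check: H(w) = 1/(1 + i w) transforms to exp(-t) for t > 0.
freqs = np.logspace(-4, 4, 8000)
H = 1.0 / (1.0 + 1j * 2.0 * np.pi * freqs)
times = np.array([0.5, 1.0, 2.0])
print(freq_to_time_cosine(H, freqs, times))  # close to 0.607, 0.368, 0.135
```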

Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine

Procedia PDF Downloads 92
7352 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use patterns results in increasing peak discharge and a shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise. Least-cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components, such as open or closed conduits, storage units, pumps, etc. In this paper, a methodology for least-cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's Storm Water Management Model (SWMM), which is linked with a Genetic Algorithm (GA) optimization method. The model proposed here is a mixed-integer nonlinear optimization formulation that minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for the cost-effective design of open-conduit-based storm water networks.
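
The coupling can be pictured as a GA whose fitness calls the simulator; in the sketch below, `run_swmm` is a hypothetical stand-in (a real implementation would write the SWMM input file and invoke the engine, e.g., through the pyswmm package), and all the costs and constraints are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CONDUITS, POP, GENS = 4, 30, 60
COST_PER_M2 = 100.0                  # assumed cost per m^2 of conduit section

def run_swmm(areas):
    # Placeholder simulator: returns total flooding volume for the design.
    return float(np.maximum(0.5 - areas, 0.0).sum())

def fitness(areas):
    # Penalize any flooding heavily so hydraulically feasible designs dominate.
    return COST_PER_M2 * areas.sum() + 1e5 * run_swmm(areas)

population = rng.uniform(0.1, 1.0, (POP, N_CONDUITS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[: POP // 2]]   # truncation selection
    children = parents[rng.integers(0, len(parents), POP - len(parents))]
    children = np.clip(children + rng.normal(0.0, 0.05, children.shape), 0.05, 2.0)
    population = np.vstack([parents, children])
best = population[np.argmin([fitness(ind) for ind in population])]
print("best conduit areas (m^2):", np.round(best, 3))
```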

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 248
7351 Transforming Data Science Curriculum Through Design Thinking

Authors: Samar Swaid

Abstract:

Today, corporations are moving toward the adoption of Design-Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design-Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design-Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been witnessed as the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on the Data Science program states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation. Thus, the Data Science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design-Thinking and teaching modules for data science programs.

Keywords: data science, design thinking, AI, curriculum, transformation

Procedia PDF Downloads 81
7350 Business Domain Modelling Using an Integrated Framework

Authors: Mohammed Hasan Salahat, Stave Wade

Abstract:

This paper presents an application of a “Systematic Soft Domain Driven Design Framework” as a soft systems approach to the domain-driven design of information systems development. The framework combines techniques from Soft Systems Methodology (SSM), the Unified Modeling Language (UML), and an implementation pattern known as ‘Naked Objects’. This framework has been used in action research projects that have involved the investigation and modeling of business processes using object-oriented domain models and the implementation of software systems based on those domain models. Within this framework, Soft Systems Methodology (SSM) is used as a guiding methodology to explore the problem situation and to develop the domain model using UML for the given business domain. The framework was proposed and evaluated in our previous works, and a real case study, ‘Information Retrieval System for Academic Research’, is used in this paper to show further practice and evaluation of the framework in a different business domain. We argue that there are advantages to combining and using techniques from different methodologies in this way for business domain modeling. The framework is overviewed and justified as a multi-methodology using Mingers’ multi-methodology ideas.

Keywords: SSM, UML, domain-driven design, soft domain-driven design, naked objects, soft language, information retrieval, multimethodology

Procedia PDF Downloads 560
7349 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg

Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma

Abstract:

The recent surge in residential infrastructure development around the metropolitan areas of South Africa has necessitated conditions for thorough geotechnical assessments to be conducted prior to site development to ensure human and infrastructure safety. This paper appraises the successful application of multi-method geophysical techniques for the delineation of sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic, and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at Venterspost town, west of Johannesburg. The combination of different geophysical techniques and the integration of their results proved useful in delineating the lithologic succession around the sinkhole locality and in determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence, and cavities in the vicinity of the site. The study results also assisted in determining the possible depth extension of the currently existing sinkhole and the locations of sites where other similar karstic features and sinkholes could form. Results of the ERT, VES, and MASW surveys uncovered dolomitic bedrock at varying depths around the sites, exhibiting high resistivity values in the range 2500-8000 ohm.m and corresponding high velocities in the range 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered chert-poor dolomite layer, which has resistivities in the range 250-2400 ohm.m and velocities ranging from 500-600 m/s, from which the large sinkhole has been found to collapse/cave in. A compiled 2.5D high-resolution shear wave velocity (Vs) map of the study area was created using 2D profiles of MASW data, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site. 3D magnetic models of the site highlighted the regions of possible subsurface interconnection between the currently existing large sinkhole and the other subsidence feature at the site; a number of depth slices were used to detail the conditions near the sinkhole as depth increases. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the currently existing sinkhole.

Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES

Procedia PDF Downloads 80
7348 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants, that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with the two genotypes in a pattern originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls; thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. That algorithm and ours share some properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
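
For readers who want to experiment with the idea, the fragment below mines genotype patterns with the fpgrowth implementation in mlxtend and computes the rule confidence P(Y = 2|X) described above; the variants, genotypes, and tiny sample are invented for illustration.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth

# One-hot transactions: one column per variant=genotype item, plus case status.
rows = [
    {"SNP1=Aa": 1, "SNP2=Bb": 1, "case": 1},
    {"SNP1=Aa": 1, "SNP2=Bb": 1, "case": 1},
    {"SNP1=Aa": 1, "SNP2=bb": 1, "case": 0},
    {"SNP1=AA": 1, "SNP2=Bb": 1, "case": 0},
]
data = pd.DataFrame(rows).fillna(0).astype(bool)

# Mine patterns that are frequent among the cases only.
cases = data[data["case"]].drop(columns="case")
patterns = fpgrowth(cases, min_support=0.99, use_colnames=True)

base_rate = data["case"].mean()
for items in patterns["itemsets"]:
    carriers = data[list(items)].all(axis=1)
    confidence = data.loc[carriers, "case"].mean()
    # The digenic pattern {SNP1=Aa, SNP2=Bb} scores higher than either item alone.
    print(set(items), f"P(case|X)={confidence:.2f} vs P(case)={base_rate:.2f}")
```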

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 122