Search results for: e-content producing algorithm
940 Multi Object Tracking for Predictive Collision Avoidance
Authors: Bruk Gebregziabher
Abstract:
The safe and efficient operation of Autonomous Mobile Robots (AMRs) in complex environments, such as manufacturing, logistics, and agriculture, necessitates accurate multi-object tracking and predictive collision avoidance. This paper presents algorithms and techniques for addressing these challenges using lidar sensor data, with emphasis on the ensemble Kalman filter. The developed predictive collision avoidance algorithm employs the data provided by lidar sensors to track multiple objects and predict their velocities and future positions, enabling the AMR to navigate safely and effectively. A modification to the dynamic windowing approach is introduced to enhance the performance of the collision avoidance system. The overall system architecture encompasses object detection, multi-object tracking, and predictive collision avoidance control. Experimental results, obtained from both simulation and real-world data, demonstrate the effectiveness of the proposed methods in various scenarios, which lays the foundation for future research on global planners, other controllers, and the integration of additional sensors. This thesis contributes to the ongoing development of safe and efficient autonomous systems in complex and dynamic environments.
Keywords: autonomous mobile robots, multi-object tracking, predictive collision avoidance, ensemble Kalman filter, lidar sensors
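The tracking step described above can be illustrated with a minimal ensemble Kalman filter: an ensemble of constant-velocity states is propagated and corrected with noisy planar position fixes, standing in for lidar-derived object centroids. The motion model, noise levels, and ensemble size here are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant-velocity state [x, y, vx, vy]; lidar yields a noisy (x, y).
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q_std, R_std = 0.05, 0.1     # process / measurement noise (assumed)
N = 200                      # ensemble size

def enkf_step(ensemble, z):
    """One predict/update cycle of an ensemble Kalman filter."""
    # Predict: propagate each member and add process noise.
    ensemble = ensemble @ F.T + rng.normal(0, Q_std, ensemble.shape)
    # Sample covariance of the forecast ensemble.
    A = ensemble - ensemble.mean(axis=0)
    P = A.T @ A / (N - 1)
    # Kalman gain; update each member with a perturbed observation.
    S = H @ P @ H.T + R_std**2 * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)
    z_pert = z + rng.normal(0, R_std, (N, 2))
    return ensemble + (z_pert - ensemble @ H.T) @ K.T

# Track a target moving at (1, 0.5) m/s from noisy position fixes.
ensemble = rng.normal(0, 1, (N, 4))
truth = np.array([0.0, 0.0, 1.0, 0.5])
for _ in range(50):
    truth[:2] += truth[2:] * dt
    z = truth[:2] + rng.normal(0, R_std, 2)
    ensemble = enkf_step(ensemble, z)

est = ensemble.mean(axis=0)
future = est[:2] + est[2:] * 1.0   # predicted position 1 s ahead
```

The estimated velocity is exactly what a predictive collision check needs: extrapolating each tracked object one planning horizon forward, as in the last line.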
Procedia PDF Downloads 849
939 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of Tight Mature Oilfield with Reveal Simulator
Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah
Abstract:
The main characteristics of unconventional reservoirs are low to ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate and recover commercial quantities. Permeability in unconventional reservoirs is mostly below 0.1 mD; reservoirs with permeability above 0.1 mD are generally considered conventional. The hydrocarbon held in these formations will not naturally move towards producing wells at economic rates without the aid of hydraulic fracturing, which is the only technique for accessing these tight reservoirs. Horizontal wells with multi-stage fracking are the key technique to maximize stimulated reservoir volume and achieve commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield, including multistage hydraulic fracturing and frac spacing, by building reservoir models in the Reveal simulator based on sidetracking the existing vertical well. An existing Petrel geological model was used to build the static parts of these models. An FBHP limit of 40 bar was assumed to account for pump operating limits and to keep the reservoir pressure above the bubble point. Laterals of 300 m, 600 m and 900 m length were modelled, in conjunction with 4, 6 and 8 frac stages. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer laterals and with more fracs and spacing. For a 25-year forecast, the ultimate recovery ranges from 0.4% to 2.56% for the 300 m and 900 m laterals, respectively.
The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m³/day, with the 600 m and 300 m cases giving initial peak rates of 110 m³/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest, at 2.65% after 25 years; the corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. The study therefore suggests that longer laterals with 8 fracs and 100 m spacing provide the optimal recovery, and this design is recommended as the basis for further study.
Keywords: unconventional, resource, hydraulic, fracturing
Procedia PDF Downloads 298
938 Visualization of Corrosion at Plate-Like Structures Based on Ultrasonic Wave Propagation Images
Authors: Aoqi Zhang, Changgil Lee, Seunghee Park
Abstract:
A non-contact nondestructive technique using laser-induced ultrasonic wave generation was applied to visualize corrosion damage in aluminum alloy plate structures. The ultrasonic waves were generated by an Nd:YAG pulse laser, and a galvanometer-based laser scanner was used to scan a specific area of the target structure. At the same time, wave responses were measured at a piezoelectric sensor attached to the target structure. The visualization of structural damage was achieved by calculating logarithmic values of the root mean square (RMS) of the responses. The damage-sensitive feature was defined as the scattering characteristics of the waves that encounter corrosion damage. The corrosion damage was formed artificially with hydrochloric acid. To observe the effect of the location where the corrosion formed, both sides of the plate were scanned over the same area, and the effects of the depth and size of the corrosion were also considered. The results indicated that the damage was successfully visualized in almost all cases, whether it was formed on the front or the back side. However, shallow corrosion could not be clearly detected. Future work will develop a signal processing algorithm to visualize the damage more clearly by improving the signal-to-noise ratio.
Keywords: non-destructive testing, corrosion, pulsed laser scanning, ultrasonic waves, plate structure
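The log-RMS mapping described above can be sketched as follows; the grid layout, signal length, and the synthetic "damage" patch are assumptions for illustration, not the experimental data.

```python
import numpy as np

def rms_db_map(responses):
    """Map each scan point's wave response to a log-RMS value.

    responses: array of shape (ny, nx, nt) -- time signals recorded
    while the laser scans an ny-by-nx grid (assumed layout).
    Returns an (ny, nx) image; corrosion scatters the waves, so
    damaged regions stand out as anomalies in this image.
    """
    rms = np.sqrt(np.mean(responses**2, axis=-1))
    return 20.0 * np.log10(rms + 1e-12)   # dB scale, guarded against log(0)

# Synthetic example: a uniform noise field with a higher-energy patch
# playing the role of a scattering (corroded) zone.
rng = np.random.default_rng(1)
field = rng.normal(0, 1.0, (32, 32, 256))
field[10:15, 10:15] *= 3.0                # mock scattering zone
img = rms_db_map(field)
```

Plotting `img` as an image reproduces the kind of damage map the paper describes: the mock zone sits roughly 9.5 dB above the background.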
Procedia PDF Downloads 300
937 The 10-year Risk of Major Osteoporotic and Hip Fractures Among Indonesian People Living with HIV
Authors: Iqbal Pramukti, Mamat Lukman, Hasniatisari Harun, Kusman Ibrahim
Abstract:
Introduction: People living with HIV have a higher risk of osteoporotic fracture than the general population. The purpose of this study was to predict the 10-year risk of fracture among people living with HIV (PLWH) using FRAX™ and to identify characteristics related to fracture risk. Methodology: This study included 75 subjects. The ten-year probability of major osteoporotic fractures (MOF) and hip fractures (HF) was assessed using the FRAX™ algorithm. Cross-tabulation was used to identify participant characteristics related to fracture risk. Results: The overall mean 10-year probability of fracture was 2.4% (1.7) for MOF and 0.4% (0.3) for hip fractures. For the MOF score, participants with a parental history of hip fracture, smoking behavior or glucocorticoid use showed higher scores than those without (3.1 vs. 2.5; 4.6 vs. 2.5; and 3.4 vs. 2.5, respectively). For the HF score, participants with a parental history of hip fracture, smoking behavior or glucocorticoid use likewise showed higher scores than those without (0.5 vs. 0.3; 0.8 vs. 0.3; and 0.5 vs. 0.3, respectively). Conclusions: The 10-year risk of fracture was higher among PLWH with several factors, including parental hip fracture history, smoking behavior and glucocorticoid use. Further analysis using multivariate regression with a larger sample size is required to confirm the factors associated with high fracture risk.
Keywords: HIV, PLWH, osteoporotic fractures, hip fractures, 10-year risk of fracture, FRAX
Procedia PDF Downloads 49
936 Olive Stone Valorization to Its Application on the Ceramic Industry
Authors: M. Martín-Morales, D. Eliche-Quesada, L. Pérez-Villarejo, M. Zamorano
Abstract:
Olive oil is a product of particular importance within the Mediterranean and Spanish agricultural food system, and more specifically in Andalusia, the world's main production area. Olive oil processing generates olive stones, which are dried and cleaned to remove pulp and fines, producing a biofuel characterized by high energy efficiency in combustion processes. The fine fraction of olive stones is much less valued as biofuel, so it is important to study alternative routes for its valorization. Some researchers have studied recycling different wastes to produce ceramic bricks. The main objective of this study is to investigate the effects of olive stone addition on the properties of fired clay bricks for building construction. Olive stones were added to the brick raw material at 7.5%, 15%, and 25% by volume, in three different particle sizes (lower than 1 mm, lower than 2 mm, and between 1 and 2 mm). In order to obtain comparable results, a series without olive stones was also prepared. The prepared mixtures were compacted by laboratory-type extrusion under a pressure of 2.5 MPa into rectangular shapes (30 mm x 60 mm x 10 mm). Industrial drying and firing conditions were applied to obtain laboratory brick samples. Mass loss after sintering, bulk density, porosity, water absorption and compressive strength of the fired samples were investigated and compared with a sample manufactured without biomass. The results showed that olive stone addition decreased the mechanical properties due to the increase in water absorption, although the values tested satisfied the requirements of EN 772-1 on methods of test for masonry units (Part 1: Determination of compressive strength). Finally, important advantages related to the properties of the bricks as well as their environmental effects could be obtained by using the studied biomass to produce ceramic bricks.
Increasing the percentage of incorporated olive stones decreased the bulk density and thereby increased the porosity of the bricks. On the one hand, this lower density reduces the weight of bricks to be transported and handled, as well as lightening the building; on the other hand, biomass in the clay contributes to auto-thermal combustion, which means lower fuel consumption during the firing step. Consequently, the production of porous clay bricks using olive stones could reduce atmospheric emissions and improve their life cycle assessment, producing eco-friendly clay bricks.
Keywords: clay bricks, olive stones, sustainability, valorization
Procedia PDF Downloads 153
935 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach
Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh
Abstract:
This study presents a hybrid pre-processing approach along with a conceptual model to enhance the accuracy of river discharge prediction. To achieve this goal, the Ensemble Empirical Mode Decomposition (EEMD) algorithm, the Discrete Wavelet Transform (DWT) and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled to a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately; this strategy is capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI before they were employed in the multi-station model. In the proposed feature selection method, uninformative sub-series are omitted to achieve better performance. The results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
Keywords: river stage-discharge process, LSSVM, discrete wavelet transform, ensemble empirical mode decomposition, multi-station modeling
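The MI-based selection of decomposed sub-series can be sketched with scikit-learn's mutual information estimator; the EEMD/DWT decomposition itself is stubbed here with synthetic signals, and the 0.1 threshold is an assumption, not the paper's criterion.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)
n = 500
target = np.sin(np.linspace(0, 20, n))             # stand-in for discharge

# Stand-ins for EEMD/DWT sub-series: two informative, two pure noise.
subseries = np.column_stack([
    target + 0.1 * rng.normal(size=n),             # informative
    np.roll(target, 1) + 0.1 * rng.normal(size=n), # informative (lagged)
    rng.normal(size=n),                            # noise
    rng.normal(size=n),                            # noise
])

# Score each decomposed sub-series against the target and keep only
# those carrying real information, mirroring the paper's MI step.
mi = mutual_info_regression(subseries, target, random_state=0)
keep = np.where(mi > 0.1)[0]                       # threshold is an assumption
selected = subseries[:, keep]
```

The `selected` columns would then feed the LSSVM; discarding the noise sub-series is exactly the "useless sub-series were omitted" step of the abstract.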
Procedia PDF Downloads 175
934 Consumer Welfare in the Platform Economy
Authors: Prama Mukhopadhyay
Abstract:
From transport to food, the platform economy and digital markets have taken over almost every sphere of consumers' lives. Sellers and buyers are connected through platforms, which act as intermediaries. This has made consumers' lives easier in terms of time, price, choice and other factors. Having said that, there are several concerns regarding platforms. There are competition law concerns, such as unfair pricing and deep discounting by the platforms, which affect consumer welfare. Apart from that, the biggest problem is the lack of transparency with respect to the business models: how a platform operates, how prices are calculated, and so on. In most cases, consumers are unaware of how their personal data are being used, and of how algorithms use those data to determine the price of a product or even to select the products shown based on their previous searches. Using personal or non-personal data without the consumer's consent is a huge legal concern. In addition, another major issue lies with the question of liability: if a dispute arises, who is responsible, the seller or the platform? For example, if someone orders food through a food delivery app and the food is bad, who is liable: the restaurant or the food delivery platform? In this paper, the researcher examines the legal concerns related to the platform economy from the consumer protection and consumer welfare perspectives. The paper analyses cases from different jurisdictions and the approaches taken by the judiciaries, compares the existing legislation of the EU, the US and several Asian countries, and tries to highlight best practices.
Keywords: competition, consumer, data, platform
Procedia PDF Downloads 146
933 A Development of a Simulation Tool for Production Planning with Capacity-Booking at Specialty Store Retailer of Private Label Apparel Firms
Authors: Erika Yamaguchi, Sirawadee Arunyanrt, Shunichi Ohmori, Kazuho Yoshimoto
Abstract:
In this paper, we present a simulation tool to support monthly production planning decisions that maximize the profit of Specialty store retailer of Private label Apparel (SPA) firms. Most SPA firms are fabless and outsource production to the factories of their subcontractors. Every month, SPA firms make a booking for production lines and manpower in the factories. The booking is made a few months in advance, based on the demand prediction and the monthly production plan at that time. However, the demand prediction is updated month by month, and the monthly production plan changes to meet the latest prediction. SPA firms then have to change the capacities initially booked, within a certain range, to suit the updated plan. This booking system is called "capacity-booking". Although precise monthly production planning is an important issue for SPA firms, many firms still plan production by empirical rules. In addition, it is also a challenge for SPA firms to match products and factories while considering demand predictability and regulation ability. In this paper, we suggest a model that addresses these two issues. The objective is to maximize total profit over the planning horizon, namely sales minus the costs of production, inventory, and capacity-booking penalties. To make a better monthly production plan at SPA firms, the following points should be considered: demand variability caused by random trends, the production plans of the months before and after the target month, and the regulation ability of the capacity-booking. To match products to factories for outsourcing, it is important to consider the seasonality, volume, and predictability of each product, and the production capability, size, and regulation ability of each factory. SPA firms have to take these constraints into account and place orders with several factories per product.
We modeled these issues as a linear program. To validate the model, computational experiments with an SPA firm are presented. We assume four typical product groups: basic, seasonal (Spring/Summer), seasonal (Fall/Winter), and spot products. The experiments produced a monthly production plan in which exposure to random demand trends is reduced by producing products of different types, and priority is given to high-margin products. In conclusion, we developed a simulation tool for monthly production planning decisions that is useful when the production plan is set every month; it takes into account the features of capacity-booking and the matching of products and factories with different features and conditions.
Keywords: capacity-booking, SPA, monthly production planning, linear programming
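The flavor of the linear program can be shown with a toy single-month, single-factory instance; all numbers are invented, and the paper's model additionally spans multiple months, factories, and capacity-booking penalty terms.

```python
from scipy.optimize import linprog

# Toy monthly plan: two products, one factory with 1000 booked hours.
margin = [8.0, 5.0]          # profit per unit
hours = [2.0, 1.0]           # booked production hours per unit
demand = [300.0, 600.0]      # demand forecast caps production

# linprog minimizes, so negate the margins to maximize profit
# subject to the booked capacity: 2*x0 + 1*x1 <= 1000.
res = linprog(
    c=[-m for m in margin],
    A_ub=[hours],
    b_ub=[1000.0],
    bounds=[(0, d) for d in demand],
)
plan = res.x                 # optimal units of each product
profit = -res.fun
```

Here the solver fills the product with the higher margin per booked hour first (product 1 at 5/hour beats product 0 at 4/hour), then spends the remaining hours on product 0, giving a plan of (200, 600) units and a profit of 4600.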
Procedia PDF Downloads 520
932 A Method for Identifying Unusual Transactions in E-commerce Through Extended Data Flow Conformance Checking
Authors: Handie Pramana Putra, Ani Dijah Rahajoe
Abstract:
The proliferation of smart devices and advancements in mobile communication technologies have permeated various facets of life with the widespread influence of e-commerce. Detecting abnormal transactions holds paramount significance in this realm due to the potential for substantial financial losses. Moreover, the fusion of data flow and control flow assumes a critical role in the exploration of process modeling and data analysis, contributing significantly to the accuracy and security of business processes. This paper introduces an alternative approach to identify abnormal transactions through a model that integrates both data and control flows. Referred to as the Extended Data Petri net (DPNE), our model encapsulates the entire process, encompassing user login to the e-commerce platform and concluding with the payment stage, including the mobile transaction process. We scrutinize the model's structure, formulate an algorithm for detecting anomalies in pertinent data, and elucidate the rationale and efficacy of the comprehensive system model. A case study validates the responsive performance of each system component, demonstrating the system's adeptness in evaluating every activity within mobile transactions. Ultimately, the results of anomaly detection are derived through a thorough and comprehensive analysis.
Keywords: database, data analysis, DPNE, extended data flow, e-commerce
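The core idea of replaying transactions against a model joining control flow and data flow can be sketched with a much-reduced token-replay check; the net, event names, and guard below are invented for illustration, and the paper's DPNE is far richer.

```python
# Minimal token-replay sketch over a login -> order -> pay flow.
NET = {   # transition: (input places, output places)
    'login': ({'start'}, {'logged_in'}),
    'order': ({'logged_in'}, {'ordered'}),
    'pay':   ({'ordered'}, {'paid'}),
}

def replay(trace, guards=None):
    """Return True if the event trace replays on the net with all data
    guards satisfied; False marks the transaction as abnormal."""
    marking = {'start'}
    for event, data in trace:
        if event not in NET:
            return False
        pre, post = NET[event]
        if not pre <= marking:               # control-flow violation
            return False
        if guards and event in guards and not guards[event](data):
            return False                     # data-flow violation
        marking = (marking - pre) | post
    return True

# Example data guard: a payment may not exceed the ordered amount.
guards = {'pay': lambda d: d['amount'] <= d['ordered_amount']}

ok = replay([('login', {}), ('order', {}),
             ('pay', {'amount': 40, 'ordered_amount': 40})], guards)
bad_flow = replay([('pay', {'amount': 40, 'ordered_amount': 40})], guards)
bad_data = replay([('login', {}), ('order', {}),
                   ('pay', {'amount': 99, 'ordered_amount': 40})], guards)
```

A transaction is flagged either because its events cannot fire in order (`bad_flow`: payment before login) or because the data attached to a valid event violates a guard (`bad_data`).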
Procedia PDF Downloads 57
931 A Development of Holonomic Mobile Robot Using Fuzzy Multi-Layered Controller
Authors: Seungwoo Kim, Yeongcheol Cho
Abstract:
In this paper, a holonomic mobile robot with omnidirectional wheels is designed, and an adaptive fuzzy controller is presented for precise trajectory tracking. An adaptive controller based on a fuzzy multi-layered algorithm is used to handle the large parametric uncertainty of the motor-controlled dynamic system of the 3-wheel omnidirectional mobile robot. System parameters such as the tracking force are strongly time-varying due to the kinematic structure of the omnidirectional wheels. The fuzzy adaptive control method is able to overcome the problems of classical adaptive controllers and conventional fuzzy adaptive controllers. The basic idea of the new adaptive control scheme is that an adaptive controller can be constructed as a parallel combination of robust controllers. This new adaptive controller uses a fuzzy multi-layered architecture with several independent fuzzy controllers in parallel, each with a different robust stability area. Out of these independent fuzzy controllers, the most suitable one is selected by a system identifier which observes variations in the controlled system parameters. This paper proposes a design procedure which can be carried out mathematically and systematically from the model of the controlled system. Finally, the good performance of the holonomic mobile robot is confirmed through live tests of the tracking control task.
Keywords: fuzzy adaptive control, fuzzy multi-layered controller, holonomic mobile robot, omnidirectional wheels, robustness and stability
Procedia PDF Downloads 362
930 Power System Stability Enhancement Using Self Tuning Fuzzy PI Controller for TCSC
Authors: Salman Hameed
Abstract:
In this paper, a self-tuning fuzzy PI controller (STFPIC) is proposed for a thyristor controlled series capacitor (TCSC) to improve power system dynamic performance. In a STFPIC, the output scaling factor is adjusted on-line by an updating factor (α), whose value is determined from a fuzzy rule base defined on the error (e) and change of error (Δe) of the controlled variable. The proposed self-tuning controller is designed using a very simple control rule base and the most natural and unbiased membership functions (MFs): symmetric triangles with equal bases and 50% overlap with neighboring MFs. The comparative performance of the proposed STFPIC and a standard fuzzy PI controller (FPIC) has been investigated on a multi-machine power system (namely, a 4-machine, two-area system) through detailed non-linear simulation studies using MATLAB/SIMULINK. The simulation studies show that the proposed STFPIC damps oscillations better than the standard FPIC. Moreover, both the STFPIC and the FPIC were found to be quite effective in damping oscillations over a wide range of operating conditions and in significantly enhancing the power carrying capability of the power system.
Keywords: genetic algorithm, power system stability, self-tuning fuzzy controller, thyristor controlled series capacitor
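The self-tuning mechanism can be sketched as follows: triangular MFs with 50% overlap on e and Δe feed a rule table for α, which scales the controller output on-line. The rule values and the stand-in control surface are illustrative assumptions, not the paper's tuned rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Symmetric triangular membership function with vertices a, b, c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Three MFs (N, Z, P) on normalized error / change of error:
# symmetric triangles with 50% overlap, as in the paper's design.
MFS = {'N': (-2.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 2.0)}

# Rule table for the updating factor alpha (values are illustrative):
# large, growing error -> boost the output scaling; near zero -> damp it.
ALPHA = {('N', 'N'): 1.0, ('N', 'Z'): 0.8, ('N', 'P'): 0.4,
         ('Z', 'N'): 0.6, ('Z', 'Z'): 0.2, ('Z', 'P'): 0.6,
         ('P', 'N'): 0.4, ('P', 'Z'): 0.8, ('P', 'P'): 1.0}

def updating_factor(e, de):
    """Infer alpha from the rule base (weighted-average defuzzification)."""
    num = den = 0.0
    for le, (a1, b1, c1) in MFS.items():
        for lde, (a2, b2, c2) in MFS.items():
            w = min(tri(e, a1, b1, c1), tri(de, a2, b2, c2))
            num += w * ALPHA[(le, lde)]
            den += w
    return num / den if den > 0 else 0.0

def stfpic_output(e, de, Gu=1.0):
    """PI-type fuzzy output with the gain scaled on-line by Gu * alpha."""
    u = np.clip(e + de, -1.0, 1.0)     # crude stand-in for the FLC surface
    return Gu * updating_factor(e, de) * u
```

Near the setpoint (e = Δe = 0) the table keeps α small for smooth control, while large coherent errors drive α toward 1, which is the gain-scheduling behavior the abstract attributes to the STFPIC.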
Procedia PDF Downloads 424
929 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of selecting a poor candidate architecture, because there is no insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, so that as they are impacted or degraded, the functional availability of the system can be determined.
A machine learning reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric that rates their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architecture selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
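The degradation sweep can be sketched as a tiny Monte Carlo over random node knockouts; the placement map and the random attack model below are invented stand-ins for the generated architectures and the reinforcement-learning attacker.

```python
import random

# Toy architecture (illustrative): system functions replicated
# across edge/fog/cloud nodes.
PLACEMENT = {
    'ingest':  {'edge1', 'edge2'},
    'process': {'fog1', 'cloud1'},
    'store':   {'cloud1', 'cloud2'},
    'serve':   {'edge1', 'fog1', 'cloud2'},
}
NODES = set().union(*PLACEMENT.values())

def functional_availability(p_kill, trials=2000, seed=0):
    """Mean fraction of system functions surviving random node attacks.

    p_kill is the attack intensity (probability each node is taken out);
    a function survives if at least one node hosting it survives.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        dead = {n for n in NODES if rng.random() < p_kill}
        alive = sum(1 for hosts in PLACEMENT.values() if hosts - dead)
        total += alive / len(PLACEMENT)
    return total / trials

# Sweep attack intensity to trace this architecture's degradation curve;
# comparing such curves across candidates yields the proposed metric.
curve = [functional_availability(p) for p in (0.0, 0.25, 0.5, 0.75)]
```

Each candidate architecture produces one such curve; an architecture whose curve stays higher across the sweep is the more resilient choice, which is the comparison the proposed metric formalizes.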
Procedia PDF Downloads 110
928 Motion Performance Analyses and Trajectory Planning of the Movable Leg-Foot Lander
Authors: Shan Jia, Jinbao Chen, Jinhua Zhou, Jiacheng Qian
Abstract:
Fixed landers currently expand their detection range by deploying wheeled rovers, which suffer from unavoidable path repeatability in deep space exploration. In response to these functional limitations, a movable lander based on a leg-foot walking mechanism is presented. First, a quadruped landing mechanism based on pushrod damping is proposed. The configuration has bionic characteristics such as hip, knee and ankle joints, and multi-function main/auxiliary buffers based on crumple energy absorption and a screw-nut mechanism. Second, the workspace of the end of the leg-foot mechanism is solved by the Monte Carlo method, and the key points on the desired trajectory of the end of the leg-foot mechanism are fitted by a cubic spline curve. Finally, an optimal time-jerk trajectory based on weight coefficients is planned and analyzed with an adaptive genetic algorithm (AGA). The simulation results prove the rationality and stability of the walking motion of the movable leg-foot lander on the star catalogue. In addition, this research can provide a technical solution integrating soft landing, large-scale inspection and material transfer for future star catalogue exploration, and can even serve as the technical basis for developing reusable landers.
Keywords: motion performance, trajectory planning, movable, leg-foot lander
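The key-point fitting step can be sketched with SciPy's cubic splines; the key points below are invented stand-ins for a lift-swing-touchdown foot profile, since the actual points come from the workspace analysis.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative key points on the desired foot-end path in the leg's
# sagittal plane (normalized time, forward distance, foot height).
t_key = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x_key = np.array([0.0, 0.05, 0.10, 0.15, 0.20])      # forward (m)
z_key = np.array([0.0, 0.04, 0.06, 0.04, 0.0])       # height (m)

# Cubic splines give C2-continuous position, hence continuous velocity
# and acceleration -- the smoothness the planner needs before the
# time-jerk optimization stage.
sx, sz = CubicSpline(t_key, x_key), CubicSpline(t_key, z_key)

t = np.linspace(0.0, 1.0, 101)
traj = np.column_stack([sx(t), sz(t)])               # sampled foot path
jerk_z = sz(t, 3)                                    # third derivative
```

`jerk_z` is the quantity the time-jerk objective penalizes; an AGA would then retime the key points, trading total duration against integrated jerk.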
Procedia PDF Downloads 142
927 Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks
Authors: Yao-Hong Tsai
Abstract:
Owing to advances in sensor technology, video surveillance has become the main means of security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group or object, or the investigation of crime. Many surveillance systems based on computer vision have been developed in recent years. Moving target tracking, finding and following objects of interest in mobile aerial surveillance for civilian applications, is the most common task for an Unmanned Aerial Vehicle (UAV). This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from the cameras on the UAV were fused with a deep convolutional neural network. Then, a recurrent neural network was constructed to obtain high-level image features for object tracking and to extract low-level image features for noise reduction. The system distributes computation between local and cloud platforms to efficiently perform object detection, tracking and collision avoidance across multiple UAVs. Experiments on several challenging datasets showed that the proposed algorithm outperforms state-of-the-art methods.
Keywords: unmanned aerial vehicle, object tracking, deep learning, collision avoidance
Procedia PDF Downloads 161
926 Phage Therapy as a Potential Solution in the Fight against Antimicrobial Resistance
Authors: Sanjay Shukla
Abstract:
Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative, so alternative treatments are necessary. Phage therapy is considered one of the most effective approaches to treating multi-drug resistant bacterial pathogens. Infections caused by Staphylococcus aureus are very efficiently controlled with phage cocktails containing several individual phage lysates that together infect a majority of known pathogenic S. aureus strains. The aim of the current study was to investigate the efficiency of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic wound infections. A total of 150 sewage samples were collected from various livestock farms and subjected to bacteriophage isolation by the double agar layer method. Of these, 27 samples showed plaque formation with lytic activity against S. aureus. Under transmission electron microscopy (TEM), the recovered bacteriophage isolates showed a hexagonal structure with tail fibers. The bacteriophage (ØVS) had icosahedral symmetry, with a head 52.20 nm in diameter and a long tail of 109 nm; head and tail were held together by a connector, and the phage can be classified as a member of the family Myoviridae in the order Caudovirales. The recovered bacteriophage showed antibacterial activity against S. aureus in vitro. A cocktail of phage lysates (ØVS1, ØVS5, ØVS9 and ØVS27) was tested for in vivo antibacterial activity as well as its safety profile. The mouse experiments indicated that the bacteriophage lysate was very safe, showing no abscess formation, which indicates its safety in a living system. Mice were also prophylactically protected against S. aureus when the bacteriophage cocktail was administered just before S. aureus challenge, indicating that phages are good prophylactic agents. S. aureus-inoculated mice recovered completely after bacteriophage administration, a 100% recovery rate that compares very favorably with conventional therapy. In the present study, ten chronic wound cases were treated with phage lysate, with regular follow-up over ten days (at 0, 5 and 10 d). Six of the ten cases showed complete recovery of the wounds within 10 d, an efficacy of 60%, which is very good compared to conventional antibiotic therapy for chronic septic wound infections. Thus, a single dose of lytic phage proved to be an innovative and effective therapy for chronic septic wounds.
Keywords: phage therapy, phage lysate, antimicrobial resistance, S. aureus
Procedia PDF Downloads 119
925 Cellulolytic and Xylanolytic Enzymes from Mycelial Fungi
Authors: T. Sadunishvili, L. Kutateladze, T. Urushadze, R. Khvedelidze, N. Zakariashvili, M. Jobava, G. Kvesitadze
Abstract:
The multiple soil-climatic zones of Georgia determine its diversity of microorganisms. Hundreds of microscopic fungi of different genera have been isolated from different ecological niches, including some extreme environments, and their biosynthetic abilities have been studied. Trichoderma reesei, a representative of the Ascomycetes, secretes cellulolytic and xylanolytic enzymes that act in synergy to hydrolyze polysaccharide polymers to glucose, xylose and arabinose, which can be fermented to biofuels. Other mesophilic cellulase-producing strains include Allesheria terrestris, Chaetomium thermophile, Fusarium oxysporium, Piptoporus betulinus, Penicillium echinulatum, P. purpurogenum, Aspergillus niger, A. wentii, A. versicolor, A. fumigatus, etc. In the majority of cases, the cellulases produced by strains of the genus Aspergillus have high β-glucosidase activity and average endoglucanase levels (with some exceptions), whereas strains of Trichoderma have high endo-enzyme and low β-glucosidase activity, and hence limited efficiency in cellulose hydrolysis. Six producers of stable cellulases and xylanases from mesophilic and thermophilic fungi were selected. By optimizing submerged cultivation conditions, high cellulase and xylanase activities were obtained. For enzyme purification, precipitation by organic solvents such as ethyl alcohol, acetone and isopropanol, and by ammonium sulphate in different ratios, was carried out. The best results were obtained with precipitation by ethyl alcohol (1:3.5) and by ammonium sulphate; the enzyme yields based on cellulase activities were 80-85% in both cases. The cellulase activity of the enzyme preparation obtained from the strain Trichoderma viride X 33 was 126 U/g, from the strain Penicillium canescence D 85 it was 185 U/g, and from the strain Sporotrichum pulverulentum T 5-0 it was 110 U/g. The cellulase activity of the preparation from the strain Aspergillus sp. Av10 was 120 U/g, while the xylanase activity of the preparation from the strain Aspergillus niger A 7-5 was 1155 U/g and from the strain Aspergillus niger Aj 38 it was 1250 U/g. The optimum pH and temperature of operation and the thermostability of the enzyme preparations were established. The efficiency of hydrolysis of different agricultural residues by the microscopic fungal cellulases was studied. The glucose yield from the residues after enzymatic hydrolysis is largely determined by the enzyme-to-substrate ratio, pH, temperature, and duration of the process. Hydrolysis efficiency was significantly increased by pretreating the residues with different methods. Acknowledgement: The study was supported by ISTC project G-2117, funded by Korea.
Keywords: cellulase, xylanase, microscopic fungi, enzymatic hydrolysis
Procedia PDF Downloads 394
924 Inertial Spreading of Drop on Porous Surfaces
Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi
Abstract:
The microgravity environment of the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into the pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion, and these small scales make it impossible to observe the inertial and pinning processes in detail. Instead, on the ISS, astronaut Luca Parmitano slowly extruded water spheres until they touched one of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and the recordings were long enough to capture the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix of nine kinds of porous capillary plates was produced, made of gold-coated brass treated with self-assembled monolayers (SAMs) that fixed the advancing and receding contact angles to known values. On the ISS, long-term microgravity allowed unambiguous observations of the role of contact-line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet contact patch with the plate and the height of the droplet versus time.
These observations are compared with numerical simulations and with data that we obtained at the ESA ZARM free-fall tower in Bremen using a unique mechanism producing relatively large water spheres; the two sets of results were found to be similar. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field. Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium
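For orientation, the two limiting regimes that frame imbibition experiments of this kind can be written in their standard textbook forms; these are the generic inertial and Lucas-Washburn scalings, not the specific correlations fitted in this study:

```latex
% Inertial regime (short times): capillary pressure balances inertia,
% so the imbibed length grows linearly in time.
l(t) \simeq \left(\frac{2\,\gamma \cos\theta}{\rho\, r}\right)^{1/2} t
% Viscous (Lucas--Washburn) regime (long times): capillary pressure
% balances viscous drag over the imbibed length, giving diffusive growth.
l(t) = \left(\frac{\gamma\, r \cos\theta}{2\,\mu}\, t\right)^{1/2}
```

Here γ is the surface tension, θ the contact angle, ρ the liquid density, μ its viscosity and r the capillary radius; contact-line pinning perturbs the crossover between the two regimes.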
Procedia PDF Downloads 140
923 Influence of Solenoid Configuration on Electromagnetic Acceleration of Plunger
Authors: Shreyansh Bharadwaj, Raghavendra Kollipara, Sijoy C. D., R. K. Mittal
Abstract:
Utilizing the Lorentz force to propel an electrically conductive plunger through a solenoid represents a fundamental application of electromagnetism. The parameters of the solenoid significantly influence the force exerted on the plunger and hence its response. A parametric study has been carried out to understand the effect of these parameters on the force acting on the plunger and to determine the combination of parameters that yields the fastest response. The analysis uses an algorithm capable of simulating the scenario of a plunger accelerating within a solenoid. The authors have focused on several key configuration parameters of the solenoid: the inter-layer gap (in the case of a multi-layer solenoid), different conductor diameters, varying numbers of turns, and different numbers of layers. The primary objective of this paper is to discern how alterations in these parameters affect the force applied to the plunger. Through extensive numerical simulations, a dataset has been generated and used to construct informative plots. These plots provide visual representations of the relationships between the solenoid configuration parameters and the resulting force exerted on the plunger, from which scaling laws can be deduced. This research endeavors to offer valuable insights into optimizing solenoid configurations for enhanced electromagnetic acceleration, thereby contributing to advancements in electromagnetic propulsion technology. Keywords: Lorentz force, solenoid configuration, electromagnetic acceleration, parametric analysis, simulation
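As a minimal sketch of the kind of force evaluation such a parametric study rests on (not the authors' simulation algorithm), the axial force on a solenoid plunger is commonly estimated from the lumped-parameter relation F = ½ I² dL/dx, where L(x) is the coil inductance as a function of plunger insertion depth. The inductance profile below is a hypothetical smooth placeholder:

```python
# Hedged sketch: lumped-parameter magnetic force on a solenoid plunger,
# F = 0.5 * I^2 * dL/dx. The logistic inductance profile is an
# illustrative assumption, not data from the paper.
import math

def inductance(x, L_out=2e-3, L_in=8e-3, x0=0.02, w=0.01):
    """Hypothetical inductance (H) rising smoothly as the plunger enters."""
    return L_out + (L_in - L_out) / (1.0 + math.exp(-(x - x0) / w))

def plunger_force(x, current, dx=1e-6):
    """F = 1/2 I^2 dL/dx, with dL/dx from a central finite difference (N)."""
    dLdx = (inductance(x + dx) - inductance(x - dx)) / (2 * dx)
    return 0.5 * current**2 * dLdx

# The force peaks where the inductance changes most steeply (x = x0 here),
# which is why the solenoid geometry parameters matter so much.
f_mid = plunger_force(0.02, current=10.0)
f_far = plunger_force(0.08, current=10.0)
```

Sweeping the geometry parameters of `inductance` over a grid is the essence of the parametric study described above.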
Procedia PDF Downloads 51
922 Time/Temperature-Dependent Finite Element Model of Laminated Glass Beams
Authors: Alena Zemanová, Jan Zeman, Michal Šejnoha
Abstract:
The polymer foil used in the manufacture of laminated glass members behaves in a viscoelastic manner with temperature dependence. This contribution aims at incorporating the time/temperature-dependent behavior of the interlayer into our earlier elastic finite element model for laminated glass beams. The model is based on a refined beam theory: each layer behaves according to the finite-strain shear-deformable formulation of Reissner, and the adjacent layers are connected via Lagrange multipliers ensuring the inter-layer compatibility of the laminated unit. The time/temperature-dependent behavior of the interlayer is accounted for by the generalized Maxwell model and by the time-temperature superposition principle of Williams, Landel, and Ferry. The resulting system is solved by the Newton method with consistent linearization, and the viscoelastic response is determined incrementally by the exponential algorithm. By comparing the model predictions against available experimental data, we demonstrate that the proposed formulation is reliable and accurately reproduces the behavior of laminated glass units. Keywords: finite element method, finite-strain Reissner model, Lagrange multipliers, generalized Maxwell model, laminated glass, Newton method, Williams-Landel-Ferry equation
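For reference, the two constitutive ingredients named above take the following standard forms (the Prony coefficients $G_p$, $\theta_p$ and the WLF constants $C_1$, $C_2$ are material parameters of the interlayer, not values reproduced from this abstract):

```latex
% Generalized Maxwell (Prony series) shear relaxation modulus
G(t) = G_\infty + \sum_{p=1}^{P} G_p \, e^{-t/\theta_p}
% Williams--Landel--Ferry time-temperature shift factor
\log_{10} a_T(T) = \frac{-C_1\,(T - T_0)}{C_2 + (T - T_0)}
```

Time-temperature superposition then rescales the relaxation times as $\theta_p(T) = a_T(T)\,\theta_p(T_0)$, so a single Prony series characterized at the reference temperature $T_0$ covers the whole temperature range.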
Procedia PDF Downloads 433
921 From Servicescape to Servicespace: Qualitative Research in a Post-Cartesian Retail Context
Authors: Chris Houliez
Abstract:
This study addresses the complex dynamics of the modern retail environment, focusing on how the ubiquitous nature of mobile communication technologies has reshaped the shopper experience and tested the limits of the conventional "servicescape" concept commonly used to describe retail experiences. The objective is to redefine the conceptualization of retail space by introducing an approach to space that aligns with a retail context where physical and digital interactions are increasingly intertwined. To offer a more shopper-centric understanding of the retail experience, this study draws from phenomenology, particularly Henri Lefebvre’s work on the production of space. The presented protocol differs from traditional methodologies by not making assumptions about what constitutes a retail space. Instead, it adopts a perspective based on Lefebvre’s seminal work, which posits that space is not a three-dimensional container commonly referred to as “servicescape” but is actively produced through shoppers’ spatial practices. This approach allows for an in-depth exploration of the retail experience by capturing the everyday spatial practices of shoppers without preconceived notions of what constitutes a retail space. The designed protocol was tested with eight participants during 209 hours of day-long field trips, immersing the researcher into the shopper's lived experience by combining multiple data collection methods, including participant observation, videography, photography, and both pre-fieldwork and post-fieldwork interviews. By giving equal importance to both locations and connections, this study unpacked various spatial practices that contribute to the production of retail space. These findings highlight the relative inadequacy of some traditional retail space conceptualizations, which often fail to capture the fluid nature of contemporary shopping experiences. 
The study's emphasis on the customization process, through which shoppers optimize their retail experience by producing a “fully lived retail space,” offers a more comprehensive understanding of consumer shopping behavior in the digital age. In conclusion, this research presents a significant shift in the conceptualization of retail space. By employing a phenomenological approach rooted in Lefebvre’s theory, the study provides a more efficient framework to understand the retail experience in the age of mobile communication technologies. Although this research is limited by its small sample size and the demographic profile of participants, it offers valuable insights into the spatial practices of modern shoppers and their implications for retail researchers and retailers alike. Keywords: shopper behavior, mobile telecommunication technologies, qualitative research, servicescape, servicespace
Procedia PDF Downloads 25
920 An Automatic Speech Recognition of Conversational Telephone Speech in Malay Language
Authors: M. Draman, S. Z. Muhamad Yassin, M. S. Alias, Z. Lambak, M. I. Zulkifli, S. N. Padhi, K. N. Baharim, F. Maskuriy, A. I. A. Rahim
Abstract:
The performance of a Malay automatic speech recognition (ASR) system for the call centre environment is presented. The system utilizes the Kaldi toolkit as the platform providing the libraries and algorithms used to perform the ASR task. The acoustic model implemented in this system uses a deep neural network (DNN) to model the acoustic signal, and a standard n-gram model is used for language modelling. With 80 hours of training data from call centre recordings, the ASR system achieves 72% accuracy, corresponding to a 28% word error rate (WER). Testing was done using 20 hours of audio data. Despite the implementation of a DNN, the system shows low accuracy owing to the variety of noises, accents and dialects that typically occur in the Malaysian call centre environment. This significant variation between speakers is reflected by the large standard deviation of the average word error rate (WERav) (i.e., ~10%). It is observed that the lowest WER (13.8%) was obtained from a recording sample in the standard Malay dialect (central Malaysia) of a native speaker, compared with the highest WER of 49% for a sample containing conversation in a non-standard Malay dialect. Keywords: conversational speech recognition, deep neural network, Malay language, speech recognition
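The WER metric used throughout this evaluation is the word-level edit distance normalized by the reference length, WER = (S + D + I) / N. A minimal sketch (the example sentences are illustrative, not drawn from the Malay corpus):

```python
# Hedged sketch: word error rate (WER) as used to score ASR output,
# computed via Levenshtein distance over word tokens.
def wer(reference, hypothesis):
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

With this definition, a 28% corpus-level WER corresponds directly to the 72% accuracy reported above.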
Procedia PDF Downloads 323
919 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. 
We developed a Lenia implementation for the GPU using the C++ and CUDA programming languages, with CUDA/OpenGL interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community: MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy of single- versus double-precision floating-point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development. Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
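To make the benchmarked workload concrete, one Lenia update is a large convolution followed by a pointwise growth map and clipping; the sketch below shows this step with the FFT-based convolution mentioned above, in NumPy rather than CUDA. The kernel shape and growth parameters are illustrative, not the exact species parameters used in the benchmark:

```python
# Hedged sketch of one Lenia update step on a periodic grid:
# A <- clip(A + dt * G(K * A), 0, 1), with the convolution done via FFT.
import numpy as np

def ring_kernel(n, radius=13):
    """Smooth ring-shaped kernel, normalized to sum to 1 (illustrative)."""
    y, x = np.ogrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.hypot(x, y) / radius
    k = np.where((r > 0) & (r < 1), np.exp(4 - 1 / (r * (1 - r) + 1e-12)), 0.0)
    return k / k.sum()

def lenia_step(a, k_fft, dt=0.1, mu=0.15, sigma=0.017):
    """One step: FFT convolution, Gaussian-bell growth, clip to [0, 1]."""
    u = np.real(np.fft.ifft2(np.fft.fft2(a) * k_fft))          # K * A
    growth = 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0
    return np.clip(a + dt * growth, 0.0, 1.0)

n = 64
k_fft = np.fft.fft2(np.fft.ifftshift(ring_kernel(n)))  # kernel centered at origin
a = np.random.default_rng(0).random((n, n))
a = lenia_step(a, k_fft)
```

On the GPU the same structure maps to cuFFT calls plus an elementwise kernel, which is where the order-of-magnitude speedups arise.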
Procedia PDF Downloads 44
918 Evaluating Machine Learning Techniques for Activity Classification in Smart Home Environments
Authors: Talal Alshammari, Nasser Alshammari, Mohamed Sedky, Chris Howard
Abstract:
With the widespread adoption of Internet-connected devices and the prevalence of Internet of Things (IoT) applications, there is an increased interest in machine learning techniques that can provide useful and interesting services in the smart home domain. The areas that machine learning techniques can help advance are varied and ever-evolving. Classifying smart home inhabitants’ Activities of Daily Living (ADLs) is one prominent example. The ability of a machine learning technique to find meaningful spatio-temporal relations in high-dimensional data is an important requirement as well. This paper presents a comparative evaluation of state-of-the-art machine learning techniques for classifying ADLs in the smart home domain. Forty-two synthetic datasets and two real-world datasets with multiple inhabitants are used to evaluate and compare the performance of the identified machine learning techniques: AdaBoost, the Cortical Learning Algorithm (CLA), Decision Trees, Hidden Markov Models (HMM), Multi-layer Perceptrons (MLP), Structured Perceptrons and Support Vector Machines (SVM). Our results show significant performance differences between the evaluated techniques; overall, neural-network-based techniques have shown superiority over the other tested techniques. Keywords: activities of daily living, classification, internet of things, machine learning, prediction, smart home
Procedia PDF Downloads 358
917 Optimal Design of Step-Stress Partially Life Test Using Multiply Censored Exponential Data with Random Removals
Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam
Abstract:
The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit to the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e. ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT), in which tested units are subjected to both normal and accelerated conditions, is the more suitable test to perform. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test are assumed to follow the exponential distribution, and the removals from the test are assumed to follow a binomial distribution. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm. Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study
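As a point of reference for the estimation step described above, the censored-data MLE for the exponential rate has the simple closed form (observed failures) / (total time on test). The sketch below shows only this plain censored MLE with illustrative data; in a step-stress PALT the accelerated-phase times are additionally scaled by the tampering coefficient, which is omitted here:

```python
# Hedged sketch: MLE of the exponential failure rate under censoring,
# lambda_hat = r / T, where r is the number of observed failures and
# T the total time on test (failures plus censored/removed units).
# The data values are illustrative, not from the paper's simulation.
def exp_mle_censored(failure_times, censored_times):
    total_time = sum(failure_times) + sum(censored_times)
    return len(failure_times) / total_time

failures = [0.5, 1.2, 2.0, 0.8]   # observed failure times
removals = [1.5, 1.5]             # times of randomly removed (censored) units
lam = exp_mle_censored(failures, removals)  # estimated failure rate
```

The binomial random-removal mechanism enters through how many entries land in `removals`; the rate estimate itself keeps this form.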
Procedia PDF Downloads 322
916 Drying Kinetics of Soybean Seeds
Authors: Amanda Rithieli Pereira Dos Santos, Rute Quelvia De Faria, Álvaro De Oliveira Cardoso, Anderson Rodrigo Da Silva, Érica Leão Fernandes Araújo
Abstract:
The study of drying kinetics is of great importance for mathematical modelling, allowing the processes of heat and mass transfer between the products to be understood and dryers to be adjusted, bringing new technologies to these processes. The present work had the objective of studying the drying kinetics of soybean seeds and fitting different statistical models to the experimental data, varying cultivar and temperature. Soybean seeds were pre-dried in a natural environment in order to reduce and homogenize the water content to the level of 14% (d.b.). Drying was then carried out in a forced-air circulation oven at controlled temperatures of 38, 43, 48, 53 and 58 ± 1 °C, using two soybean cultivars, BRS 8780 and Sambaíba, until hygroscopic equilibrium was reached. The experimental design was completely randomized in a 5 x 2 factorial (temperature x cultivar) with 3 replicates. Eleven statistical models used to explain the drying process of agricultural products were fitted to the experimental data. Regression analysis was performed using the least-squares Gauss-Newton algorithm to estimate the parameters. The goodness of fit was evaluated from the coefficient of determination (R²), the adjusted coefficient of determination (adjusted R²) and the standard error (S.E.). The models that best represent the drying kinetics of soybean seeds are the Midilli and Logarithmic models. Keywords: curve of drying seeds, Glycine max L., moisture ratio, statistical models
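To illustrate the fitting-and-scoring procedure described above with the simplest of the candidate thin-layer models, the Lewis/Newton model MR = exp(-k t) can be fitted by linear regression on ln(MR) and scored with R². The moisture-ratio data below are synthetic, not the soybean measurements, and the authors' best models (Midilli, Logarithmic) have more parameters and require iterative Gauss-Newton fitting:

```python
# Hedged sketch: fitting the Lewis/Newton thin-layer drying model,
# MR = exp(-k t), via least squares on ln(MR), and scoring with R^2.
import math

def fit_newton(times, mr):
    """Least-squares slope of ln(MR) vs t through the origin -> k."""
    num = sum(t * math.log(m) for t, m in zip(times, mr))
    den = sum(t * t for t in times)
    return -num / den

def r_squared(times, mr, k):
    pred = [math.exp(-k * t) for t in times]
    mean = sum(mr) / len(mr)
    ss_res = sum((m - p) ** 2 for m, p in zip(mr, pred))
    ss_tot = sum((m - mean) ** 2 for m in mr)
    return 1.0 - ss_res / ss_tot

times = [0, 1, 2, 4, 8]
mr = [1.0, 0.82, 0.67, 0.45, 0.20]   # synthetic drying curve
k = fit_newton(times[1:], mr[1:])    # skip t=0 (ln 1 = 0 contributes nothing)
r2 = r_squared(times, mr, k)
```

Comparing R², adjusted R² and S.E. across the eleven candidate models fitted this way is what identifies the best-representing model.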
Procedia PDF Downloads 630
915 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach
Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist
Abstract:
Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRT) were used to produce high-resolution SDMs (250 m) at two spatial scales predicting probability of occurrence, abundance (count per sample unit), density (count per km2) and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution to examine prediction differences and implications for coastal management. We investigated if small scale regionally focussed models (82,000 km2) can provide improved predictions compared to data-rich national scale models (4.2 million km2). We explored the variability in predictions across model type (occurrence vs abundance) and model scale to determine if specific taxa models or model types are more robust to geographical variability. National scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (Area under the receiver operating curve) and deviance explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa, spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance producing more useful outputs than regional-scale models. The utility of a two-scale approach aids the selection of the most optimal combination of models to create a spatially informative density model, as results contrasted for specific taxa between model type and scale. 
However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model, as areas that do not spatially align between models can be discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments. Keywords: benthic ecology, spatial modelling, multi-scalar modelling, marine conservation
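One common way to combine the two model types, consistent with the description above, is a two-part (hurdle/delta) product: per-cell density = P(occurrence) × conditional abundance, with cells where the two predictions disagree discarded. The sketch below is an interpretation of that combination on illustrative grid-cell values, not the study's actual pipeline:

```python
# Hedged sketch: two-part (hurdle/delta) combination of an occurrence
# SDM and an abundance SDM into a density surface. Cells where the two
# model types spatially disagree are discarded (None), as described.
def combine_density(p_occ, abundance, presence_threshold=0.5):
    """Per-cell density = p * abundance; mismatched cells -> None."""
    out = []
    for p, a in zip(p_occ, abundance):
        present = p >= presence_threshold
        if present != (a > 0):
            out.append(None)   # occurrence and abundance models disagree
        else:
            out.append(p * a)
    return out

# Three illustrative cells: agree-present, disagree, disagree.
dens = combine_density([0.9, 0.2, 0.7], [10.0, 3.0, 0.0])
```

The two-scale comparison then amounts to choosing, per taxon, which scale's occurrence and abundance layers feed this combination.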
Procedia PDF Downloads 79
914 A Low Cost Non-Destructive Grain Moisture Embedded System for Food Safety and Quality
Authors: Ritula Thakur, Babankumar S. Bansod, Puneet Mehta, S. Chatterji
Abstract:
Moisture plays an important role in the storage, harvesting and processing of food grains and related agricultural products, and it is an important characteristic for maintaining the quality of most agricultural products. Accurate knowledge of the moisture content can be of significant value in maintaining quality and preventing contamination of cereal grains. The present work reports the design and development of a microcontroller-based, low-cost, non-destructive moisture meter that uses the complex impedance measurement method for moisture measurement of wheat with a parallel-plate capacitor arrangement. Moisture can conveniently be sensed by measuring the complex impedance of a small parallel-plate capacitor sensor filled with the kernels in between its two plates, exciting the sensor at 30 kHz and 100 kHz. The effects of density and temperature variations were compensated by suitable corrections in the developed algorithm. The results were compared with the standard dry-oven technique, and the developed method was found to be highly accurate, with less than 1% error. The developed moisture meter is a low-cost, highly accurate, non-destructive instrument for determining the moisture of grains, utilizing the fast computing capabilities of a microcontroller. Keywords: complex impedance, moisture content, electrical properties, safety of food
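The measurement principle sketched below is the textbook relation behind such a sensor, not the authors' calibration: treating the kernel-filled sensor as an ideal capacitor, Z = 1/(jωC) with C = ε₀εᵣA/d, so a moisture-driven rise in the grain's relative permittivity εᵣ lowers |Z|. The geometry and permittivity values are illustrative assumptions:

```python
# Hedged sketch: complex impedance of an ideal parallel-plate grain
# sensor, Z = 1 / (j*w*C), C = eps0 * eps_r * A / d. Higher moisture ->
# higher eps_r -> lower |Z|. All numeric values are illustrative.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_impedance(eps_r, area=25e-4, gap=10e-3, freq=30e3):
    """Complex impedance (ohms) of the sensor at the excitation frequency."""
    c = EPS0 * eps_r * area / gap              # capacitance, F
    return 1.0 / (1j * 2 * math.pi * freq * c)

z_dry = abs(plate_impedance(eps_r=3.0))   # drier wheat, lower permittivity
z_wet = abs(plate_impedance(eps_r=6.0))   # moister wheat, higher permittivity
```

A real kernel bed is lossy, so the measured impedance also has a resistive part; exciting at two frequencies (30 kHz and 100 kHz) helps separate that loss from the capacitive moisture signal.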
Procedia PDF Downloads 463
913 The Study of Difficulties of Understanding Idiomatic Expressions Encountered by Translators 2021
Authors: Mohamed Elmogbail
Abstract:
The present study aimed at investigating the difficulties that translators encounter in understanding idiomatic expressions between the Arabic and English languages. To achieve this goal, the researcher raised three questions: (1) What are the major difficulties that translators encounter in translating idiomatic expressions? (2) What factors cause the difficulties that translators encounter in translating idiomatic expressions? (3) What techniques should be followed to overcome these difficulties? To answer these questions, the researcher designed a questionnaire (Table 2). Concerning the second question, on the factors behind the challenges translators encounter while translating idiomatic expressions, the translators reported the following: (1) because of a lack of exposure to the source culture, they do not know the connotations of cultural words related to the environment, food and folklore; (2) misuse of dictionaries leaves them unable to find a proper target-language idiomatic expression; (3) idiomatic expressions are rarely used in daily life.
Table (3) presents the questionnaire results concerning suggestions for handling these challenges. The questioned translators proposed the following: (1) translators must be exposed to the source-language culture, including religion, habits and traditions; (2) translators should also be exposed to source-language idiomatic expressions, by introducing English culture in textbooks and through participation in extensive English culture courses; (3) translators should be familiar with the differences between source- and target-language cultures; (4) translators should avoid literal translation, which in most cases results in wrong or poor translation; (5) schools, universities and institutions should introduce translators to English culture; (6) translators should participate in cultural workshops at universities; (7) translators should try to use idiomatic expressions in everyday situations; (8) translators should read more books on idiomatic expressions. The researcher also designed a translation test consisting of 40 excerpts, given to a random sample of 100 translators in Khartoum, the capital of Sudan, to translate. After collecting the data, the researcher proceeded to a more detailed analysis; the methodology used in the analysis of the idiomatic expressions is empirical and descriptive. The study is qualitative by nature, but quantitative methods were used in the analysis of the data, with some figures and statistics produced using the Statistical Package for the Social Sciences. The researcher calculated the percentage for each translated expression and compared the expressions with each other.
The findings of the study showed that most of the translations are inadequate: the translators faced difficulties that were mostly due to their unfamiliarity with idiomatic expressions, which produced improper equivalents, and to their inability to use translation techniques as required, so that they resorted to literal translation. The study therefore recommends that more comprehensive studies be carried out on translating idiomatic expressions to enrich the translation field. Keywords: translation, translators, idioms, expressions
Procedia PDF Downloads 148
912 Application of GA Optimization in Analysis of Variable Stiffness Composites
Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani
Abstract:
Variable angle tow (VAT) describes fibres that are curvilinearly steered within a composite lamina; this significantly enlarges the stiffness-tailoring freedom of the composite laminate. Composite structures with curvilinear fibres have been shown to improve the buckling load-carrying capability in contrast with straight-fibre laminates. However, the optimal design and analysis of VAT laminates face high computational costs due to the increasing number of variables. In this article, an efficient optimization scheme has been used in combination with the 1D Carrera Unified Formulation (CUF) to investigate the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on the LE-based CUF models, which use Lagrange expansions to provide a layerwise description of the problem unknowns. The first critical buckling load has been considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle, in comparison with the results obtained through a Genetic Algorithm (GA) optimization framework; an Artificial Neural Network (ANN) is then applied to investigate the accuracy of the optimized model. As a result, the numerical CUF approach with an optimal solution demonstrates the robustness and computational efficiency of the proposed optimization methodology. Keywords: beam structures, layerwise, optimization, variable stiffness
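The structure of the GA loop used in such a search can be sketched as follows; the fitness below is a toy surrogate peaking at 45°, standing in for the CUF buckling solve (far too heavy to inline here), and all GA hyperparameters are illustrative:

```python
# Hedged sketch of a genetic algorithm searching fibre-orientation
# angles. The quadratic "fitness" is a toy surrogate for the critical
# buckling load, NOT the CUF model; selection is elitist truncation.
import random

def fitness(angles):
    # Toy surrogate for the critical buckling load, maximal at 45 deg.
    return sum(1.0 - ((a - 45.0) / 45.0) ** 2 for a in angles)

def ga(n_angles=4, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 90) for _ in range(n_angles)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_angles)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # Gaussian mutation
                i = rng.randrange(n_angles)
                child[i] = min(90.0, max(0.0, child[i] + rng.gauss(0, 5)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()  # best angle set found by the GA
```

In the actual workflow each fitness evaluation is a CUF buckling analysis, which is exactly why the number of design variables drives the computational cost.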
Procedia PDF Downloads 145
911 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition
Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova
Abstract:
This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of the growing challenges of modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metal from powder feedstock, enabling the creation of complex geometries and optimized designs. DED is gradually becoming an increasingly attractive choice for tool production owing to its ability to achieve high precision while minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, the Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's wear resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and for optimizing its performance in different sections according to specific requirements. This work gives an overview of individual applications and their utilization in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components, alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools.
The production of functional surfaces and edges using DED technology can result in financial savings, as the entire tool does not have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials is applied. DED also allows tools to be repaired after wear, eliminating the need to produce a new part, contributing to overall cost savings and reducing the environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified processing technologies sets a new standard for the innovative and efficient development of cutting and forming tools in the modern industrial environment. Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy
Procedia PDF Downloads 51