Search results for: network density
149 Early Registration: Criterion to Improve Communication Inter-Agents in Mobile-IP Protocol
Authors: Hossam el-ddin Mostafa, Pavel Čičak
Abstract:
In IETF RFC 2002, Mobile-IP was developed to enable laptops to maintain Internet connectivity while moving between subnets. However, packet loss arises when switching subnets because network connectivity is lost while the mobile host registers with the foreign agent, and this incurs large end-to-end packet delays. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, in order to reduce the roaming duration, is the central issue considered in this paper. State-transition Petri nets of the modelled scenario-based CIA (communication inter-agents) procedure, an extension to the basic Mobile-IP registration process, were designed and manipulated to describe the system in discrete events. A heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco Router-1760 platform using IOS 12.3(15)T and TFTP server software. Finally, stand-alone performance simulations in Simulink/Matlab, within each subnet and also between subnets, are illustrated and report better end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation: they showed lower end-to-end packet delay for Mobile-IP using the CIA procedure-based early registration, and improved packet flow between subnets, reducing losses.
Keywords: Cisco configuration, handoff, Mobile-IP, packet delay, Petri-Nets, registration process, Simulink.
148 Managing Iterations in Product Design and Development
Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De
Abstract:
The inherent iterative nature of product design and development poses a significant challenge to reducing product design and development (PD) time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater ambiguity in design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand iteration probability is an open research area where a significant contribution can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.
Keywords: Decision Points, Iteration, Product Design, Rework.
147 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained with the dataset containing generated fake images, together with its accuracy and mean average error rate, was recorded. We observe that current GAN-based approaches need significant improvement in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.
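As an illustration of the data-augmentation step described above, combining real and GAN-generated fingerprint images before CNN training, the following is a minimal sketch rather than the authors' implementation: the folder layout, backbone network and hyperparameters are assumptions.

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms, models

tf = transforms.Compose([transforms.Grayscale(3),
                         transforms.Resize((224, 224)),
                         transforms.ToTensor()])

# Hypothetical folders, each containing 'live' and 'spoof' class subfolders.
real = datasets.ImageFolder("data/real", transform=tf)            # real captures
fake = datasets.ImageFolder("data/gan_generated", transform=tf)   # StyleGAN outputs
train_set = ConcatDataset([real, fake])                            # merged training data
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(num_classes=2)     # any CNN backbone; 2 classes: live vs. spoof
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```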
146 A Piscan Ulcerative Aeromonas Infection
Authors: Ibrahim M. S. Shnawa, Bashar A. H. E. Alsadi, Kalida K. Alniaem
Abstract:
In the immunologic sense, clinical infection is a state of failure of the immune system to combat the pathogenic weapons of the bacteria invading the host. A motile, Gram-negative, vibrioid organism associated with marked mononuclear and polynuclear cell responses was traced during the examination of clinical material from an infected common carp, Cyprinus carpio. On the primary plate culture, growth appeared as a pure, dense population of an Aeromonas-like colony morphotype. The pure isolate was found to be aerobic and facultatively anaerobic, non-halophilic, grew at 0 °C and at 37 °C, was oxidase positive, utilized glucose through the fermentative pathway, resisted 0/129 and novobiocin, and produced alanine and lysine decarboxylases but not ornithine dehydrolase. Tests for the in vitro determinants of pathogenicity showed it to be beta-haemolytic on blood agar and a producer of gelatinase, caseinase and amylase. Three in vivo determinants of pathogenicity were tested: the median lethal dose (LD50), the pathogenesis and the pathogenicity. It was evident that 0.1 millilitre of the causal bacterial cell suspension at a density of 1 × 10^7 CFU/ml, injected intramuscularly into fish of about 100 g, took a five-day incubation period; morbidity and mortality then began on day six. The LD50 was recorded at day 12 post-infection. Using LD50 doses to study pathogenicity revealed mononuclear and polynuclear cell responses on examining stained direct films of the clinical materials from the experimentally infected fish. Re-isolation tests confirmed that the re-isolate was the same organism. The course of the natural infection showed manifestations of skin ulceration, haemorrhage and descaling. On evisceration, the internal organs showed congestion in the intestines, spleen and air sacs. The induced infection showed a milder form of these manifestations. On grading, the organism was virulent, causing a chronic course of infection, as indicated by the pathogenesis and pathogenicity studies. Thus, the infectious bacterium was consistent with Aeromonas hydrophila, and the infection was chronic.
Keywords: Piscan, inflammatory response, pure culture, pathogen, chronic, infection.
145 Applying Case-Based Reasoning in Supporting Strategy Decisions
Authors: S. M. Seyedhosseini, A. Makui, M. Ghadami
Abstract:
Globalization, and the resulting increasingly tight competition among companies, has increased the importance of making well-timed decisions. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. On the other hand, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Therefore, applying well-organized tools to employ past experience in new cases helps make proper managerial decisions. Case-based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbors (k-NN) is employed to provide suggestions for better decision making, adopted for a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A department-store dataset, including various products collected over two years, has been used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case-based reasoning algorithm, which has no special process for feature selection, with the CBR-PCA algorithm based on filter-approach feature selection, and with an artificial neural network. The results indicate that the predictive performance of the model, compared with the two CBR algorithms, is more effective in the specific case.
Keywords: Case based reasoning, Genetic algorithm, Group decision making, Product management.
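To make the retrieval-and-weighting step above concrete, here is a minimal sketch of similarity-based case retrieval with k-NN and similarity-weighted voting; the feature matrix, decision labels and the GA-selected feature subset are all placeholders, not the study's data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import MinMaxScaler

# Hypothetical case base: each row is a past case described by 8 features,
# with an associated strategy decision (class label 0..2).
case_features = np.random.rand(200, 8)
case_decisions = np.random.randint(0, 3, 200)

# A GA wrapper would select a feature subset; a fixed mask stands in for it here.
selected = np.array([0, 2, 3, 5, 7])

scaler = MinMaxScaler()
X = scaler.fit_transform(case_features[:, selected])

# Retrieve the k most similar past cases for a new problem...
knn = NearestNeighbors(n_neighbors=5).fit(X)
new_case = scaler.transform(np.random.rand(1, 8)[:, selected])
dist, idx = knn.kneighbors(new_case)

# ...and weight their decisions by similarity (group decision making).
weights = 1.0 / (dist[0] + 1e-9)
votes = np.bincount(case_decisions[idx[0]], weights=weights, minlength=3)
print("suggested strategy:", votes.argmax())
```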
144 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots
Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov
Abstract:
This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection within underground pipeline networks. Current methods of fault detection within pipes are costly, time-consuming and inefficient. As such, solutions tend toward a more reactive approach, repairing faults, as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP) and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in the pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
Keywords: Autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem.
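The Chinese Postman idea underlying the routing above can be sketched on a toy pipe graph with NetworkX; the k-agent variant (k-CPP) used in the paper is more involved, so this single-agent sketch, with made-up junctions and pipe lengths, only illustrates the covering-route principle.

```python
import networkx as nx

# Hypothetical pipe network: nodes are junctions, edges are pipe segments with lengths.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 40), ("B", "C", 25), ("C", "D", 30),
    ("D", "A", 35), ("B", "D", 20), ("A", "C", 50),
])

# Chinese Postman idea: duplicate edges so every junction has even degree (eulerize),
# then an Euler circuit traverses every pipe at least once.
H = nx.eulerize(G)
route = list(nx.eulerian_circuit(H, source="A"))

length = sum(G[u][v]["weight"] for u, v in route)
print("inspection route:", route)
print("total traversed length:", length)
```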
143 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration
Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith
Abstract:
Multimodal image registration is a profoundly complex task, which is why deep learning has been used widely to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discrimination networks D_A, D_B connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which is learning to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, therefore resulting in images Â, B̂ which were registered to B̃, Ã, which were registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to and slightly more successful than algorithms like SimpleElastix and VoxelMorph.
Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.
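A minimal PyTorch sketch of the cycle-consistency term described above: `reg_ab` and `reg_ba` stand in for the registration networks G_AB and G_BA (their `(fixed, moving)` call signature is an assumption), and `warp` plays the role of the spatial transformation layer. This is an illustrative sketch, not the NANCY implementation.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Spatial transformation layer: resample `img` with a dense displacement `flow`
    given in normalized [-1, 1] coordinates, shape (N, 2, H, W)."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(img, grid, align_corners=True)

def cycle_loss(a, b, reg_ab, reg_ba):
    """Register each image towards the other modality, register the result back,
    and penalize the difference from the original (cycle consistency)."""
    a_tilde = warp(a, reg_ba(b, a))                 # A deformed towards B
    a_hat = warp(a_tilde, reg_ab(a, a_tilde))       # deformed back towards A
    b_tilde = warp(b, reg_ab(a, b))                 # B deformed towards A
    b_hat = warp(b_tilde, reg_ba(b, b_tilde))       # deformed back towards B
    return F.l1_loss(a_hat, a) + F.l1_loss(b_hat, b)

# Smoke test with dummy zero-deformation "networks": the loss is ~0.
a, b = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
zero_net = lambda fixed, moving: torch.zeros(1, 2, 64, 64)
print(cycle_loss(a, b, zero_net, zero_net))
```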
142 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest. Such pneumonic conditions lead to loss of life due to the severity of high congestion. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes input from raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease classification. We designed the Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract the automatic features from the pre-processed CT image. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model is related to training and classification using different classifiers. The simulation outcomes using the publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.
141 Conflation Methodology Applied to Flood Recovery
Authors: E. L. Suarez, D. E. Meeroff, Y. Yong
Abstract:
Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
Keywords: Community resilience, conflation, flood risk, nuisance flooding.
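As a worked illustration of the conflation step described above, the normalized product of the input probability density functions, the following sketch conflates two hypothetical normal recovery-time distributions (the numbers are made up, not the study's data) and checks that the result is the precision-weighted average that favors the lower-variance input.

```python
import numpy as np
from scipy import stats

# Hypothetical recovery-time distributions in days (illustrative only):
severe = stats.norm(loc=30.0, scale=10.0)    # recovery from a severe flooding event
nuisance = stats.norm(loc=12.0, scale=3.0)   # performance during nuisance flooding

x = np.linspace(-50, 100, 20001)
product = severe.pdf(x) * nuisance.pdf(x)
conflated = product / np.trapz(product, x)   # normalized product of the two densities

mean = np.trapz(x * conflated, x)
w1, w2 = 1 / severe.var(), 1 / nuisance.var()
print(mean)                                  # ~13.5 days
print((w1 * 30.0 + w2 * 12.0) / (w1 + w2))   # precision-weighted average, also ~13.5
```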
140 Experimental and Numerical Study of the Effect of Lateral Wind on the Feeder Airship
Authors: A. Suñol, D. Vucinic, S. Vanlanduit, T. Markova, A. Aksenov, I. Moskalyov
Abstract:
Feeder is one of the airships of the Multibody Advanced Airship for Transport (MAAT) system, under development within the EU FP7 project. MAAT is based on a modular concept composed of two different parts that have the possibility to join; respectively, they are the so-called Cruiser and Feeder, designed on the lighter-than-air principle. Feeder, also named ATEN (Airship Transport Elevator Network), is the smaller one, which joins the bigger one, Cruiser, also named PTAH (Photovoltaic modular Transport Airship for High altitude); the joining is envisaged to happen at 15 km altitude. During the MAAT design phase, the aerodynamic studies of both airships and their interactions are analyzed. The objective of these studies is to understand the aerodynamic behavior of all the preselected configurations, as an important element in the overall MAAT system design. Most of these configurations are only simulated by CFD, while the most feasible one is experimentally analyzed in order to validate and trust the CFD predictions. This paper presents the numerical and experimental investigation of the Feeder "conical-like" shape configuration. The experiments are focused on the aerodynamic force coefficients and the pressure distribution over the Feeder outer surface, while the numerical simulations also cover the analysis of the velocity and pressure distribution. Finally, the wind tunnel experiment is compared with its CFD model in order to validate such specific simulations with the respective experiments and to better understand the difference between the wind tunnel and in-flight circumstances.
Keywords: MAAT project, Feeder, CFD simulations, wind tunnel experiments, lateral wind influence.
139 A Survey of WhatsApp as a Tool for Instructor-Learner Dialogue, Learner-Content Dialogue, and Learner-Learner Dialogue
Authors: Ebrahim Panah, Muhammad Yasir Babar
Abstract:
Thanks to the development of online technology and social networks, people are able to communicate as well as learn. WhatsApp is a social network that is rapidly gaining popularity. This app can be used for communication as well as education. It can be used for instructor-learner, learner-learner, and learner-content interactions; however, very little knowledge is available on these potentials of WhatsApp. The current study was undertaken to investigate university students' perceptions of WhatsApp used as a tool for instructor-learner dialogue, learner-content dialogue, and learner-learner dialogue. The study adopted a survey approach and distributed a questionnaire developed in Google Forms to 54 (11 male and 43 female) university students. The obtained data were analyzed using SPSS version 20. The result of the data analysis indicates that students have positive attitudes towards WhatsApp as a tool for instructor-learner dialogue: it is easy to reach the lecturer (4.07), the instructor gives me valuable feedback on my assignment (4.02), and the instructor is supportive during course discussion and offers continuous support with the class (4.00); for learner-content dialogue: WhatsApp allows me to academically engage with lecturers anytime, anywhere (4.00), it helps to send graphics such as pictures or charts directly to the students (3.98), and it also provides out-of-class, extra learning materials and homework (3.96); and for learner-learner dialogue: WhatsApp is a good tool for sharing knowledge with others (4.09), WhatsApp allows me to academically engage with peers anytime, anywhere (4.07), and we can interact with others through the use of group discussion (4.02). It was also found that there are significant positive correlations between students' perceptions of Instructor-Learner Dialogue (ILD), Learner-Content Dialogue (LCD), Learner-Learner Dialogue (LLD) and the WhatsApp application in the classroom. The findings of the study have implications for lecturers, policy makers and curriculum developers.
Keywords: Instructor-learner dialogue, learner-content dialogue, learner-learner dialogue, WhatsApp.
138 A Study on Integrated Performance of Tap-Changing Transformer and SVC in Association with Power System Voltage Stability
Authors: Mahmood Reza Shakarami, Reza Sedaghati
Abstract:
Electricity market activities and a growing demand for electricity have led to heavily stressed power systems. This requires operation of the networks closer to their stability limits. Power system operation is affected by stability-related problems, leading to unpredictable system behavior. Voltage stability refers to the ability of a power system to sustain appropriate voltage levels through large and small disturbances. Steady-state voltage stability is concerned with limits on the existence of steady-state operating points for the network. FACTS devices can be utilized to increase the transmission capacity, the stability margin and the dynamic behavior, or serve to ensure improved power quality. Their main capabilities are reactive power compensation, voltage control and power flow control. Among the FACTS controllers, the Static Var Compensator (SVC) provides fast-acting dynamic reactive compensation for voltage support during contingency events. In this paper, voltage stability assessment with appropriate representations of tap-changing transformers and SVC is investigated. Integrating both of these devices is the main topic of this paper. The effect of the presence of tap-changing transformers on the static VAR compensator controller parameters and the ratings necessary to stabilize load voltages at certain values is highlighted. The interrelation between transformer off-nominal tap ratios, the SVC controller gains and droop slopes, and the SVC rating is established. P-V curves are constructed to calculate loadability margins.
Keywords: SVC, voltage stability, P-V curve, reactive power, tap changing transformer.
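To illustrate how a P-V (nose) curve of the kind mentioned above is constructed, here is a textbook two-bus sketch with assumed per-unit values, not the study's network: for a load P, Q supplied through a series reactance X from a source of voltage E, the receiving-end voltage satisfies V^4 + (2QX - E^2)V^2 + X^2(P^2 + Q^2) = 0, and the loadability margin is reached when this quadratic in V^2 loses its real solutions.

```python
import numpy as np

E, X = 1.0, 0.3          # assumed source voltage and line reactance (per unit)
tan_phi = 0.2            # assumed load power factor angle: Q = 0.2 * P

for P in np.arange(0.0, 1.7, 0.1):
    Q = tan_phi * P
    # V^4 + (2QX - E^2) V^2 + X^2 (P^2 + Q^2) = 0, solved as a quadratic in V^2
    b, c = 2 * Q * X - E ** 2, X ** 2 * (P ** 2 + Q ** 2)
    disc = b ** 2 - 4 * c
    if disc < 0:
        print(f"P = {P:.1f} pu: beyond the nose point (no steady-state solution)")
        break
    V = np.sqrt((-b + np.sqrt(disc)) / 2)   # upper (stable) branch of the P-V curve
    print(f"P = {P:.1f} pu  ->  V = {V:.3f} pu")
```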
137 Construction Unit Rate Factor Modelling Using Neural Networks
Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula
Abstract:
Factors affecting construction unit cost vary depending on a country's political, economic, social and technological inclinations. Factors affecting construction costs have been studied from various perspectives. Analysis of cost factors requires an appreciation of a country's practices. Identified cost factors provide an indication of a country's construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation and their breakdown using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey and, employing SPSS factor analysis, the factors were reduced to eight. The 8 factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22%, and financial delays, project feasibility and overhead & profit each at 11%. Project location, material availability and the corruption perception index had minimal impact on the unit cost from the training data provided. Quantified cost factors can be incorporated in unit cost estimation models (UCEM) to produce more accurate estimates. This can create improvements in the cost estimation of infrastructure projects and establish a benchmark standard to assist the process of alignment of work practices and training of new staff, permitting the ongoing development of best practices in cost estimation to become more effective.
Keywords: Construction cost factors, neural networks, roadworks, Zambian Construction Industry.
136 A Research on the Coordinated Development of Chengdu-Chongqing Economic Circle Under the Background of New Urbanization
Authors: Deng Tingting
Abstract:
The coordinated and integrated development of regions is an inevitable requirement for China to move towards high-quality sustainable development. As one of the regions with the best economic foundation and the strongest economic strength in western China, the Chengdu-Chongqing economic circle is a typical area of national importance with strong network-connection characteristics, linking the inland hinterland and connecting the western and national urban networks. The integrated development of the Chengdu-Chongqing economic circle is of great strategic significance for the rapid and high-quality development of the western region. In the context of new urbanization, this paper takes 16 urban units within the economic circle as the research object and, based on five-year panel data on population, the regional economy, and spatial construction and development from 2016 to 2020, uses the entropy method and the Theil index to analyze the three target layers and the causes of the differences. The research shows that there are temporal and spatial differences within the Chengdu-Chongqing economic circle, and there are significant differences between the core cities and the surrounding cities. Therefore, reforming and innovating the regional coordinated development mechanism, breaking administrative barriers, and strengthening the "polar nucleus" radiation function to release the driving force for economic development, especially in the gully areas of the economic development belts, will not only promote the coordinated development of the internal regions, but also promote the coordinated and sustainable development of the western region along a high-quality development path.
Keywords: Chengdu-Chongqing economic circle, new urbanization, coordinated regional development, Theil Index.
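For reference, the regional-disparity measure named above can be computed directly; the sketch below uses the standard Theil T index on made-up per-capita GDP figures, purely for illustration rather than the paper's panel data.

```python
import numpy as np

def theil_index(values):
    """Theil T index: 0 means perfect equality; larger values mean larger disparity."""
    x = np.asarray(values, dtype=float)
    mean = x.mean()
    return np.mean((x / mean) * np.log(x / mean))

# Hypothetical per-capita GDP figures for a few cities (illustrative numbers only).
gdp_per_capita = [86000, 78000, 52000, 47000, 45000, 41000, 39000, 36000]
print(f"Theil index: {theil_index(gdp_per_capita):.4f}")
```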
135 The Impact of Financial System on Mixed Use Development – Unrest in UK and Sense of Safety in Mixed Use Development
Authors: Tamara Kelly
Abstract:
The past decade has witnessed good opportunities for city development schemes in the UK. The government encouraged the restoration of city centres to comprise mixed use developments with high-density residential apartments. Investments in regeneration areas were doing well according to the analyses of the Investment Property Databank (IPD). However, more recent analysis by IPD has shown that, since 2007, property in regeneration areas has been more vulnerable to the market downturn than other types of investment property. The early stages of a property market downturn may be felt most in regeneration, where funding, investor confidence and occupier demand would dissipate because the sector was considered more marginal or risky as development costs rise. Moreover, the Bank of England survey shows that lenders have sequentially tightened the availability of credit for commercial real estate since mid-2007. A sharp reduction in the willingness of banks to lend on commercial property was recorded. The credit crunch has already affected commercial property, but its impact has been particularly severe in certain kinds of properties where residential developments are extremely difficult, in particular city centre apartments and buy-to-let markets. Commercial property – retail, industrial, leisure and mixed use – was also pressed; in Birmingham, tens of mixed use plots were built to replace old factories in the heart of the city. The purpose of these developments was to enable young professionals to work and live in the same place. Thousands of people lost their jobs during the recession, lending became more difficult, and the future of many developments is unknown. The recession cast its shadow upon society: cuts in public spending by government, inflation, rising tuition fees and a sharp rise in unemployment generated anger and hatred that spread among youth, causing vandalism and riots in many cities. Recent riots targeted many mixed use developments in the UK, where banks, shops, restaurants and big stores were robbed and set on fire, leaving residents in horror and shock. This paper examines the impact of the recession and the riots on mixed use development in the UK.
Keywords: Diversity, mixed use development, outdoor comfort, public realm, safe places, safety by design.
134 Modeling and Simulation of Overcurrent and Earth Fault Relay with Inverse Definite Minimum Time
Authors: Win Win Tun, Han Su Yin, Ohn Zin Lin
Abstract:
Transmission networks are an important part of an electric power system. Transmission lines not only have high power transmission capacity but are also prone to faults of large magnitude. Different types of faults occur on transmission lines, such as single line-to-ground (L-G) faults, double line-to-ground (L-L-G) faults, line-to-line (L-L) faults and three-phase (L-L-L) faults. These faults need to be cleared quickly in order to reduce the damage caused to the system, and they have a high impact on the electrical power system equipment connected to the transmission line. The most common fault on transmission lines is the L-G fault. Therefore, protection relays are needed to protect the transmission line. The overcurrent and earth fault relay is an important relay used to protect transmission lines, distribution feeders, transformers, bus couplers, etc. Sometimes these relays can be used as main protection or backup protection. The modeling of protection relays is important to indicate the effects of network parameters and configurations on the operation of relays. Therefore, the modeling of an overcurrent and earth fault relay is described in this paper. The overcurrent and earth fault relays with standard inverse definite minimum time are modeled and simulated using MATLAB/Simulink software. The developed model was tested with L-G, L-L-G, L-L and L-L-L faults at various fault locations and with a fault resistance of 0.001 Ω. The simulation results obtained with MATLAB show the feasibility of analysing transmission line protection with the overcurrent and earth fault relay.
Keywords: Transmission line, overcurrent and earth fault relay, standard inverse definite minimum time, various faults, MATLAB Software.
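The standard inverse (IDMT) characteristic referred to above follows the IEC curve t = TMS × 0.14 / (M^0.02 − 1), where M is the fault current expressed as a multiple of the pickup setting and TMS is the time multiplier setting; a small sketch with assumed settings, not those of the simulated relay:

```python
def idmt_standard_inverse(fault_current, pickup_current, tms):
    """IEC standard inverse (normal inverse) operating time in seconds."""
    m = fault_current / pickup_current     # plug-setting multiplier
    if m <= 1.0:
        return float("inf")                # relay does not operate below pickup
    return tms * 0.14 / (m ** 0.02 - 1.0)

# Example: 2000 A fault, 400 A pickup, time multiplier setting 0.2 -> ~0.86 s
print(f"trip time: {idmt_standard_inverse(2000, 400, 0.2):.2f} s")
```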
133 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning
Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar
Abstract:
As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so we cannot find a general solution. Some optimization algorithms exist, like the genetic algorithm, ant colony optimization, etc., but the large scale of distributed systems makes these traditional optimization algorithms challenging to apply. Heuristic and machine learning algorithms are usually applied in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using the machine learning method, we try to find important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling. The research proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the resource scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.
Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.
132 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: Conditional Generative Adversarial Net, market and credit risk management, neural network, time series.
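As an illustration of the market-risk use mentioned above: once a trained CGAN (or any scenario generator) can simulate portfolio profit-and-loss scenarios, VaR and ES follow directly from the empirical distribution. In this sketch, random heavy-tailed numbers stand in for the generator output; the figures are illustrative only.

```python
import numpy as np

# Hypothetical simulated one-day portfolio P&L scenarios (placeholder for CGAN samples).
rng = np.random.default_rng(0)
pnl = rng.standard_t(df=4, size=100_000) * 1e5   # heavy-tailed P&L in dollars

alpha = 0.99
var = -np.quantile(pnl, 1 - alpha)               # loss threshold exceeded 1% of the time
es = -pnl[pnl <= -var].mean()                    # average loss beyond the VaR threshold

print(f"99% VaR: ${var:,.0f}")
print(f"99% ES : ${es:,.0f}")
```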
131 Effect of Soil Tillage System upon the Soil Properties, Weed Control, Quality and Quantity Yield in Some Arable Crops
Authors: T. Rusu, P. I. Moraru, I. Bogdan, A. I. Pop, M. L. Sopterean
Abstract:
The paper presents the influence of conventional ploughing tillage technology, in comparison with minimum tillage, upon the soil properties, weed control and yield in the case of maize (Zea mays L.), soya-bean (Glycine hispida L.) and winter wheat (Triticum aestivum L.) in a three-year crop rotation. The research was conducted at the University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca, Romania. The use of minimum soil tillage systems within a three-year rotation of maize, soya-bean and wheat favours an increase in aggregate hydro-stability of 5.6-7.5% at 0-20 cm depth and 5-11% at 20-30 cm depth. The minimum soil tillage systems – paraplow, chisel or rotary harrow – are polyvalent alternatives for basic preparation, germination bed preparation and sowing, for fields and crops with moderate loosening requirements, being optimized technologies for activating and rationalizing the soil's natural fertility, reducing erosion, increasing the water accumulation capacity and carrying out sowing in the optimal period. The soil tillage system influences the productivity elements of the cultivated species and, ultimately, the yields obtained. Thus, relative to the conventional working system, the yields registered under minimum tillage represented 89-97% in maize, 103-112% in soya-bean and 93-99% in winter wheat. The results of the investigations showed that yield reflects the influence of the soil tillage system on soil properties, plant density assurance and weed control. Under minimum tillage systems, in the case of winter wheat, as an option for replacing classic ploughing, the best results in terms of quality indices were obtained from the variant worked with paraplow, followed by rotary harrow and chisel. The variant worked with paraplow gave quality indices close to those of the variant worked with plough, and the protein and gluten content was even higher. For the Ariesan variety, the highest protein content, 12.50%, and gluten content, 28.6%, were obtained for the paraplow variant.
Keywords: Minimum tillage, soil properties, yield quality.
130 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal
Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das
Abstract:
The rapid depletion of high-grade iron ore (Fe2O3) has drawn attention to the use of other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore having a high titania content (23.23% TiO2 in this case). Due to the high TiO2 content and high density, TMO cannot be treated by conventional smelting reduction. In the present work, the TMO has been collected from the high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex situated in the eastern part of India (Shaltora area, Bankura district, West Bengal) and the hematite ore has been collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam. At VSP, iron ore is received from the Bailadila mines, Chattisgarh, of M/s. National Mineral Development Corporation. The preliminary characterization of TMO and hematite ore (HMO) has been investigated by WDXRF, XRD and FESEM analyses. Similarly, good-quality coal (mainly coking coal) is also being depleted fast. The basic purpose of this work is to find how lean grade coal can be utilised along with TMO for smelting to produce pig iron. The lean grade coal has been characterised using TG/DTA, proximate and ultimate analyses. The boiler grade coal has been found to contain 28.08% fixed carbon and 28.31% volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) have been separately agglomerated with lean grade coal fines (below 75 μm) in the form of briquettes using binders like bentonite and molasses. These green briquettes are dried first in an oven at 423 K for 30 min and then reduced isothermally in a tube furnace over the temperature range of 1323 K, 1373 K and 1423 K for 30 min and 60 min. After reduction, the reduced briquettes are characterized by XRD and FESEM analyses. The best reduced TMO and HMO samples are taken and blended in three different weight ratios of 1:4, 1:8 and 1:12 of TMO:HMO. The chemical analysis of the three blended samples is carried out and the degree of metallisation of iron is found to be 89.38%, 92.12% and 93.12%, respectively. These three blended samples are briquetted using binders like bentonite and lime. Thereafter these blended briquettes are separately smelted in a raising hearth furnace at 1773 K for 30 min. The pig iron formed is characterized using XRD and microscopic analysis. It can be concluded that a 90% yield of pig iron can be achieved when the blend ratio of TMO:HMO is 1:4.5. This means that, for a 90% yield, the maximum TMO that could be used in the blend is about 18%.
Keywords: Briquetting reduction, lean grade coal, smelting reduction, TMO.
129 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System
Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya
Abstract:
The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains. Earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in the vicinity and relay a message to a base station to alert the residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events from the incoming three-axis accelerometer signal in the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector that detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized to minimize detection time and maximize the accuracy of detection. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces linearly with the sampling rate: with the test data, the detection time for data sampled at 10 Hz was around 2 seconds, which reduced to 0.3 second for data sampled at 100 Hz.
Keywords: Earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector.
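The event tracking referred to in the keywords is the classic STA/LTA trigger; the following sketch is a simplified single-channel version with assumed window lengths and threshold, not the sensor's firmware, demonstrated on a synthetic trace sampled at 100 Hz.

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    """Causal STA/LTA event detector on one accelerometer channel.
    Returns the first sample index whose STA/LTA ratio exceeds the threshold, or None."""
    x = np.abs(np.asarray(signal, dtype=float))
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    idx = np.arange(n_lta, len(x))
    sta = (csum[idx + 1] - csum[idx + 1 - n_sta]) / n_sta   # trailing short-term average
    lta = (csum[idx + 1] - csum[idx + 1 - n_lta]) / n_lta   # trailing long-term average
    ratio = sta / np.maximum(lta, 1e-12)
    hits = np.nonzero(ratio > threshold)[0]
    return int(idx[hits[0]]) if hits.size else None

# Synthetic test: background noise with an abrupt onset of stronger shaking at t = 60 s.
fs = 100.0
rng = np.random.default_rng(1)
trace = rng.normal(0, 1.0, int(120 * fs))
trace[int(60 * fs):] += rng.normal(0, 8.0, int(60 * fs))
onset = sta_lta(trace, fs)
print(f"trigger at t = {onset / fs:.2f} s" if onset is not None else "no trigger")
# prints a trigger shortly after t = 60 s
```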
128 Wind Energy Development in the African Great Lakes Region to Supplement the Hydroelectricity in the Locality: A Case Study from Tanzania
Authors: R.M. Kainkwa
Abstract:
The African Great Lakes Region refers to the zone around lakes Victoria, Tanganyika, Albert, Edward, Kivu, and Malawi. The main source of electricity in this region is hydropower, whose systems are generally characterized by relatively weak, isolated power schemes, poor maintenance and technical deficiencies, with limited electricity infrastructure. Most of the hydro sources are rain-fed, and as such there is normally a deficiency of water during the dry seasons and extended droughts. In such calamities, fossil fuel sources, in particular petroleum products and natural gas, are normally used to rescue the situation, but apart from being non-renewable, they also release huge amounts of greenhouse gases into our environment, which in turn accelerates the global warming that has at present reached an alarming stage. Wind power is an ample, renewable, widely distributed, clean, and free energy source that does not consume or pollute water. Wind-generated electricity is one of the most practical and commercially viable options for grid-quality and utility-scale electricity production. However, the main shortcoming associated with electric wind power generation is the fluctuation of its output both in space and time. Before making a decision to establish a wind park at a site, the wind speed features there should therefore be known thoroughly, as well as the local demand or transmission capacity. The main objective of this paper is to utilise monthly average wind speed data collected from one prospective site within the African Great Lakes Region to demonstrate that the available wind power there is high enough to generate electricity. The mean monthly values were calculated from records gathered on an hourly basis for a period of 5 years (2001 to 2005) from a site in Tanzania. The records, which were collected at a height of 2 m, were projected to a height of 50 m, the standard hub height of wind turbines. The overall monthly average wind speed was found to be 12.11 m/s, and June to November was established to be the windy season, as the wind speed during this period is above the overall monthly wind speed. The available wind power density corresponding to the overall mean monthly wind speed was evaluated to be 1072 W/m2, a potential that is worth harvesting for the purpose of electricity generation.
Keywords: Hydro power, windy season, available wind power density.
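A quick check of the reported figure, using the standard available-wind-power-density relation P/A = ½ρv³ with an assumed sea-level air density (the small difference from the paper's 1072 W/m² would come from the site's actual air density):

```python
rho = 1.225          # assumed air density, kg/m^3
v = 12.11            # overall mean monthly wind speed at 50 m, m/s (from the abstract)

power_density = 0.5 * rho * v ** 3   # available wind power per unit swept area, W/m^2
print(f"{power_density:.0f} W/m^2")  # ~1088 W/m^2, close to the reported 1072 W/m^2
```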
127 User-Perceived Quality Factors for Certification Model of Web-Based System
Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh
Abstract:
One of the most essential issues in software products is to maintain their relevancy to the dynamics of users' requirements and expectations. Many studies have been carried out on the quality aspects of software products to overcome these problems. Previous software quality assessment models and metrics have been introduced, with strengths and limitations. In order to enhance the assurance and buoyancy of software products, certification models have been introduced and developed. From our previous experience in certification exercises and case studies, collaborating with several agencies in Malaysia, the requirement for a user-based software certification approach was identified and demanded. The emergence of social network applications, new development approaches such as the agile method, and other varieties of software in the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems with different interests and perspectives. The classifications and metrics are identified through a brainstorming approach that includes researchers, users and experts in this area. The new paradigm in software quality assessment is the main focus of our research. This paper discusses the classifications of users in web-based software system assessment and their associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. The developments are beneficial and valuable for overcoming the constraints and improving the application of the software certification model in the future.
Keywords: Software certification model, user centric approach, software quality factors, metrics and measurements, web-based system.
126 Expanding Affordable Housing through Inclusionary Zoning in the City of Toronto
Authors: Sam Moshaver
Abstract:
Reasonably priced and well-constructed housing must be an integral element supporting a healthy society. The absence of housing that everyone in society can afford negatively affects people's health, education, ability to get jobs and ability to develop their community. Without access to decent housing, economic development, the integration of immigrants and inclusiveness are negatively impacted. Canada has a sterling record in creating housing compared to many other nations around the globe. Canadian housing gets support from a mature and responsive mortgage network and a top-quality construction industry, as well as safe and excellent-quality building materials that are readily available. Yet 1.7 million Canadian households occupy substandard abodes. During the past hundred years, Canada's government has made a wide variety of attempts to provide decent residential facilities every Canadian can afford. Despite these laudable efforts, today Canada is left with housing that is inadequate for many Canadians. People who own their housing are given all kinds of privileges and perks, while people with relatively low incomes who rent their apartments or houses are discriminated against. To help solve these problems, zoning that is based on an "inclusionary" philosophy is a tool developed to help provide people with the affordable residences that they need. Now, thirty years after its introduction, this type of zoning has been shown effective in helping build and provide Canadians with houses or apartments they can afford to pay for. Using this form of zoning can have different results depending on where and how it is used. After examining Canadian affordable housing and four American cases where this type of zoning was enforced in the USA, this paper makes various recommendations for expanding Canadians' access to housing they can afford.
Keywords: Affordable Housing, Inclusionary Zoning, Low-Income Housing, Toronto Housing.
125 Time Series Forecasting Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on the learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the website of the University of California, Irvine (UCI), which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform the best to predict 3 hours into the future.
Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
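The fixed-length look-back window described above is typically realized by slicing the series into (window, target) pairs before training any of the models; a minimal sketch, with a placeholder array standing in for the hourly air-quality data:

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Turn a (time, features) array into supervised pairs:
    X[i] = the `look_back` past steps, y[i] = the target `horizon` steps ahead."""
    X, y = [], []
    for t in range(look_back, len(series) - horizon + 1):
        X.append(series[t - look_back:t])
        y.append(series[t + horizon - 1, 0])   # assume column 0 is the target variable
    return np.array(X), np.array(y)

# Hypothetical hourly data: 1000 time steps, 5 features (placeholder for the real dataset).
data = np.random.rand(1000, 5)
X, y = make_windows(data, look_back=24, horizon=1)    # one day of history, 1 hour ahead
print(X.shape, y.shape)   # (976, 24, 5) (976,)
```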
124 Interest of the Sequences Pseudo Noises Codes of Different Lengths for the Reduction from the Interference between Users of CDMA Network
Authors: Nerguè Kassahan Kone, Souleymane Oumtanaga
Abstract:
The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. Unlike IS-95 or CDMAOne (the spread-spectrum systems of the preceding generation), the new standard, called the Universal Mobile Telecommunications System (UMTS), uses long codes in the downlink. The system is conceived for voice communication and data transmission. In particular, the downlink is very important because of the asymmetrical demand for data, i.e., more downloading towards the mobiles than towards the base station. Moreover, for the downlink UMTS uses orthogonal spreading with a variable spreading factor (OVSF, Orthogonal Variable Spreading Factor). This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factor of other users. In the current UMTS standard, two techniques were proposed to increase downlink performance: transmit antenna diversity and space-time codes. These two techniques combat only fading. The receiver proposed for the mobile station is the RAKE, but one can imagine a more sophisticated receiver, able to reduce the interference between users and the impact of coloured noise and narrow-band interference. In this context, where users have synchronized long codes with variable spreading factors and the mobile is unaware of the other active codes/users, the use of pseudo-noise code sequences of different lengths is presented as one of the most appropriate solutions.
Keywords: DS-CDMA, multiple access interference, signal to interference plus noise ratio.
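For reference, OVSF codes are built recursively on a binary code tree: each code c of length N spawns two children [c, c] and [c, −c] of length 2N, and codes of the same spreading factor are mutually orthogonal. A small sketch of that construction (illustrative, not a UMTS implementation):

```python
import numpy as np

def ovsf_codes(sf):
    """Generate all OVSF spreading codes of spreading factor `sf` (a power of two)."""
    codes = np.array([[1]])
    while codes.shape[1] < sf:
        # Each code c spawns two children: [c, c] and [c, -c]
        codes = np.vstack([np.hstack([codes, codes]),
                           np.hstack([codes, -codes])])
    return codes

c8 = ovsf_codes(8)
# Codes of the same spreading factor are mutually orthogonal:
print((c8 @ c8.T == 8 * np.eye(8)).all())   # True
```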
123 Modeling the Influence of Socioeconomic and Land-Use Factors on Mode Choice: A Comparison of Riyadh, Saudi Arabia, and Melbourne, Australia
Authors: M. Alqhatani, S. Bajwa, S. Setunge
Abstract:
Metropolitan areas have suffered from traffic problems, which have steadily increased in many monocentric cities. Urban expansion, population growth, and road network development have resulted in a structural shift toward urban sprawl, increasing commuters’ dependence on private modes of transport. This paper aims to model the influence of socioeconomic and land-use factors on mode choice using a multinomial and nested logit model. Land-use patterns—such as residential, commercial, retail, educational and employment related—affect the choice of mode and destination in the short and medium term. Socioeconomic factors—such as age, gender, income, household size, and house type—also affect choice, while residential location is affected in the long term. Riyadh in Saudi Arabia and Melbourne in Australia were chosen as case studies. Riyadh is a car-dependent city with limited public transport, whereas Melbourne has good public transport but an increase in car dependence. Aggregate level land-use data and disaggregate level individual, household, and journey-to-work data are used to determine the effects of land use and socioeconomic factors on mode choice. The model results determined that urban sprawl is the main factor that affects mode choice, income, and house type.
Keywords: Socioeconomic, land use, mode choice, multinomial logit and nested logit.
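As a pointer to the model family named above, the multinomial logit assigns each mode a probability proportional to the exponential of its systematic utility; the sketch below uses made-up travel times, costs and coefficients purely for illustration (the nested logit additionally groups correlated alternatives into nests).

```python
import numpy as np

def mode_choice_probabilities(utilities):
    """Multinomial logit: P(mode i) = exp(V_i) / sum_j exp(V_j)."""
    v = np.asarray(utilities, dtype=float)
    e = np.exp(v - v.max())          # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical systematic utilities for one commuter (illustrative coefficients only):
# V = beta_time * travel_time + beta_cost * cost + mode-specific constant
beta_time, beta_cost = -0.05, -0.10
modes = {"car":  beta_time * 25 + beta_cost * 4.0 + 1.2,
         "bus":  beta_time * 45 + beta_cost * 2.0 + 0.0,
         "walk": beta_time * 70 + beta_cost * 0.0 - 0.5}

probs = mode_choice_probabilities(list(modes.values()))
for mode, p in zip(modes, probs):
    print(f"{mode}: {p:.2f}")
```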
122 Supplier Selection Using Sustainable Criteria in Sustainable Supply Chain Management
Authors: Richa Grover, Rahul Grover, V. Balaji Rao, Kavish Kejriwal
Abstract:
Selection of suppliers is a crucial problem in supply chain management. On top of that, sustainable supplier selection is the biggest challenge for organizations. Environmental protection and social problems have been of concern to society in recent years, and traditional supplier selection does not consider these factors; therefore, this research work focuses on introducing sustainable criteria into the structure of supplier selection criteria. Sustainable Supply Chain Management (SSCM) is the management and administration of material, information, and money flows, as well as coordination among businesses along the supply chain. All three dimensions - economic, environmental, and social - of sustainable development need to be taken care of. The purpose of this research is to maximize supply chain profitability, maximize the social well-being of the supply chain and minimize environmental impacts. The problem statement is the selection of suppliers in a sustainable supply chain network by ranking them against the identified sustainable criteria. The aim of this research is twofold: to find out what sustainable parameters can be applied to the supply chain, and to determine how these parameters can effectively be used in supplier selection. Multi-criteria decision making (MCDM) tools will be used to rank both criteria and suppliers. AHP analysis will be used to find ratings for the identified criteria; it is a technique used for efficient decision making. TOPSIS will be used to find ratings for suppliers and then rank them. TOPSIS is an MCDM problem-solving method based on the principle that the chosen option should have the maximum distance from the negative ideal solution (NIS) and the minimum distance from the ideal solution.
Keywords: Sustainable supply chain management, supplier selection, MCDM tools, AHP analysis, TOPSIS method.
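A compact sketch of the TOPSIS ranking step described above, with made-up supplier scores and weights (in the study the weights would come from the AHP stage); the closeness coefficient is larger for alternatives far from the negative ideal and close to the ideal solution.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: alternatives x criteria scores; weights: criteria weights (sum to 1);
    benefit: True for criteria to maximize, False for criteria to minimize."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))        # vector normalization
    v = norm * w                                    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1)) # distance to the ideal solution
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))  # distance to the negative ideal
    return d_neg / (d_pos + d_neg)                  # closeness coefficient (higher = better)

# Hypothetical supplier scores: cost and CO2 emissions (minimize), quality and social score (maximize).
scores = [[60, 3.1, 8, 7],
          [55, 4.0, 7, 9],
          [70, 2.5, 9, 6]]
weights = [0.3, 0.25, 0.25, 0.2]     # e.g., obtained from an AHP pairwise comparison
benefit = np.array([False, False, True, True])
closeness = topsis(scores, weights, benefit)
print("ranking (best first):", np.argsort(-closeness))
```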
121 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.
120 Tool Wear of Metal Matrix Composite 10wt% AlN Reinforcement Using TiB2 Cutting Tool
Authors: M. S. Said, J. A. Ghani, Che Hassan C. H., N. N. Wan, M. A. Selamat, R. Othman
Abstract:
Metal matrix composites (MMCs) attract considerable attention as a result of their ability to provide high strength, high modulus, high toughness, good impact properties, improved wear resistance and good corrosion resistance compared to the unreinforced alloy. Aluminium-silicon (Al/Si) alloy MMCs have been widely used in various industrial sectors such as transportation, domestic equipment, aerospace, military, construction, etc. The aluminium-silicon alloy studied here is an MMC that has been reinforced with aluminium nitride (AlN) particles and has become a new-generation material used in the automotive and aerospace sectors. AlN is one of the advanced materials with bright prospects for the future, since it offers features such as light weight, high strength, and high hardness and stiffness. However, the high degree of ceramic particle reinforcement and the irregular nature of the particles distributed in the matrix material, which contribute to its low density, are the main problems that lead to difficulties in the machining process. This paper examines the tool wear when milling AlSi/AlN metal matrix composite using a TiB2 (titanium diboride) coated carbide cutting tool. The volume of the AlN reinforcement particles was 10%, and the milling process was carried out under dry cutting conditions. The TiB2 coated carbide insert parameters used were cutting speeds of 230, 300 and 370 m/min, a feed rate of 0.8 mm/tooth and a depth of cut (DoC) of 0.4 mm. The Sometech SV-35 video microscope system was used to quantify the tool wear. The results showed that tool life increases with cutting speed; the cutting speed of 370 m/min, with a feed rate of 0.8 mm/tooth and DoC of 0.4 mm, constituted the optimum condition for longer tool life, which lasted 123.2 min. Meanwhile, at the medium cutting speed of 300 m/min, with a feed rate of 0.8 mm/tooth and depth of cut of 0.4 mm, the tool life lasted 119.86 min, while at the low cutting speed it lasted 119.66 min. The high cutting speed gives the best parameters for cutting AlSi/AlN MMC material. The results will help manufacturers in the machining of AlSi/AlN MMC materials.
Keywords: AlSi/AlN metal matrix composite, milling process, tool wear, TiB2 coated cemented carbide tool.