Search results for: prewitt edge detection algorithm

3806 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power unit outputs across all members of the power system network such that the total fuel cost is minimized while the operating limits are satisfied over all dispatch phases. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method, but it suffers the drawbacks common to gradient methods: it can become trapped at local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have also been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator test cases. The obtained results are compared to existing results based on QP, GAs and PSO. They show that differential evolution is superior in obtaining a combination of power loads that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation loads and to handle the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission and maximize the reliability of the power provided to the customers.
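
To make the DE loop concrete, the following is a minimal sketch of the DE/rand/1/bin scheme applied to a toy three-generator dispatch with quadratic fuel-cost curves; the cost coefficients, limits and demand are illustrative stand-ins, not the paper's test data, and the load balance is enforced with a simple penalty term.

```python
import numpy as np

# Illustrative quadratic fuel-cost data: cost_i(P) = a_i + b_i*P + c_i*P^2
# (coefficients, limits and demand are made up for demonstration)
a = np.array([561.0, 310.0, 78.0])
b = np.array([7.92, 7.85, 7.97])
c = np.array([0.001562, 0.00194, 0.00482])
p_min = np.array([100.0, 100.0, 50.0])
p_max = np.array([600.0, 400.0, 200.0])
demand = 850.0  # MW to be dispatched

def total_cost(p):
    """Fuel cost plus a quadratic penalty for violating the load balance."""
    violation = abs(p.sum() - demand)
    return (a + b * p + c * p**2).sum() + 1e4 * violation**2

rng = np.random.default_rng(0)
NP, F, CR, GENS = 30, 0.8, 0.9, 500      # population, scale factor, crossover rate
pop = rng.uniform(p_min, p_max, size=(NP, 3))

for _ in range(GENS):
    for i in range(NP):
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), p_min, p_max)
        cross = rng.random(3) < CR
        cross[rng.integers(3)] = True     # guarantee at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if total_cost(trial) < total_cost(pop[i]):
            pop[i] = trial                # greedy one-to-one selection

best = min(pop, key=total_cost)
print(best, total_cost(best))
```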

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 284
3805 Vibro-Acoustic Modulation for Crack Detection in Windmill Blades

Authors: Abdullah Alnutayfat, Alexander Sutin

Abstract:

One of the most important types of renewable energy resources is wind energy, which can be produced by wind turbines. The blades of a wind turbine are exposed to harsh environmental loads, and blade failure and maintenance cost are significant issues for the wind power industry. One reliable method for blade inspection is vibroacoustic structural health monitoring (SHM), which examines information obtained from the structural vibrations of the blade. However, classical vibroacoustic SHM techniques are based on comparing the structural vibration of intact and damaged structures, which places a practical limit on their use. Methods for nonlinear vibroacoustic SHM are more sensitive to damage and cracking and do not need to be compared against data from the intact structure. This paper presents the Vibro-Acoustic Modulation (VAM) method, based on the modulation of a high-frequency probe wave by the low-frequency load (pump wave) produced by blade rotation. The blade rotation alternates bending stress due to gravity, leading to crack size variations and hence variations in the blade resonance frequency. This method can be combined with the classical SHM vibration method, in which the blade is excited by piezoceramic actuator patches bonded to the blade and the vibration response is received by another piezoceramic sensor. The VAM modification of this method analyzes the spectrum of the detected signal and its sideband components. We model VAM as a simple mechanical oscillator whose parameters (resonance frequency and damping) vary with the low-frequency blade rotation. This model uses the blade vibration parameters and the influence of cracks on the blade resonance properties reported in previous research to predict the modulation index (MI).
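
As a rough illustration of the quantity the model predicts, the snippet below estimates a modulation index from the sidebands that a pump at the rotation frequency imprints on the probe line. The signal is synthetic, and the MI convention used (summed sideband amplitude over carrier amplitude) is one common choice rather than necessarily the authors' exact formulation.

```python
import numpy as np

fs = 10_000.0                        # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
f_probe, f_pump = 1000.0, 5.0        # probe (carrier) and pump (rotation) frequencies

# Synthetic detected signal: probe amplitude weakly modulated at the pump
# frequency, mimicking crack "breathing" once per blade revolution.
x = (1 + 0.03 * np.sin(2 * np.pi * f_pump * t)) * np.sin(2 * np.pi * f_probe * t)

spec = np.abs(np.fft.rfft(x)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp_at(f):
    return spec[np.argmin(np.abs(freqs - f))]

carrier = amp_at(f_probe)
sidebands = amp_at(f_probe - f_pump) + amp_at(f_probe + f_pump)
mi = sidebands / carrier             # summed-sideband-over-carrier convention
print(f"MI = {mi:.4f}")              # recovers ~0.03 for the synthetic signal
```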

Keywords: wind turbine blades, damage detection, vibro-acoustic structural health monitoring, vibro-acoustic modulation

Procedia PDF Downloads 89
3804 Resolution and Experimental Validation of the Asymptotic Model of a Viscous Laminar Supersonic Flow around a Thin Airfoil

Authors: Eddegdag Nasser, Naamane Azzeddine, Radouani Mohammed, Ensam Meknes

Abstract:

In this study, we are interested in the asymptotic modeling of the two-dimensional stationary supersonic flow of a viscous compressible fluid around a wing airfoil. The aim of this article is to solve the partial differential equations of the flow far from the leading edge and near the wall using the triple-deck technique, which restores precision in accordance with the principle of least degeneracy. To validate our theoretical model, the obtained results are compared with experimental results. The comparison shows that the predictions of our model are quantitatively acceptable with respect to the experiments. The experimental study was conducted using the AF300 supersonic wind tunnel and a reduced NACA airfoil model with two pressure taps on the extrados. In this experiment, we considered an incident upstream supersonic Mach number over a dissymmetric NACA airfoil wing. The validation and the accuracy of the results support our model.
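
For orientation only: the leading-order inviscid result that viscous asymptotic models of this kind refine is linearized (Ackeret) supersonic thin-airfoil theory, in which the surface pressure coefficient depends only on the local slope and the Mach number. The sketch below evaluates that textbook formula for illustrative numbers; it is not the authors' triple-deck model.

```python
import numpy as np

def ackeret_cp(theta, mach):
    """Linearized supersonic pressure coefficient: Cp = 2*theta / sqrt(M^2 - 1).
    theta is the local surface inclination (rad), positive when the surface
    turns into the flow; valid for thin airfoils, small angles and M > 1."""
    return 2.0 * theta / np.sqrt(mach**2 - 1.0)

# Illustrative numbers only (not the AF300 test conditions):
mach = 2.0
theta = np.radians(3.0)                        # 3 degree local inclination
print(f"Cp = {ackeret_cp(theta, mach):.4f}")   # ~0.0605
```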

Keywords: supersonic, viscous, triple deck technique, asymptotic methods, AF300 supersonic wind tunnel, reduced airfoil model

Procedia PDF Downloads 245
3803 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of the storm water drainage network. The design of a storm water network is a costly exercise, so least-cost design assumes significance, particularly when the available funds are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components such as open or closed conduits, storage units and pumps. In this paper, a methodology for least-cost design of storm water drainage systems is proposed. It consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's Storm Water Management Model (SWMM), which is linked with a Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of the network while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open-conduit-based storm water networks.
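
A minimal sketch of the simulation-optimization coupling is given below: a GA searches over conduit sectional areas, and each candidate network is scored by construction cost plus a penalty derived from a hydraulic simulation. Here `run_swmm_and_get_flooding` is a hypothetical stand-in for a call into SWMM (for example, by rewriting the input file and re-running the engine), not an actual SWMM binding; all numbers are illustrative.

```python
import random

N_CONDUITS = 5
A_MIN, A_MAX = 0.2, 4.0        # feasible sectional areas, m^2 (illustrative)
UNIT_COST = 120.0              # cost per unit section area (illustrative)

def run_swmm_and_get_flooding(areas):
    """Hypothetical stand-in for a SWMM run: in the actual framework this
    would write the candidate areas into the SWMM input file, run the
    simulation and return total node flooding. Here a crude proxy simply
    penalizes undersized sections."""
    return sum(max(0.0, 1.5 - a) for a in areas)

def fitness(areas):
    cost = UNIT_COST * sum(areas)
    return cost + 1e4 * run_swmm_and_get_flooding(areas)  # penalty keeps runoff conveyed

def crossover(p1, p2):
    cut = random.randrange(1, N_CONDUITS)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.1):
    return [min(A_MAX, max(A_MIN, a + random.gauss(0, 0.2)))
            if random.random() < rate else a for a in ind]

random.seed(1)
pop = [[random.uniform(A_MIN, A_MAX) for _ in range(N_CONDUITS)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:20]                      # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]

best = min(pop, key=fitness)
print(best, fitness(best))
```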

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 255
3802 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing, where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy and impedance spectroscopy. In this presentation, an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in biosensing experiments, use has been made of computational and statistical methods to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas' disease and Leishmaniasis. Optimization of biosensing may include the combination of another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
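
A minimal numpy sketch of Sammon's mapping by plain gradient descent is shown below (the original method uses a second-order update); the input matrix is a random stand-in for biosensor feature vectors such as impedance spectra.

```python
import numpy as np

def sammon(X, dims=2, n_iter=300, lr=0.1, eps=1e-9):
    """Project X (n_samples x n_features) to `dims` dimensions by minimizing
    Sammon's stress  E = (1 / sum D) * sum (D - d)^2 / D  over pairwise
    distances D (input space) and d (projection)."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None], axis=-1)
    D[D < eps] = eps
    scale = 1.0 / D[np.triu_indices(n, 1)].sum()
    Y = np.random.default_rng(0).normal(scale=1e-2, size=(n, dims))
    for _ in range(n_iter):
        d = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
        np.fill_diagonal(d, 1.0)            # avoid divide-by-zero on the diagonal
        W = (D - d) / (D * d)
        np.fill_diagonal(W, 0.0)
        grad = -2.0 * scale * (W[..., None] * (Y[:, None] - Y[None])).sum(axis=1)
        Y -= lr * grad                      # plain gradient-descent step
    return Y

# e.g. X = matrix of biosensor features; here a random stand-in:
Y = sammon(np.random.default_rng(1).normal(size=(30, 10)))
print(Y.shape)   # (30, 2)
```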

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 342
3801 'Sea Power: Concept, Influence and Securitization'; the Nigerian Navy's Role in a Developing State like Nigeria

Authors: William Abiodun Duyile

Abstract:

It is common knowledge that food has always been drawn from the sea, that energy can be found underneath it and that, to a growing extent, other mineral resources come from the sea spaces. It is the importance of the sea and of the sea lines of communication to littoral nations that has made concepts such as sea power and naval power significant to them. The study relied on documentary data, sourced from government annual departmental reports, newspapers and correspondence. The secondary sources used were subjected to internal and external criticism for authentication, and then to textual and contextual analyses. The study found that the differential level of seamanship among states defined their relationship: it was sea power that gave some states an edge over others, and over the ages sea power has been core to the development of states and empires. The study found that the Nigerian Navy was central to Nigeria's capture of the littoral areas of Biafra, such as Bonny, Port-Harcourt and Calabar; this was also an important turning point of the Nigerian civil war, since by it Biafra became landlocked. The research identifies succinctly the Nigerian Navy's contribution to the security and development of the Nigerian state.

Keywords: sea power, naval power, landlocked states, warship

Procedia PDF Downloads 143
3800 Physicochemical Characterization of Asphalt Ridge Froth Bitumen

Authors: Nader Nciri, Suil Song, Namho Kim, Namjun Cho

Abstract:

Properties and compositions of bitumen and bitumen-derived liquids have significant influence on the selection of recovery, upgrading and refining processes, and optimal process conditions can often be directly related to these properties. The end uses of bitumen and bitumen products are thus related to their compositions. Because it is not possible to conduct a complete analysis of the molecular structure of bitumen, characterization must be made in other terms. The present paper focuses on the physicochemical analysis of two different types of bitumen, chosen according to the source material (oil sand versus crude petroleum) and the production process. The aim of this study is to determine both the effect of manufacturing on the chemical species and the chemical organization as a function of the type of bitumen sample. To obtain information on bitumen chemistry, elemental analysis (C, H, N, S, and O), heavy metal (Ni, V) concentrations, IATROSCAN chromatography (thin layer chromatography-flame ionization detection), FTIR spectroscopy, and 1H NMR spectroscopy have all been used. The characterization includes information about the major compound types (saturates, aromatics, resins and asphaltenes), which can be compared with similar data for other bitumens and, more importantly, can be correlated with data from petroleum samples whose refining characteristics are known. Examination of Asphalt Ridge froth bitumen showed that it differs significantly from representative petroleum pitches, principally in its nonhydrocarbon content, heavy metal content and aromatic compounds. Where possible, properties and composition were related to recovery and refining processes. This information is important because of the effects that composition has on recovery and processing reactions.

Keywords: froth bitumen, oil sand, asphalt ridge, petroleum pitch, thin layer chromatography-flame ionization detection, infrared spectroscopy, 1H nuclear magnetic resonance spectroscopy

Procedia PDF Downloads 431
3799 Technical Parameters Evaluation for Caps to Apucarana/Parana - Brazil APL

Authors: Cruz, G. P., Nagamatsu, R. N., Scacchetti, F. A. P., Merlin, F. K.

Abstract:

This study aims to assess a set of technical parameters that ensure product quality for the companies that produce caps in the Apucarana/PR APL, the city that produces most Brazilian caps, in order to verify the potential of Brazilian caps to compete with international brands recognized as the standard of excellence in product quality. The technical parameters were established from Brazilian textile standards (ABNT): six parameters in total, giving eight tests for cotton caps. For the evaluation, a leading brand recognized worldwide (based on its sales volume in dollars) was used as the reference for comparison with three companies of the Apucarana APL. The results showed that in none of the eight tests did the Apucarana companies perform better than the competitor: they obtained the same results in three tests and lower performance in five. Given these values, it is concluded that local caps are not far from reaching the quality of the leading brand. It is recommended that the APL companies use these parameters to evaluate their products, using this information to support decision-making aimed at improving both product design and the production process, making faster international recognition feasible. Thus, they may gain an edge over their main competitor.

Keywords: technical parameters, making caps, quality, evaluation

Procedia PDF Downloads 348
3798 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as well as in other parts of the world, is highly stressed due to factors such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, leading in extreme cases to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases from images taken by farmers with their smartphones. The work leads to a smart assistant, built on analytics and big data, that helps farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). Models are built both for binary classification (healthy or not healthy) and for multi-class classification (identifying which disease). Transfer learning is used to adapt weights learnt on the ImageNet dataset to crop diseases, which reduces the number of epochs needed. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring improve accuracy on images taken in the field. Models built using a combination of these techniques are more robust for real-world deployment. Our model is validated on the tomato crop, which in India is affected by ten different diseases; it achieves an accuracy of more than 95% in correctly classifying them. The main contribution of our research is a personal assistant for farmers for managing plant disease; although the model was validated on tomato, it can easily be extended to other crops. Advances in computing and the availability of large datasets have enabled the success of deep learning in computer vision, natural language processing, image recognition and other fields. With these robust models and high smartphone penetration, implementation is feasible, resulting in timely advice to farmers, increased farm income and reduced input costs.
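
A minimal Keras sketch of the ingredients named above follows: an ImageNet-pretrained backbone reused via transfer learning, dropout, a softmax head, and rotation/zoom/shift augmentation. The choice of MobileNetV2, the layer sizes and the 10-class head are illustrative assumptions, not the authors' published architecture.

```python
import tensorflow as tf

NUM_CLASSES = 10  # e.g., tomato diseases; illustrative

# ImageNet-pretrained backbone, reused as a frozen feature extractor
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),                  # guards against overfitting
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation of field photos: rotation, zoom, shift (as in the abstract)
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=30, zoom_range=0.2,
    width_shift_range=0.1, height_shift_range=0.1,
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input)

# "train_dir" would hold one sub-folder of images per disease class:
# model.fit(datagen.flow_from_directory("train_dir", target_size=(224, 224)),
#           epochs=10)
```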

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 123
3797 Isolation and Molecular Detection of Marek’s Disease Virus from Outbreak Cases in Chicken in South Western Ethiopia

Authors: Abdela Bulbula

Abstract:

Background: Marek's disease virus causes a devastating infection with high morbidity and mortality in chickens in Ethiopia. Methods: The current study was conducted from March to November 2021 with the general objective of performing antemortem and postmortem examination, isolation, and molecular detection of Marek's disease virus from outbreak cases in southwestern Ethiopia. Based on outbreak information reported from the study sites, namely the towns of Bedelle, Yayo, and Bonga in southwestern Ethiopia, 50 sick chickens were sampled. Both backyard and intensive farming systems were included in the sampling, and priority was given to chickens showing clinical signs characteristic of Marek's disease. Results: On clinical examination, paralysis of the legs and wings, gray eye, loss of weight, difficulty in breathing, and depression were recorded in all sampled chickens, and death of diseased chickens was observed. In addition, enlargement of the spleen and gross lesions of the liver and heart were recorded during postmortem examination. Death of infected chickens was observed in both vaccinated and non-vaccinated flocks. Out of 50 pooled feather follicle samples, Marek's disease virus was isolated from 14/50 (28%) by cell culture, and out of six tissue samples, the virus was isolated from 5/6 (83.3%). By real-time polymerase chain reaction targeting the Meq gene, Marek's disease virus was detected in 18/50 feather follicle samples, i.e., 36% of sampled chickens. Conclusion: The current study showed that the Marek's disease virus circulating in southwestern Ethiopia is the oncogenic Gallid herpesvirus-2 (serotype 1). Further research on the molecular characterization of the circulating virus in this and other regions is recommended for effective control of the disease through vaccination.

Keywords: Ethiopia, Marek's disease, isolation, molecular detection

Procedia PDF Downloads 76
3796 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases ('digenic traits') are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with the two genotypes in a pattern originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, 'X → Y', with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We used fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. Single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls; thus, it seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties but are very different in other respects; the main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
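
A minimal sketch of this pipeline using the mlxtend implementation of fpgrowth follows (the abstract does not say which fpgrowth implementation was used, and the permutation-based significance step is omitted): genotypes become one-hot items, and mined rules are filtered to those whose consequent is case status.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth, association_rules

# One-hot items: genotype at each variant, plus the case label.
# Toy data: 6 individuals, two variants; real data would have far more columns.
df = pd.DataFrame({
    "v1=AG": [1, 1, 1, 0, 0, 1],
    "v2=CT": [1, 1, 1, 0, 1, 0],
    "case":  [1, 1, 1, 0, 0, 0],
}).astype(bool)

itemsets = fpgrowth(df, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.9)

# Keep rules X -> {case}: genotype patterns predictive of disease status.
digenic = rules[rules["consequents"] == frozenset({"case"})]
print(digenic[["antecedents", "confidence", "support"]])
```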

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 128
3795 Dispersion-Less All Reflective Split and Delay Unit for Ultrafast Metrology

Authors: Akansha Tyagi, Mehar S. Sidhu, Ankur Mandal, Sanjay Kapoor, Sunil Dahiya, Jan M. Rost, Thomas Pfeifer, Kamal P. Singh

Abstract:

An all-reflective split and delay unit is designed for dispersion-free measurement of broadband ultrashort pulses, using a pair of reflective knife-edge prisms for splitting and recombining the measured pulse. It is based on symmetrical wavefront splitting of the measured pulse into two separate arms, allowing both split parts to be shaped independently. We have validated our delay line with NIR femtosecond pulse measurement centered at 800 nm using second-harmonic interferometric frequency-resolved optical gating (SH-IFROG). The delay line is compact, easy to align, and provides attosecond stability and precision, making it versatile for a wide range of ultrafast measurements. We envision that the present delay line will find applications in IR-IR control of high-harmonic generation (HHG) and in attosecond IR-XUV pump-probe measurements with solids and gases, providing attosecond resolution over a wide delay range.

Keywords: HHG, nonlinear optics, pump-probe spectroscopy, ultrafast metrology

Procedia PDF Downloads 207
3794 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and, as such, can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of finite element size on the accuracy of the calculated SIF was also studied. The final part of the paper compares the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good agreement with the results obtained from those analytical expressions.
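
The one-step VCCT bookkeeping for a mode-I crack in a 2D model with 4-node elements can be sketched as follows: the energy release rate comes from the crack-tip nodal force and the opening displacement of the node pair one element behind the tip, and the SIF follows from the relation between G and K. The input values are illustrative, not the paper's FE results.

```python
import math

def vcct_mode_I_sif(F_y, delta_v, da, thickness, E, plane_stress=True, nu=0.3):
    """VCCT estimate of K_I from 2D finite element results.
    F_y      : crack-tip nodal force normal to the crack plane (N)
    delta_v  : relative opening displacement of the node pair one
               element length behind the tip (m)
    da       : element length along the crack = virtual crack extension (m)
    G = F_y * delta_v / (2 * da * thickness);  K_I = sqrt(G * E')  with
    E' = E for plane stress and E / (1 - nu^2) for plane strain."""
    G = F_y * delta_v / (2.0 * da * thickness)
    E_eff = E if plane_stress else E / (1.0 - nu**2)
    return math.sqrt(G * E_eff)

# Illustrative FE output for a steel specimen (values are made up):
K_I = vcct_mode_I_sif(F_y=1.2e3, delta_v=4.0e-6, da=0.5e-3,
                      thickness=5e-3, E=210e9)
print(f"K_I = {K_I / 1e6:.1f} MPa*sqrt(m)")
```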

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 309
3793 An Experience of HIV Testing and Counseling Services at a Tertiary Care Center of Bangladesh

Authors: S. M. Rashed Ul Islam, Shahina Tabassum, Afsana Anwar Miti

Abstract:

Objective: An HIV testing and counseling center (HTC) is an important component of HIV/AIDS detection, prevention and control interventions. The service was first initiated at the Department of Virology, Bangabandhu Sheikh Mujib Medical University (BSMMU), after the first case detection in 1989. The present study aimed to describe the demographic profile of attendees who tested HIV positive. Methods: The study was carried out among 219 HIV-positive cases detected through screening at the Department of Virology of BSMMU during the years 2012-2016. Data were collected through a pre-structured written questionnaire during the counseling session, expressed as frequencies and percentages, and analyzed using the SPSS v20.0 program. Results: Of the 219 HIV cases detected, 77.6% were males and 22.4% were females, with a mean age (mean±SD) of 35.46±9.46 years. Among them, 70.7% belonged to the 26-45 age group, representing the sexually active ages. The majority of the cases were married (86.3%), and 49.8% had a primary level of education, whereas 8.7% were illiterate. Nearly 42% of cases were referred from Chittagong division (the south-east part of the country), followed by Dhaka division (35.6%). The bulk of the study population admitted to involvement in high-risk behaviour in the past (90%), and 42% of them had worked overseas. Pearson chi-square (χ2) analysis revealed a significant relationship of gender with marital status (χ2=7.88 at 2% level) and occupation status (χ2=120.48 at 6% level); however, no association was observed with risk behaviour or educational status. Recommendations: HIV risk behaviour was found to be a prime source of HIV infection among the study population. There is therefore a need for health education and awareness programs to bring about behavioural changes and halt the yearly increase of new cases in the country, with special attention to overseas workers regarding HIV/AIDS risk and safety.

Keywords: Bangladesh, health education, HIV testing and counseling (HTC), HIV/AIDS, risk behavior

Procedia PDF Downloads 299
3792 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configurations to minimize the total cost (CT) of the entire network. These facilities can be, but are not limited to, manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs). To address this problem, three major tasks are undertaken. First, a mixed integer non-linear programming (MINP) mathematical model is developed. Then, the system's behaviour under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainty in finding the optimal solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of genetic operators on the obtained optimal/near-optimal solution from the simulation model is discussed in Part III.

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 320
3791 Introduction to Multi-Agent Deep Deterministic Policy Gradient

Authors: Xu Jie

Abstract:

As a key network security method, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workload. Their complexity and dynamics also make it difficult for traditional static security policies to cope with ever-changing cyber threats and environments. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job-flow scheduling problem and applying a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
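
Although the abstract gives no implementation details, the MADDPG structure named in the title (decentralized actors on local observations, per-agent centralized critics on joint observations and actions) can be sketched as below in PyTorch. Sizes are illustrative, and target networks, replay buffering, exploration noise and termination handling are all omitted for brevity.

```python
import torch
import torch.nn as nn

# Decentralized actors (local observations) and per-agent centralized critics
# (joint observations and joint actions). All sizes are illustrative.
N_AGENTS, OBS, ACT = 3, 8, 2

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

actors = [mlp(OBS, ACT) for _ in range(N_AGENTS)]
critics = [mlp(N_AGENTS * (OBS + ACT), 1) for _ in range(N_AGENTS)]
actor_opt = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]
critic_opt = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in critics]

def update(obs, act, rew, next_obs, gamma=0.95):
    """One simplified MADDPG step. Shapes: obs/next_obs (B, N, OBS),
    act (B, N, ACT), rew (B, N). Target networks are omitted."""
    B = obs.shape[0]
    for i in range(N_AGENTS):
        # Critic i: TD target built from the joint next state and next actions.
        with torch.no_grad():
            nxt = torch.cat([actors[j](next_obs[:, j]) for j in range(N_AGENTS)], 1)
            target = rew[:, i:i + 1] + gamma * critics[i](
                torch.cat([next_obs.reshape(B, -1), nxt], 1))
        q = critics[i](torch.cat([obs.reshape(B, -1), act.reshape(B, -1)], 1))
        critic_loss = ((q - target) ** 2).mean()
        critic_opt[i].zero_grad(); critic_loss.backward(); critic_opt[i].step()

        # Actor i: deterministic policy gradient through critic i, replacing
        # only agent i's action in the joint action (only the actor is stepped).
        acts = [act[:, j] for j in range(N_AGENTS)]
        acts[i] = actors[i](obs[:, i])
        actor_loss = -critics[i](torch.cat([obs.reshape(B, -1)] + acts, 1)).mean()
        actor_opt[i].zero_grad(); actor_loss.backward(); actor_opt[i].step()

B = 32  # toy random batch just to exercise the update
update(torch.randn(B, N_AGENTS, OBS), torch.randn(B, N_AGENTS, ACT),
       torch.randn(B, N_AGENTS), torch.randn(B, N_AGENTS, OBS))
```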

Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents

Procedia PDF Downloads 31
3790 Analytical Model of Multiphase Machines Under Electrical Faults: Application on Dual Stator Asynchronous Machine

Authors: Nacera Yassa, Abdelmalek Saidoune, Ghania Ouadfel, Hamza Houassine

Abstract:

The rapid advancement in electrical technologies has underscored the increasing importance of multiphase machines across various industrial sectors. These machines offer significant advantages in terms of efficiency, compactness, and reliability compared to their single-phase counterparts. However, early detection and diagnosis of electrical faults remain critical challenges to ensure the durability and safety of these complex systems. This paper presents an advanced analytical model for multiphase machines, with a particular focus on dual stator asynchronous machines. The primary objective is to develop a robust diagnostic tool capable of effectively detecting and locating electrical faults in these machines, including short circuits, winding faults, and voltage imbalances. The proposed methodology relies on an analytical approach combining electrical machine theory, modeling of magnetic and electrical circuits, and advanced signal analysis techniques. By employing detailed analytical equations, the developed model accurately simulates the behavior of multiphase machines in the presence of electrical faults. The effectiveness of the proposed model is demonstrated through a series of case studies and numerical simulations. In particular, special attention is given to analyzing the dynamic behavior of machines under different types of faults, as well as optimizing diagnostic and recovery strategies. The obtained results pave the way for new advancements in the field of multiphase machine diagnostics, with potential applications in various sectors such as automotive, aerospace, and renewable energies. By providing precise and reliable tools for early fault detection, this research contributes to improving the reliability and durability of complex electrical systems while reducing maintenance and operation costs.

Keywords: faults, diagnosis, modelling, multiphase machine

Procedia PDF Downloads 70
3789 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate with other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving amid pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work: researchers from mechanical engineering, electrical engineering and computer science are working together to attack the problem from different perspectives (hardware, software and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location; users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration in which the vertices represent landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm, and D* Lite will be explored to efficiently recompute the path when there are changes to the map. CATE shall avoid static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route, and we will build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work in GPS-denied situations. CATE relies on GPS for its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that fuses data from multiple sensors (such as GPS, IMU, and odometry) to increase the confidence of the localization. We also noticed that GPS signals can easily be degraded or blocked on campus by high-rise buildings or trees; the UKF also helps here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
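
A minimal sketch of the default planner described above is given below: A* over a spatial graph of campus landmarks with straight-line distance as the admissible heuristic. The node names, coordinates and edges are illustrative, not CATE's actual map.

```python
import heapq, math

# Spatial graph: vertices are landmarks (with map coordinates used by the
# heuristic), edges are walkable path segments. All values are illustrative.
coords = {"lot_B": (0, 0), "library": (3, 1), "fountain": (2, 4), "union": (6, 3)}
edges = {"lot_B": ["library", "fountain"], "library": ["lot_B", "union"],
         "fountain": ["lot_B", "union"], "union": ["library", "fountain"]}

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    open_set = [(dist(start, goal), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in edges[node]:
            g2 = g + dist(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_set, (g2 + dist(nxt, goal), g2, nxt, path + [nxt]))
    return None

print(a_star("lot_B", "union"))   # e.g. ['lot_B', 'library', 'union']
```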

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 149
3788 Practice of Supply Chain Management in Local SMEs

Authors: Oualid Kherbach, Marian Liviu Mocan, Amine Ghoumrassi, Cristian Dumitrache

Abstract:

Globalization, economic development, e-business, and the introduction of new technologies create new challenges for all organizations, particularly for small and medium enterprises (SMEs). Many studies on supply chain management (SCM) focus on large companies with global operations employing high-level information technology, leaving a gap in our knowledge of how SMEs use and practice supply chain management. In this scenario, successful SCM practices can give SMEs an edge over their competitors. However, SMEs in Romania and the Balkan countries face problems in SCM implementation and practice due to a lack of resources and direction. The objectives of this research are to highlight the supply chain management practices of the small and medium enterprise sector in Romania and to understand how SMEs manage and use SCM. The study checks for the potential existence of systematic differences between small and medium-sized businesses with regard to supply chain management practices, and shows how the application of supply management has contributed to improving performance and increasing the profitability of companies, for example by increasing market share and improving the level of client satisfaction.

Keywords: globalization, small and medium enterprises, supply chain management, practices

Procedia PDF Downloads 374
3787 An Attentional Bi-Stream Sequence Learner (AttBiSeL) for Credit Card Fraud Detection

Authors: Mohsen Hasirian, Amir Shahab Shahabi

Abstract:

Modern societies, marked by expansive Internet connectivity and the rise of e-commerce, are now integrated with digital platforms at an unprecedented level. The efficiency, speed, and accessibility of e-commerce have garnered a substantial consumer base. Against this backdrop, electronic banking has undergone rapid proliferation within the realm of online activities. However, this growth has inadvertently given rise to an environment conducive to illicit activities, notably electronic payment fraud, posing a formidable challenge to the domain of electronic banking. A pivotal role in upholding the integrity of electronic commerce and business transactions is played by electronic fraud detection, particularly in the context of credit cards which underscores the imperative of comprehensive research in this field. To this end, our study introduces an Attentional Bi-Stream Sequence Learner (AttBiSeL) framework that leverages attention mechanisms and recurrent networks. By incorporating bidirectional recurrent layers, specifically bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, the proposed model adeptly extracts past and future transaction sequences while accounting for the temporal flow of information in both directions. Moreover, the integration of an attention mechanism accentuates specific transactions to varying degrees, as manifested in the output of the recurrent networks. The effectiveness of the proposed approach in automatic credit card fraud classification is evaluated on the European Cardholders' Fraud Dataset. Empirical results validate that the hybrid architectural paradigm presented in this study yields enhanced accuracy compared to previous studies.
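
A minimal Keras sketch of the ingredients described above follows: parallel bidirectional LSTM and GRU streams over a transaction sequence, pooled by a simple learned attention over time. The dimensions and the exact attention form are illustrative assumptions, not the authors' published architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, N_FEATURES = 10, 30  # transactions per sequence, features each (illustrative)

inp = layers.Input(shape=(SEQ_LEN, N_FEATURES))

# Two bidirectional streams read the transaction sequence in both temporal
# directions: one LSTM-based, one GRU-based.
lstm_stream = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inp)
gru_stream = layers.Bidirectional(layers.GRU(64, return_sequences=True))(inp)
h = layers.Concatenate()([lstm_stream, gru_stream])     # (batch, SEQ_LEN, 256)

# Simple learned attention: score each transaction, softmax over time,
# then take the attention-weighted sum of the hidden states.
scores = layers.Dense(1, activation="tanh")(h)          # (batch, SEQ_LEN, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Dot(axes=1)([weights, h])              # (batch, 1, 256)
context = layers.Flatten()(context)

out = layers.Dense(1, activation="sigmoid")(context)    # fraud / legitimate
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```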

Keywords: credit card fraud, deep learning, attention mechanism, recurrent neural networks

Procedia PDF Downloads 53
3786 Experimental Study of Damage in a Composite Structure by Vibration Analysis - Glass/Polyester

Authors: R. Abdeldjebar, B. Labbaci, L. Missoum, B. Moudden, M. Djermane

Abstract:

The basic components of a composite material make it very sensitive to damage, which calls for reliable and efficient damage detection techniques. This work focuses on the detection of damage by vibration analysis, the main objective being to exploit the dynamic response of a structure to detect and understand damage. The experimental results are compared with those predicted by numerical models to confirm the effectiveness of the approach.

Keywords: experimental, composite, vibration analysis, damage

Procedia PDF Downloads 677
3785 In-Situ Studies of Cyclohexane Oxidation Using Laser Raman Spectroscopy for the Refinement of Mechanism Based Kinetic Models

Authors: Christine Fräulin, Daniela Schurr, Hamed Shahidi Rad, Gerrit Waters, Günter Rinke, Roland Dittmeyer, Michael Nilles

Abstract:

The reaction mechanisms of many liquid-phase reactions in organic chemistry have not yet been sufficiently clarified. Process conditions of several hundred degrees Celsius and pressures up to ten megapascals complicate sampling and the determination of kinetic data, while spatially resolved in-situ measurements promise new insights. A non-invasive in-situ measurement technique has the advantages that no sample preparation is necessary, the sample mixture is unchanged before analysis, and the sampling does not interfere with the flow. The goal of our research was therefore the development of a contact-free, spatially resolved measurement technique for kinetic studies of liquid-phase reactions under process conditions. To this end, we used laser Raman spectroscopy combined with an optically transparent microchannel reactor. To demonstrate the performance of the system, we chose the oxidation of cyclohexane as the sample reaction. Cyclohexane oxidation is an economically important process: the products are intermediates for caprolactam and adipic acid, which are starting materials for polyamide 6 and 6.6 production. To maintain high selectivities of 70 to 90%, the reaction is performed in industry at a low conversion of about six percent. As Raman spectroscopy is usually very selective but not very sensitive, detecting the small product concentrations in cyclohexane oxidation is quite challenging. To meet these requirements, the optical experimental setup was optimized for good detection sensitivity in determining concentrations by laser Raman spectroscopy. With this measurement technique, spatially resolved kinetic studies of uncatalysed and homogeneously catalyzed cyclohexane oxidation were carried out to obtain details about the reaction mechanism.

Keywords: in-situ laser raman spectroscopy, space resolved kinetic measurements, homogeneous catalysis, chemistry

Procedia PDF Downloads 337
3784 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the subsample without an elevated screen pre-transplant (n=1084, on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is low, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
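
For concreteness, the snippet below shows how threshold-based screening metrics of the kind quoted (sensitivity and negative predictive value at a 0.08 cut-off) are computed from predicted probabilities; the scores and labels are synthetic stand-ins, not study data.

```python
import numpy as np

def screen_metrics(scores, labels, threshold=0.08):
    """Sensitivity and negative predictive value of a probability screen
    at a fixed threshold (0.08 mirrors the published AF-screen cut-off)."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    sensitivity = tp / (tp + fn)
    npv = tn / (tn + fn)
    return sensitivity, npv

# Synthetic example only: a rare outcome with modest score separation.
rng = np.random.default_rng(0)
labels = (rng.random(1000) < 0.06).astype(int)
scores = np.clip(rng.normal(0.05 + 0.05 * labels, 0.04), 0, 1)
sens, npv = screen_metrics(scores, labels)
print(f"sensitivity={sens:.2f}, NPV={npv:.2f}")
```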

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 140
3783 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field of biometric technology, and more and more researchers have used EEG signals as a data source for biometrics, although EEG-based biometrics also has some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster, and to all data points in the closest cluster, is determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between individual electrodes for personal authentication (p<0.01); and (3) there was no significant difference in authentication performance among feature sets (except for PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
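
A minimal sketch of the clustering-quality step follows: k-means is run for several cluster counts on a feature matrix (a random stand-in for the entropy features described above), and the mean silhouette score and per-point silhouettes are computed with scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

# Stand-in for the entropy feature matrix: one row per EEG segment, columns
# such as sample/fuzzy/approximate/spectral entropy per electrode.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 12)), rng.normal(3, 1, (50, 12))])

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))  # higher = better separated

# Per-point silhouettes show how confidently each segment was assigned:
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(silhouette_samples(X, labels)[:5])
```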

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 289
3782 Standard Protocol Selection for Acquisition of Breast Thermogram in Perspective of Early Breast Cancer Detection

Authors: Mrinal Kanti Bhowmik, Usha Rani Gogoi Jr., Anjan Kumar Ghosh, Debotosh Bhattacharjee

Abstract:

In the last few decades, breast thermography has achieved an average sensitivity and specificity of 90% for breast tumor detection. Breast thermography is a non-invasive, cost-effective, painless and radiation-free breast imaging modality that makes a significant contribution to the evaluation and diagnosis of patients suspected of having breast cancer. An abnormal breast thermogram may indicate significant biological risk for the existence or development of breast tumors, and breast thermography can detect a tumor when it is in its early stage or located in a dense breast. Infrared breast thermography is, however, very sensitive to environmental changes, which is why breast thermograms should be acquired under strictly controlled conditions following standard protocols. Several factors, such as air temperature and humidity, have to be controlled for thermal images to serve as an imperative tool for detecting breast cancer. This paper provides a detailed study of the various breast thermogram acquisition protocols adopted by different researchers. After a rigorous review of these protocols, a new standard breast thermography acquisition setup is proposed for proper and accurate capture of breast thermograms. The proposed acquisition setup is being built in the Radiology Department, Agartala Government Medical College (AGMC), Govt. of Tripura, Tripura, India. The breast thermograms are captured using a FLIR T650sc thermal camera with a thermal sensitivity of 20 mK at 30 °C. The paper highlights the importance of critical parameters of breast thermography such as thermography views, patient preparation protocols, acquisition room requirements and acquisition system requirements, and thus contributes both a detailed survey and a new, efficient approach to breast thermogram capture.

Keywords: acquisition protocol, breast cancer, breast thermography, infrared thermography

Procedia PDF Downloads 401
3781 Analysis of Nonlinear Pulse Propagation Characteristics in Semiconductor Optical Amplifier for Different Input Pulse Shapes

Authors: Suchi Barua, Narottam Das, Sven Nordholm, Mohammad Razaghi

Abstract:

This paper presents nonlinear pulse propagation characteristics for different input optical pulse shapes at various input pulse energy levels in semiconductor optical amplifiers. For the simulation of nonlinear pulse propagation, the finite-difference beam propagation method is used to solve the nonlinear Schrödinger equation. In this equation, gain spectral dynamics and gain saturation are taken into account, which depend on carrier depletion, carrier heating, spectral hole burning, group velocity dispersion, self-phase modulation and two-photon absorption. From this analysis, we obtained the output waveforms and spectra for different input pulse shapes as well as for different input energies. The results show clearly that the peak positions of the output waveforms are shifted toward the leading edge, owing to gain saturation of the SOA at higher input pulse energies. We also analyzed and compared the normalized difference of the full-width at half maximum for different input pulse shapes in the SOA.
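
The full finite-difference BPM model with the complete gain dynamics is beyond a short example, but the skeleton of NLSE pulse propagation through an SOA-like medium can be sketched with a first-order split-step Fourier scheme, a common alternative to finite differences: dispersion is applied in the frequency domain, and saturable gain plus self-phase modulation in the time domain. All coefficients are illustrative, and the detailed carrier dynamics (carrier heating, spectral hole burning, two-photon absorption) are collapsed into a single saturable-gain term.

```python
import numpy as np

N, T_WIN = 1024, 20e-12
t = np.linspace(-T_WIN / 2, T_WIN / 2, N)
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(N, dt)

A = np.exp(-t**2 / (2 * (1e-12) ** 2)).astype(complex)  # 1 ps Gaussian pulse
beta2 = -2e-26           # GVD coefficient, s^2/m (illustrative)
gamma = 10.0             # SPM coefficient, 1/(W*m) (illustrative)
g0, E_sat = 3e3, 1e-12   # unsaturated gain (1/m) and saturation energy (illustrative)
L, steps = 0.5e-3, 200   # device length and number of z-steps
dz = L / steps

for _ in range(steps):
    # Linear part: dispersion applied in the frequency domain.
    A = np.fft.ifft(np.exp(0.5j * beta2 * w**2 * dz) * np.fft.fft(A))
    # Nonlinear part: saturable gain plus self-phase modulation in time domain.
    energy = np.sum(np.abs(A) ** 2) * dt
    g = g0 / (1 + energy / E_sat)
    A *= np.exp((0.5 * g + 1j * gamma * np.abs(A) ** 2) * dz)

print(f"peak output amplitude: {np.abs(A).max():.3f}")
```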

Keywords: finite-difference beam propagation method, pulse shape, pulse propagation, semiconductor optical amplifier

Procedia PDF Downloads 612
3780 Bounds on the Laplacian Vertex PI Energy

Authors: Ezgi Kaya, A. Dilek Maden

Abstract:

A topological index is a number associated with a graph that is invariant under graph isomorphism. In theoretical chemistry, molecular structure descriptors (also called topological indices) are used for modeling physicochemical, pharmacologic, toxicologic, biological and other properties of chemical compounds. Let G be a graph with n vertices and m edges. For a given edge uv, the quantity nu(e) denotes the number of vertices closer to u than to v; the quantity nv(e) is defined analogously. The vertex PI index is defined as the sum of nu(e) + nv(e) over all edges of G. The energy of a graph is defined as the sum of the absolute values of the eigenvalues of its adjacency matrix, and the Laplacian energy of a graph is defined as the sum of the absolute values of the differences between the Laplacian eigenvalues and the average degree of G. In theoretical chemistry, the π-electron energy of a conjugated carbon molecule, computed using Hückel theory, coincides with the graph energy; hence results on graph energy assume special significance. The Laplacian matrix of a graph G weighted by the vertex PI weighting is the Laplacian vertex PI matrix, and the Laplacian vertex PI eigenvalues of a connected graph G are the eigenvalues of this matrix. In this study, the Laplacian vertex PI energy of a graph G is defined. We also give bounds for the Laplacian vertex PI energy of graphs in terms of the vertex PI index, the sum of the squares of the entries of the Laplacian vertex PI matrix, and the absolute value of the determinant of the Laplacian vertex PI matrix.
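
A minimal networkx sketch of the quantities just defined follows: each edge uv is weighted by nu(e) + nv(e), the weighted Laplacian is assembled, and an energy is formed by centering its eigenvalues on their average (the analogue of the average degree). The exact centering used in the paper's definition should be substituted where it differs.

```python
import numpy as np
import networkx as nx

def laplacian_vertex_pi_energy(G):
    """For each edge uv, weight it by n_u(e) + n_v(e) (vertices strictly
    closer to u, resp. v); build the weighted Laplacian; return the sum of
    |mu_i - trace/n| over its eigenvalues mu_i (an assumed centering,
    mirroring how Laplacian energy centers on the average degree)."""
    nodes = list(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    W = nx.Graph()
    W.add_nodes_from(nodes)
    for u, v in G.edges():
        n_u = sum(1 for w in nodes if dist[w][u] < dist[w][v])
        n_v = sum(1 for w in nodes if dist[w][v] < dist[w][u])
        W.add_edge(u, v, weight=n_u + n_v)
    L = nx.laplacian_matrix(W).toarray().astype(float)
    mu = np.linalg.eigvalsh(L)
    return np.abs(mu - L.trace() / len(nodes)).sum()

print(laplacian_vertex_pi_energy(nx.cycle_graph(6)))   # 48.0 for C6
```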

Keywords: energy, Laplacian energy, laplacian vertex PI eigenvalues, Laplacian vertex PI energy, vertex PI index

Procedia PDF Downloads 250
3779 Forecast Based on an Empirical Probability Function with an Adjusted Error Using Propagation of Error

Authors: Oscar Javier Herrera, Manuel Angel Camacho

Abstract:

This paper addresses a method of business demand forecasting based on an empirical probability function, applicable when the historical behavior of the data is random. Additionally, it presents error determination based on the numerical technique of propagation of errors. The methodology first characterizes and diagnoses the demand planning process as part of production management; new ways to predict demand through probability techniques and to calculate the associated error were then investigated using numerical methods, all based on the behavior of the data. The analysis was carried out for the specific business circumstances of a company in the communications sector, located in the city of Bogota, Colombia. In conclusion, this application made it possible to obtain the adequate stock of the products required by the company to provide its services, helping the company reduce its service time, increase the client satisfaction rate, reduce stock that has not rotated for a long time, codify its inventory, and plan reorder points for the replenishment of stock.
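
A minimal sketch of first-order (Gaussian) propagation of error is shown below for a forecast expressed as a function of uncertain inputs; the product form and the numbers are purely illustrative.

```python
import math

def propagated_sigma(partials_and_sigmas):
    """First-order propagation of error:
    sigma_f = sqrt( sum_i (df/dx_i)^2 * sigma_i^2 )  for f(x_1, ..., x_n)."""
    return math.sqrt(sum((df_dx * s) ** 2 for df_dx, s in partials_and_sigmas))

# Illustrative: forecast f = q * p (base quantity times a demand factor),
# with q = 120 +/- 8 and p = 0.9 +/- 0.05.
q, sq = 120.0, 8.0
p, sp = 0.9, 0.05
# Partial derivatives: df/dq = p, df/dp = q.
sigma_f = propagated_sigma([(p, sq), (q, sp)])
print(f"f = {q * p:.1f} +/- {sigma_f:.1f}")   # 108.0 +/- 9.4
```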

Keywords: demand forecasting, empirical distribution, propagation of error, Bogota

Procedia PDF Downloads 633
3778 An Estimating Equation for Survival Data with a Possibly Time-Varying Covariates under a Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

An estimating equation technique is an alternative to the widely used maximum likelihood methods, and it enables us to ease some of the complexity arising from the complex characteristics of time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received considerable attention from many researchers, under a semiparametric transformation model. The purpose of this article is to develop the modified estimating equation under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators were derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators for the proposed model was illustrated via simulation studies and the Stanford heart transplant data. To sum up, the bias due to left-truncation was adjusted by estimating the density function of the truncation time variable, and the effect of possibly time-varying covariates was then evaluated in some special semiparametric transformation models.

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time varying covariate

Procedia PDF Downloads 158
3777 Numerical Analysis of Internal Cooled Turbine Blade Using Conjugate Heat Transfer

Authors: Bhavesh N. Bhatt, Zozimus D. Labana

Abstract:

This work focuses on the heat transfer analysis of a turbine blade cooled by internal cooling. Conjugate heat transfer (CHT) technology allows the cooling and heat transfer of the blade to be computed effectively; the blade temperature is limited by the material's melting temperature. Using a CFD code, we analyze the blade cooling with the help of the CHT method. There are two CHT approaches: in the first, a coupled CHT method is applied in which all three domains are modeled at once; in the second, the external domain is modeled first, followed by the internal domain of the cooling channel. Ten circular cooling channels with different mass flow rates and temperature values are used as the cooling arrangement. The numerical simulation is applied to the NASA C3X turbine blade, and the computed results show good agreement with experimental data. Temperature and pressure are highest at the stagnation point on the leading edge of the blade, which faces the flow first. On the pressure side, a shock wave forms, which also causes a sudden change in the heat transfer coefficient (HTC) and other parameters. After applying internal cooling, we succeeded in reducing the metal temperature of the blade to some extent.

Keywords: gas turbine, conjugate heat transfer, NASA C3X Blade, circular film cooling channel

Procedia PDF Downloads 339