Search results for: Prewitt edge detection algorithm
4133 Diagnosis of Induction Machine Faults by DWT
Authors: Hamidreza Akbari
Abstract:
In this paper, for the detection of inclined eccentricity in an induction motor, a time–frequency analysis of the stator startup current is carried out. For this purpose, the discrete wavelet transform (DWT) is used. Data are obtained from simulations using the winding function approach. The results show the validity of the approach for detecting the fault and discriminating it from other faults.
Keywords: induction machine, fault, DWT, electric
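To make the decomposition step concrete, the following is a minimal sketch (not the authors' code) of a multi-level DWT of a simulated startup current using the PyWavelets library; the sampling rate, toy signal, wavelet choice ('db4') and decomposition level are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): multi-level DWT of a simulated
# stator startup current using PyWavelets. Signal, wavelet and level are
# illustrative assumptions.
import numpy as np
import pywt

fs = 10_000                       # assumed sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
current = np.sin(2 * np.pi * 50 * t) * np.exp(-2 * t)  # toy startup current

# Decompose into one approximation band plus detail bands
coeffs = pywt.wavedec(current, 'db4', level=5)
cA5, *details = coeffs            # details = [cD5, cD4, cD3, cD2, cD1]

# The energy of each detail band is a typical fault indicator
for i, cD in enumerate(details, start=1):
    print(f"detail level {6 - i}: energy = {np.sum(cD ** 2):.4f}")
```

A fault such as eccentricity typically alters the energy distribution across the detail bands, which is the kind of signature this analysis looks for.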
Procedia PDF Downloads 352
4132 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation using PINN
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful for studying various optical phenomena.
Keywords: deep learning, optical soliton, neural network, partial differential equation
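As an illustration of the weighted-loss idea, here is a minimal PyTorch sketch (not the authors' code); the network size, loss weights and the placeholder PDE residual are assumptions, and the real Fokas-Lenells residual and the subsequent L-BFGS stage are omitted.

```python
# Minimal PyTorch sketch (not the authors' code) of a PINN loss with
# adjustable weights. The PDE residual shown is a placeholder, not the
# Fokas-Lenells equation.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(),
                    nn.Linear(50, 50), nn.Tanh(),
                    nn.Linear(50, 2))               # outputs Re(u), Im(u)

def pde_residual(xt):
    xt.requires_grad_(True)
    u = net(xt)
    # placeholder residual du/dt (the real FLE residual would go here)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    return du[:, 1:2]

w_pde, w_data = 1.0, 10.0                           # adjustable loss weights
xt_colloc = torch.rand(1000, 2)                     # residual collocation points
xt_data = torch.rand(100, 2)                        # initial/boundary points
u_data = torch.zeros(100, 2)                        # known values there (toy)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)   # ADAM stage only
for step in range(2000):
    opt.zero_grad()
    loss = (w_pde * pde_residual(xt_colloc).pow(2).mean()
            + w_data * (net(xt_data) - u_data).pow(2).mean())
    loss.backward()
    opt.step()
```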
Procedia PDF Downloads 133
4131 'Utsadhara': Rejuvenating the Dead River Edge into an Urban Activity Space along the Banks of River Hooghly
Authors: Aparna Saha, Tuhin Ahmed
Abstract:
West Bengal has a number of important rivers, each with its distinctive character and story. Traditionally, cities have opened out to rivers at the river edges, and rivers have been an inseparable part of the urban experience. The study area is taken in Barrackpore, a small but important outgrowth of Kolkata Municipal Association, West Bengal. Barrackpore at present lacks adequate public open spaces at the neighborhood level where people of different socio-cultural, economic, and religious backgrounds can come together and engage in various leisure activities; nor is there any opportunity for people to learn about and explore the rich history of the settlement. These issues form the backdrop of this research paper, which has been conceptualized as turning a space into a place that will bring people back to the river, increase community interactions, and celebrate and commemorate the historical importance of the river and its edges. The entire precinct bordering the river represents the transition from pre-independence (Raj era) to the Sepoy phase (Swaraj era), finally culminating in the Gandhian philosophy projected onto the already existing Gandhi Ghat. The ultimate aim of the paper, entitled 'Utsadhara: Rejuvenating the dead river edge into an urban activity space along the banks of river Hooghly', is to create a socio-cultural space keeping the heritage identity intact through judicious use of the water body, while keeping a balance between the natural ecosystem and the cosmetic development of the surrounding open spaces. This can be achieved by the methodology provided in the document, but the main focus is on preserving the historic ethnicity of the place by holding onto its character through various facts, figures, and features; most importantly, the natural topography of the place is left intact. The second priority is a hierarchy of well-connected public plazas and podiums where people from different socio-economic backgrounds, irrespective of age and sex, can socialize and build cordial relationships with one another. The third priority is to provide a platform for the common mass to showcase their skills and talent through different art and craft forms, which in turn would enhance both the individual and the community as a whole through economic upliftment. Apart from this, some spaces are created for different age groups and classes of people. The paper intends to treat the river as a major multifunctional public space to attract people for different activities and re-establish the relationship of the river with the settlement. Hence, the paper is intended not as a simple riverfront conservation project but, unlike others, as a place created for the people, by the people and of the people, towards holistic community development through a sustainable approach.
Keywords: holistic community development, public activity space, river-urban precinct, urban dead space
Procedia PDF Downloads 138
4130 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs
Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar
Abstract:
The oil and gas industry has been unwilling to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance by engineers, geoscientists and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty. One of the common applications of Monte Carlo simulation is the estimation of recoverable hydrocarbon from a reservoir. Monte Carlo simulation makes use of random samples of parameters or inputs to explore the behavior of a complex system or process. It finds application whenever one needs to make an estimate, forecast or decision where there is significant uncertainty. First, the project focuses on performing Monte Carlo simulation on a given data set using the U.S. Department of Energy's MonteCarlo software, a freeware E&P tool. Further, a simulation algorithm has been developed in MATLAB; the program performs the simulation by prompting the user for input distributions and the parameters associated with each distribution (i.e., mean, standard deviation, minimum, maximum, most likely, etc.). It also prompts the user for the desired probability at which reserves are to be calculated. The algorithm so developed and tested in MATLAB was then implemented in Python, where existing libraries for statistics and graph plotting were imported to generate better outcomes. With PyQt Designer, code for a simple graphical user interface has also been written. The plotted results are then validated against the results already available from the U.S. DOE MonteCarlo software.
Keywords: simulation, probability, confidence interval, sensitivity analysis
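The core Monte Carlo step can be sketched in a few lines of Python; the volumetric equation is standard, but the distributions and their parameters below are illustrative assumptions, not the paper's data set.

```python
# Illustrative sketch (assumed distributions, not the paper's data set):
# Monte Carlo estimation of recoverable oil via the volumetric equation
# N = 7758 * A * h * phi * (1 - Sw) * RF / Bo  [STB], A in acres, h in ft.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

A   = rng.triangular(800, 1000, 1500, n)   # area [acres]
h   = rng.triangular(20, 35, 60, n)        # net pay [ft]
phi = rng.normal(0.22, 0.03, n)            # porosity
Sw  = rng.normal(0.30, 0.05, n)            # water saturation
RF  = rng.uniform(0.15, 0.35, n)           # recovery factor
Bo  = rng.normal(1.2, 0.05, n)             # formation volume factor

reserves = 7758 * A * h * phi * (1 - Sw) * RF / Bo  # stock-tank barrels

# P90 = value exceeded with 90% probability, i.e. the 10th percentile
for p, label in [(90, "P90"), (50, "P50"), (10, "P10")]:
    print(label, f"{np.percentile(reserves, 100 - p):,.0f} STB")
```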
Procedia PDF Downloads 388
4129 Investigating Dynamic Transition Process of Issues Using Unstructured Text Analysis
Authors: Myungsu Lim, William Xiu Shun Wong, Yoonjin Hyun, Chen Liu, Seongi Choi, Dasom Kim, Namgyu Kim
Abstract:
The amount of real-time data generated through various mass media has been increasing rapidly. In this study, we performed topic analysis using unstructured text data distributed through news articles. As one of the most prevalent applications of topic analysis, the issue-tracking technique investigates the changes in social issues identified through topic analysis. Currently, traditional issue tracking is conducted by identifying the main topics of documents covering an entire period at the same time and analyzing the occurrence of each topic by period of occurrence. However, this traditional approach has the limitation that it cannot discover the dynamic mutation process of complex social issues. The purpose of this study is to overcome this limitation of the existing issue-tracking method. We first derive the core issues of each period and then discover the dynamic mutation process of the various issues. We further analyze the mutation process from the perspective of issue categories, in order to figure out the pattern of issue flow, including the frequency and reliability of the pattern. In other words, this study allows us to understand the components of complex issues by tracking the dynamic history of issues. This methodology can facilitate a clearer understanding of complex social phenomena by providing the mutation history and related category information of the phenomena.
Keywords: data mining, issue tracking, text mining, topic analysis, topic detection, trend detection
Procedia PDF Downloads 410
4128 Automatic Generation of CNC Code for Milling Machines
Authors: Chalakorn Chitsaart, Suchada Rianmora, Mann Rattana-Areeyagon, Wutichai Namjaiprasert
Abstract:
G-code is the main means by which a computer numerical control (CNC) machine controls tool paths and generates the profile of an object's features. To obtain high accuracy of the surface finish, non-stop operation of the CNC machine is required. Recently, a design strategy has been introduced that favors changes with low business impact and low resource consumption. The cost and time of designing minor changes can be reduced, since the traditional geometric details of the existing models are applied. In order to support this strategy as an alternative channel for machining operations, this research proposes automatic generation of code for CNC milling operations. Using this technique can help the manufacturer easily change the size and geometric shape of the product during operation, reducing the time spent setting up or processing the machine. The algorithm, implemented on the MATLAB platform, is developed by analyzing and evaluating the geometric information of the part. Code is created rapidly to control the operations of the machine. Compared to the code obtained from CAM software, the developed algorithm can quickly generate and simulate the cutting profile of the part.
Keywords: geometric shapes, milling operation, minor changes, CNC machine, G-code, cutting parameters
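As a toy illustration of the kind of output such an algorithm produces (not the authors' MATLAB implementation), the following Python function emits G-code for a simple rectangular contour from its geometric parameters.

```python
# Toy sketch (not the authors' MATLAB implementation): generating G-code
# for a rectangular contour from its geometric parameters.
def rectangle_gcode(x0, y0, width, height, depth, feed=200, safe_z=5.0):
    lines = [
        "G21 ; units in mm", "G90 ; absolute coordinates",
        f"G0 Z{safe_z}", f"G0 X{x0} Y{y0}",
        f"G1 Z{-depth} F{feed}",                      # plunge
        f"G1 X{x0 + width} Y{y0} F{feed}",            # cut the profile
        f"G1 X{x0 + width} Y{y0 + height}",
        f"G1 X{x0} Y{y0 + height}",
        f"G1 X{x0} Y{y0}",
        f"G0 Z{safe_z}",                              # retract
        "M30 ; end of program",
    ]
    return "\n".join(lines)

print(rectangle_gcode(10, 10, 40, 25, depth=2.0))
```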
Procedia PDF Downloads 354
4127 An Improved Image Steganography Technique Based on Least Significant Bit Insertion
Authors: Olaiya Folorunsho, Comfort Y. Daramola, Joel N. Ugwu, Lawrence B. Adewole, Olufisayo S. Ekundayo
Abstract:
In today's world, there is a tremendous rise in the usage of the internet, since almost all communication and information sharing is done over the web. Conversely, there is continuous growth in unauthorized access to confidential data. This has posed a challenge to information security experts, whose major goal is to curtail the menace. One approach to securing the safe delivery of data to the rightful destination without any modification is steganography, the art of hiding information inside other, innocuous-looking information. This research paper aims at designing a secure image steganographic algorithm that uses the Least Significant Bit (LSB) method to embed data into a bitmap (BMP) image in order to enhance security and reliability. In the LSB approach, the basic idea is to replace the LSBs of the pixels of the cover image with the bits of the message to be hidden, without significantly destroying the properties of the cover image. The system was implemented using the C# programming language on the Microsoft .NET framework. The performance of the proposed system was evaluated by conducting benchmarking tests analyzing parameters such as Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). The results showed that image steganography performed considerably well in securing data hiding and information transmission over networks.
Keywords: steganography, image steganography, least significant bits, bit map image
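The LSB embedding described above can be sketched in a few lines of numpy (illustrative only; the authors' implementation is in C#), together with the MSE/PSNR evaluation used in the benchmarking.

```python
# Minimal numpy sketch of the LSB idea (illustrative, not the authors'
# C# implementation): hide message bits in pixel LSBs and measure the
# distortion with MSE/PSNR.
import numpy as np

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # toy image
message = b"secret"
bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))

stego = cover.copy().ravel()
stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits        # set the LSBs
stego = stego.reshape(cover.shape)

# Recover: read the LSBs back and repack them into bytes
recovered = np.packbits(stego.ravel()[:bits.size] & 1).tobytes()
assert recovered == message

mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse) if mse > 0 else float("inf")
print(f"MSE = {mse:.5f}, PSNR = {psnr:.2f} dB")
```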
Procedia PDF Downloads 271
4126 A Sensitive Approach on Trace Analysis of Methylparaben in Wastewater and Cosmetic Products Using Molecularly Imprinted Polymer
Authors: Soukaina Motia, Nadia El Alami El Hassani, Alassane Diouf, Benachir Bouchikhi, Nezha El Bari
Abstract:
Parabens are antimicrobial molecules widely used in cosmetic products as preservative agents. Among them, methylparaben (MP) is the most frequently used ingredient in cosmetic preparations. Nevertheless, their potential dangers have led to the development of sensitive and reliable methods for their determination in environmental samples. Firstly, a sensitive and selective molecularly imprinted polymer (MIP), based on a screen-printed gold electrode (Au-SPE) assembled on a polymeric layer of carboxylated poly(vinyl chloride) (PVC-COOH), was developed. After template removal, the obtained material was able to rebind MP and discriminate it among other interfering species such as glucose, sucrose, and citric acid. The behavior of the molecularly imprinted sensor was characterized by cyclic voltammetry (CV), differential pulse voltammetry (DPV) and electrochemical impedance spectroscopy (EIS) techniques. The biosensor was found to have a linear detection range from 0.1 pg.mL-1 to 1 ng.mL-1 and low limits of detection of 0.12 fg.mL-1 and 5.18 pg.mL-1 by DPV and EIS, respectively. For applications, this biosensor was employed to determine the MP content in four wastewater samples from Meknes city and in two cosmetic products (shower gel and shampoo). The operational reproducibility and stability of this biosensor were also studied. Secondly, another MIP biosensor, based on tungsten trioxide (WO3) functionalized by gold nanoparticles (Au-NPs) assembled on a polymeric layer of PVC-COOH, was developed with the main goal of increasing the sensitivity of the biosensor. The developed MIP biosensor was successfully applied to MP determination in wastewater samples and cosmetic products.
Keywords: cosmetic products, methylparaben, molecularly imprinted polymer, wastewater
Procedia PDF Downloads 324
4125 Effect of an Interface Defect in a Patch/Layer Joint under Dynamic Time Harmonic Load
Authors: Elisaveta Kirilova, Wilfried Becker, Jordanka Ivanova, Tatyana Petrova
Abstract:
The study is a continuation of research on the hygrothermal piezoelectric response of a smart patch/layer joint with an undesirable interface defect (gap) under dynamic time-harmonic mechanical and electrical load and environmental conditions. In order to find the axial displacements, shear stress and interface debond length in closed analytical form for different positions of the interface gap, a 1D modified shear-lag analysis is used. The debond length is represented as a function of many parameters (frequency, magnitude, electric displacement, moisture and temperature, joint geometry, position of the gap along the interface, etc.). Then a genetic algorithm (GA) is implemented to find the position of the gap along the interface at which a vanishing/minimal debond length is ensured, i.e., the most harmless position for the safe operation of the structure. The illustrative example clearly shows that analytical shear-lag solutions and the GA method can be combined successfully to give an effective prognosis of interface shear stress and interface delamination in a patch/layer structure under combined loading with existing defects. To show the effect of the position of the interface gap, all obtained results are given in figures and discussed.
Keywords: genetic algorithm, minimal delamination, optimal gap position, shear lag solution
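A minimal sketch of the GA step might look as follows; the debond-length objective here is a stand-in quadratic, since the paper's actual objective comes from the closed-form shear-lag solution.

```python
# Conceptual sketch: a simple genetic algorithm searching for the gap
# position that minimizes debond length. The objective below is a
# placeholder; the true one comes from the paper's shear-lag solution.
import random

def debond_length(gap_pos):
    # stand-in for the closed-form shear-lag expression
    return (gap_pos - 0.63) ** 2 + 0.01 * abs(gap_pos - 0.2)

POP, GENS, MUT = 30, 100, 0.1

population = [random.uniform(0.0, 1.0) for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=debond_length)
    parents = population[:POP // 2]                 # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                       # arithmetic crossover
        child += random.gauss(0, MUT)               # Gaussian mutation
        children.append(min(max(child, 0.0), 1.0))  # keep in [0, 1]
    population = parents + children

best = min(population, key=debond_length)
print(f"best gap position ~ {best:.3f}, "
      f"debond length ~ {debond_length(best):.4f}")
```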
Procedia PDF Downloads 305
4124 Two Years Retrospective Study of Body Fluid Cultures Obtained from Patients in the Intensive Care Unit of General Hospital of Ioannina
Authors: N. Varsamis, M. Gerasimou, P. Christodoulou, S. Mantzoukis, G. Kolliopoulou, N. Zotos
Abstract:
Purpose: Body fluids (pleural, peritoneal, synovial, pericardial, cerebrospinal) are an important element in the detection of microorganisms. For this reason, it is important to examine them in Intensive Care Unit (ICU) patients. Material and Method: Body fluids are transported in sterile containers and enriched as soon as possible with Tryptic Soy Broth (TSB). After one day of incubation, the broth is poured onto selective media: blood, MacConkey No. 2, chocolate, Mueller-Hinton, Chapman and Sabouraud agar. These selective media are incubated for 2 days. After this period, if any microbial colonies are detected, Gram staining is performed. The isolated organisms are then identified by biochemical techniques in the automated MicroScan system (Siemens), followed by a sensitivity test on the same system using the minimum inhibitory concentration (MIC) technique. The sensitivity test is verified by a Kirby-Bauer-based plate test. Results: In 2017 the Laboratory of Microbiology received 60 samples of body fluids from the ICU: 6 peritoneal fluid specimens, 18 pleural fluid specimens and 36 cerebrospinal fluid specimens. 36 cultures were positive. S. epidermidis was identified in 18 specimens, S. haemolyticus in 6, and E. faecium in 12. Conclusions: The results show a low detection rate of microorganisms in body fluid cultures.
Keywords: body fluids, culture, intensive care unit, microorganisms
Procedia PDF Downloads 205
4123 Testing ChatGPT: An AI Application
Authors: Jana Ismail, Layla Fallatah, Maha Alshmaisi
Abstract:
ChatGPT, a cutting-edge language model built on the GPT-3.5 architecture, has garnered attention for its profound natural language processing capabilities, holding promise for transformative applications in customer service and content creation. This study delves into ChatGPT's architecture, aiming to comprehensively understand its strengths and potential limitations. Through systematic experiments across diverse domains, such as general knowledge and creative writing, we evaluated the model's coherence, context retention, and task-specific accuracy. While ChatGPT excels at generating human-like responses and demonstrates adaptability, occasional inaccuracies and sensitivity to input phrasing were observed. The study emphasizes the impact of prompt design on output quality, providing valuable insights for the nuanced deployment of ChatGPT in conversational AI and contributing to the ongoing discourse on the evolving landscape of natural language processing in artificial intelligence.
Keywords: artificial intelligence, ChatGPT, OpenAI, NLP
Procedia PDF Downloads 81
4122 Visual Inspection of Road Conditions Using Deep Convolutional Neural Networks
Authors: Christos Theoharatos, Dimitris Tsourounis, Spiros Oikonomou, Andreas Makedonas
Abstract:
This paper focuses on the problem of visually inspecting and recognizing the road conditions in front of moving vehicles, targeting automotive scenarios. The goal of road inspection is to identify whether the road is slippery or not, as well as to detect possible anomalies on the road surface, like potholes or bumps/humps. Our work is based on an artificial intelligence methodology for real-time monitoring of road conditions in autonomous driving scenarios, using state-of-the-art deep convolutional neural network (CNN) techniques. Initially, the road and ego lane are segmented within the field of view of the camera that is integrated into the front part of the vehicle. A novel classification CNN is utilized to distinguish between plain and slippery road textures (e.g., wet, snow, etc.). Simultaneously, a robust detection CNN identifies severe surface anomalies within the ego lane, such as potholes and speed bumps/humps, within a distance of 5 to 25 meters. The overall methodology is illustrated within the scope of an integrated application (or system), which can be integrated into complete Advanced Driver-Assistance Systems (ADAS) providing a full range of functionalities. The proposed techniques achieve state-of-the-art detection and classification results and real-time performance when running on AI accelerator devices like Intel's Myriad 2/X Vision Processing Unit (VPU).
Keywords: deep learning, convolutional neural networks, road condition classification, embedded systems
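For illustration, a small classification CNN of the kind described might be sketched in PyTorch as follows; the architecture, patch size and class set are assumptions, not the paper's network.

```python
# Illustrative PyTorch sketch (the paper's actual network is not published
# here): a small CNN classifying road-texture patches as plain / wet / snow.
import torch
import torch.nn as nn

class RoadTextureCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = RoadTextureCNN()
patch = torch.randn(1, 3, 64, 64)             # one RGB patch from the ego lane
logits = model(patch)
print(logits.softmax(dim=1))                  # class probabilities
```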
Procedia PDF Downloads 137
4121 Post Growth Annealing Effect on Deep Level Emission and Raman Spectra of Hydrothermally Grown ZnO Nanorods Assisted by KMnO4
Authors: Ashish Kumar, Tejendra Dixit, I. A. Palani, Vipul Singh
Abstract:
Zinc oxide, with its interesting properties such as a large band gap (3.37 eV), a high exciton binding energy (60 meV) and intense UV absorption, has been studied in the literature for various applications, viz. optoelectronics, biosensors, UV photodetectors, etc. The performance of ZnO devices is highly influenced by the morphology, size and crystallinity of the ZnO active layer and the processing conditions. Recently, our group has shown the influence of the in situ addition of KMnO4 in the precursor solution during the hydrothermal growth of ZnO nanorods (NRs) on their near-band-edge (NBE) emission. In this paper, we investigate the effect of post-growth annealing on the variations in the NBE and deep level (DL) emissions of the as-grown ZnO nanorods. The observed results are explained on the basis of X-ray diffraction (XRD) and Raman spectroscopic analysis, which clearly show improved crystallinity and quantum confinement in the ZnO nanorods.
Keywords: ZnO, nanorods, hydrothermal, KMnO4
Procedia PDF Downloads 405
4120 GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal
Authors: Thomas Odaker, Dieter Kranzlmueller, Jens Volkert
Abstract:
We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices based on this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow for the parallel removal of multiple vertices, we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of calculating the vertex errors, ordering the error values and removing vertices, until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach speeds up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification.
Keywords: computer graphics, half edge collapse, mesh simplification, precomputed simplification, topology preserving
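The quadric error metric step can be sketched as follows (a CPU/numpy version for clarity; the paper executes this on the GPU): each vertex accumulates the plane quadrics of its adjacent faces, and its error is v^T Q v.

```python
# Sketch of the quadric error metric step (CPU/numpy version for clarity;
# the paper runs this on the GPU): each vertex accumulates the quadrics of
# its adjacent face planes, and its error is v^T Q v.
import numpy as np

def face_quadric(p0, p1, p2):
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    plane = np.append(n, d)              # [a, b, c, d] with ax+by+cz+d = 0
    return np.outer(plane, plane)        # 4x4 quadric K = pp^T

def vertex_errors(vertices, faces):
    Q = np.zeros((len(vertices), 4, 4))
    for i, j, k in faces:                # accumulate face quadrics per vertex
        K = face_quadric(vertices[i], vertices[j], vertices[k])
        for v in (i, j, k):
            Q[v] += K
    hom = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous
    return np.einsum('vi,vij,vj->v', hom, Q, hom)             # v^T Q v

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]], float)
faces = [(0, 1, 2), (1, 3, 2)]
print(vertex_errors(verts, faces))       # lowest-error vertices go first
```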
Procedia PDF Downloads 369
4119 Secure Message Transmission Using Meaningful Shares
Authors: Ajish Sreedharan
Abstract:
Visual cryptography encodes a secret image into shares of random binary patterns. If the shares are printed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning, which hinders the objectives of visual cryptography. In secret message transmission through meaningful shares, the secret message to be transmitted is first converted to a grey-scale image. Then (2,2) visual cryptographic shares are generated from this grey-scale image. The shares are encrypted using a chaos-based image encryption algorithm using the wavelet transform. Two separate color images of the same size as the shares are taken as cover images to hide the respective shares within them. Because the encrypted shares are covered by meaningful images, a potential eavesdropper won't know there is a message to be read. The meaningful shares are transmitted through two different transmission media. During decoding, the shares are extracted from the received meaningful images and decrypted using the same chaos-based image encryption algorithm. The shares are then combined to regenerate the grey-scale image from which the secret message is obtained.
Keywords: visual cryptography, wavelet transform, meaningful shares, grey scale image
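The (2,2) share-generation step can be sketched as follows for a binary image (illustrative; the grey-scale case requires halftoning first, and the chaos-based encryption stage is omitted here).

```python
# Minimal sketch of (2,2) visual cryptography share generation for a binary
# image (illustrative; the chaos-based encryption stage is omitted).
import numpy as np

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, (8, 8))          # toy binary secret image

# Each secret pixel expands to one subpixel pair per share.
# White pixel (0): both shares get the SAME random pair (stack -> half black).
# Black pixel (1): shares get COMPLEMENTARY pairs (stack -> fully black).
patterns = np.array([[1, 0], [0, 1]])
choice = rng.integers(0, 2, secret.shape)

share1 = np.zeros((secret.shape[0], secret.shape[1] * 2), dtype=int)
share2 = np.zeros_like(share1)
for r in range(secret.shape[0]):
    for c in range(secret.shape[1]):
        p = patterns[choice[r, c]]
        share1[r, 2*c:2*c+2] = p
        share2[r, 2*c:2*c+2] = p if secret[r, c] == 0 else 1 - p

stacked = share1 | share2                    # superimposing transparencies
# black secret pixels -> both subpixels black; white -> one black, one white
```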
Procedia PDF Downloads 461
4118 Forecasting Optimal Production Program Using Profitability Optimization by Genetic Algorithm and Neural Network
Authors: Galal H. Senussi, Muamar Benisa, Sanja Vasin
Abstract:
In today's business environment, one of the most important issues for any enterprise is cost minimization and profit maximization. A second issue is how to develop a strong and capable model that can deliver the desired forecasts of these two quantities. Much research deals with these issues using different methods. In this study, we developed a model for multi-criteria production program optimization, integrated with an artificial neural network. Predicting the production cost and profit per unit of a product, dealing with two opposing objectives at the same time, can be extremely difficult, especially if there is a great amount of conflicting information about production parameters. Feed-forward neural networks are suitable for generalization, which means that the network will generate a proper output in response to input it has never seen. Therefore, with a small set of examples, the network will adjust its weight coefficients so that the input generates a proper output. This essential characteristic is one of the most important abilities enabling this network to be used in a variety of problems, ranging from engineering to finance. As our results show, feed-forward neural networks have a strong ability and capability to map inputs to desired outputs.
Keywords: project profitability, multi-objective optimization, genetic algorithm, Pareto set, neural networks
Procedia PDF Downloads 448
4117 Advancements in Electronic Sensor Technologies for Tea Quality Evaluation
Authors: Raana Babadi Fathipour
Abstract:
Tea, second only to water in global consumption rates, holds a significant place as the beverage of choice for many around the world. The process of fermenting tea leaves plays a crucial role in determining its ultimate quality, traditionally assessed through meticulous observation by tea tasters and laboratory analysis. However, advancements in technology have paved the way for innovative electronic sensing platforms like the electronic nose (e-nose), electronic tongue (e-tongue), and electronic eye (e-eye). These tools, coupled with sophisticated data-processing algorithms, not only expedite the assessment of tea's sensory qualities based on consumer preferences but also establish new benchmarks for this esteemed bioactive product to meet growing market demands worldwide. By harnessing the data sets derived from electronic signals and deploying multivariate statistical techniques, these technologies can improve the accuracy of predicting and distinguishing tea quality. This review provides a comprehensive overview of the most recent breakthroughs and viable solutions aimed at addressing forthcoming challenges in tea analysis. Using bio-mimicking electronic sensory perception systems (ESPs), researchers have developed technologies that enable precise and instantaneous evaluation of the sensory-chemical attributes of tea and its derivatives. These sensing systems can decipher key elements such as aroma, taste, and color profiles, feeding the data into mathematical algorithms for classification. They have shown proficiency in discerning teas with respect to price, geographic origin, harvest period, fermentation process, storage duration, quality grade, and potential adulteration level. While voltammetric and fluorescent sensor arrays have emerged as promising tools for constructing electronic tongue systems for scrutinizing tea composition, potentiometric electrodes continue to serve as reliable instruments for monitoring taste dynamics in different tea varieties. Implementing a feature-level fusion strategy in predictive models can markedly enhance efficiency and accuracy. Moreover, establishing links between sensory traits and the biochemical makeup of tea samples through pattern recognition methods contributes to a better understanding of this complex beverage.
Keywords: classifier system, tea, polyphenol, sensor, taste sensor
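As a hedged illustration of the multivariate-statistics step, a typical pipeline classifying tea grades from e-nose readings might look as follows; the data, channel count and grade labels are random stand-ins, not measurements from any study cited here.

```python
# Hedged sketch of the multivariate-statistics step: a PCA + SVM pipeline
# classifying tea grades from e-nose sensor arrays. The data are random
# stand-ins for real sensor readings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 16))        # 120 samples x 16 gas-sensor channels
y = rng.integers(0, 3, 120)           # 3 assumed tea quality grades

clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```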
Procedia PDF Downloads 10
4116 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
Research on damage to gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signals of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information that is generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands, even when more than one family of sidebands is present at the same time. The concepts behind the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, while the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the diagnosis of gear faults based on the order cepstrum is also presented, and the procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of gear condition.
Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals
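The real cepstrum at the heart of the method is the inverse Fourier transform of the logarithm of the magnitude spectrum; the following sketch computes it for a toy amplitude-modulated gear-mesh signal (assumed sampling rate and frequencies, not the paper's measurements).

```python
# Sketch of the real cepstrum computation underlying the analysis above
# (toy signal; in the paper the input is the measured gearbox vibration).
import numpy as np

fs = 20_000                                   # assumed sampling rate [Hz]
t = np.arange(0, 0.5, 1 / fs)
# toy vibration: gear-mesh tone with sidebands spaced at the shaft frequency
mesh, shaft = 1000.0, 25.0
x = (1 + 0.5 * np.sin(2 * np.pi * shaft * t)) * np.sin(2 * np.pi * mesh * t)

spectrum = np.fft.rfft(x)
cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
quefrency = np.arange(cepstrum.size) / fs     # seconds

# A sideband family spaced at f_s appears as a peak at quefrency 1/f_s
peak = quefrency[np.argmax(cepstrum[50:5000]) + 50]
print(f"dominant quefrency ~ {peak*1000:.2f} ms "
      f"(expected ~ {1000/shaft:.1f} ms)")
```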
Procedia PDF Downloads 387
4115 Convolutional Neural Networks versus Radiomic Analysis for Classification of Breast Mammogram
Authors: Mehwish Asghar
Abstract:
Breast cancer (BC) is a common type of cancer among women. Its screening is usually performed using different imaging modalities such as magnetic resonance imaging, mammography, X-ray, CT, etc. Among these modalities, the mammogram is considered a powerful tool for the diagnosis and screening of breast cancer. Sophisticated machine learning approaches have shown promising results in complementing human diagnosis. Generally, machine learning methods can be divided into two major classes: one is radiomics analysis (RA), where image features are extracted manually; the other is the concept of convolutional neural networks (CNN), in which the computer learns to recognize image features on its own. This research aims to improve the rate of early detection, thus reducing the mortality rate caused by breast cancer, through the latest advancements in computer science in general and machine learning in particular. It also aims to ease the burden on doctors by improving and automating the process of breast cancer detection. This research presents a comparative analysis of different techniques for implementing models for detecting and classifying breast cancer; the main goal is to provide a detailed view of the results and performance of the different techniques. The purpose of this paper is to explore the potential of a convolutional neural network (CNN) as a feature extractor and as a classifier. A radiomics module is also included so that its results can be compared with the deep learning techniques.
Keywords: breast cancer (BC), machine learning (ML), convolutional neural network (CNN), radiomics, magnetic resonance imaging, artificial intelligence
Procedia PDF Downloads 230
4114 Development of Pothole Management Method Using Automated Equipment with Multi-Beam Sensor
Authors: Sungho Kim, Jaechoul Shin, Yujin Baek, Nakseok Kim, Kyungnam Kim, Shinhaeng Jo
Abstract:
Climate change and the increase in heavy traffic have been accelerating damage such as potholes on asphalt pavement. Potholes cause traffic accidents, vehicle damage, road casualties and traffic congestion. A quick and efficient maintenance method is needed, because a pothole is caused by stripping and accelerates pavement distress. In this study, we propose rapid and systematic pothole management based on automated pothole-repair equipment that includes a pothole volume measurement system. Three kinds of cold-mix asphalt mixtures were investigated to select the repair materials; the materials were evaluated for compliance with quality standards and applicability to the automated equipment. The pothole volume measurement system is composed of multiple sensors, combining a laser sensor and an ultrasonic sensor, installed at the front and side of the automated repair equipment. An algorithm is proposed to calculate the amount of repair material according to the measured pothole volume, and a system for releasing the correct amount of material was developed. Field test results showed that the loss of repair material could be reduced from approximately 20% to 6% per pothole. Rapid automated pothole-repair equipment will contribute to improved quality and efficient, economical maintenance, not only by reducing materials and resources but also by calculating the appropriate amount of material. Through field application, it is possible to improve the accuracy of pothole volume measurement, to correct the calculation of material amount, and to manage pothole data for roads, thereby enabling more efficient pavement maintenance management. Acknowledgment: The authors would like to thank the MOLIT (Ministry of Land, Infrastructure, and Transport). This work was carried out through a project funded by the MOLIT: 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
Keywords: automated equipment, management, multi-beam sensor, pothole
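The material-amount calculation can be illustrated as follows (the density, grid resolution and loss margin are assumptions, not the paper's calibrated values): integrate the depth grid measured by the multi-beam sensor into a volume, then convert it to a material mass.

```python
# Illustrative sketch of the material-amount calculation (the paper's
# actual algorithm and densities are not reproduced here): integrate a
# sensed depth grid into a volume, then a material mass.
import numpy as np

cell_area = 0.01 * 0.01                   # assumed 1 cm x 1 cm grid [m^2]
depth = np.clip(np.random.default_rng(3).normal(0.03, 0.01, (40, 60)),
                0, None)                  # toy depth map [m], negatives clipped

volume = depth.sum() * cell_area          # m^3, rectangular-rule integration
density = 2350.0                          # assumed cold-mix density [kg/m^3]
loss_factor = 1.06                        # assumed 6% compaction/loss margin

mass = volume * density * loss_factor
print(f"pothole volume ~ {volume*1000:.1f} L, "
      f"material needed ~ {mass:.1f} kg")
```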
Procedia PDF Downloads 225
4113 GA3C for Anomalous Radiation Source Detection
Authors: Chia-Yi Liu, Bo-Bin Xiao, Wen-Bin Lin, Hsiang-Ning Wu, Liang-Hsun Huang
Abstract:
In order to reduce the risk of radiation damage that personnel may suffer during operations in a radiation environment, the use of automated guided vehicles to assist or replace on-site personnel has become a key technology and an important trend. In this paper, we demonstrate a proof of concept for an autonomous, self-learning radiation source searcher in an unknown environment without a map. The research uses the GPU version of the Asynchronous Advantage Actor-Critic network (GA3C), a deep reinforcement learning method, to search for radiation sources. The searcher network, based on the GA3C architecture, learned in a self-directed way how to search for an anomalous radiation source by training for 1 million episodes under three simulation environments. In each training episode, the radiation source position, the radiation source intensity, and the starting position are all set randomly in one simulation environment. The input to the searcher network is the fused data from a 2D laser scanner and an RGB-D camera, together with the reading of the radiation detector; the output actions are the linear and angular velocities. The searcher network is trained in a simulation environment to accelerate the learning process. The well-performing searcher network was then deployed to a real unmanned vehicle, a Dashgo E2, which mounts a YDLIDAR G4 LIDAR, an Intel D455 RGB-D camera, and a radiation detector made by the Institute of Nuclear Energy Research. In the field experiment, the unmanned vehicle was able to search out an 18.5 MBq Na-22 radiation source by itself while simultaneously avoiding obstacles, without human interference.
Keywords: deep reinforcement learning, GA3C, source searching, source detection
Procedia PDF Downloads 119
4112 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears
Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally
Abstract:
Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack. Early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, to analyze the vibration behavior of gears, different approaches have been tried, based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that could be used to simulate the effect of tooth cracks on the resulting vibrations and hence to permit early fault detection in gear transmission systems. Unlike the majority of published papers, where only one single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including variable gear mesh stiffness and the effect of friction, but ignores the lubrication effect. The vibration simulation results of the gearbox were obtained via Matlab and Simulink and were found to be consistent with results from previously published works. The effect of a single crack at different severity levels was studied, and the changes observed in the total mesh stiffness and the vibration response were very similar to those found in previous studies. The effect of the crack length on various statistical time domain parameters was considered, and the results show that these parameters were not equally sensitive to the crack percentage. Multiple cracks were then introduced at different locations, and the vibration response and statistical parameters were obtained.
Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection
Procedia PDF Downloads 214
4111 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multiple-Input Multiple-Output
Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin
Abstract:
With the increasing number of wireless devices and high-bandwidth operations, wireless networking and communications are becoming overcrowded. To cope with this situation, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD has been used to enable beamforming, which is a major part of massive MIMO, and to transmit and receive pilot sequences most effectively. All these benefits are only possible if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS and MMSE, and a linear version of MMSE has also been proposed in many research works. We have optimized these methods using a genetic algorithm to minimize the mean squared error and find the best channel matrix among existing algorithms, with less computational complexity. Our simulation results show that the use of the GA worked well with the existing algorithms in a Rayleigh slow-fading channel in the presence of additive white Gaussian noise. We found that the GA-optimized LS is better than the existing algorithms, as the GA provides an optimal result within a few iterations in terms of MSE with respect to SNR and computational complexity.
Keywords: channel estimation, LMMSE, LS, MIMO, MMSE
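The pilot-based LS estimate that the GA then refines can be sketched in numpy as follows (toy dimensions; the GA loop itself is omitted here): for a received pilot block Y = HP + N, the LS solution is H_ls = Y P^H (P P^H)^(-1).

```python
# Numpy sketch of the pilot-based LS channel estimate that the GA then
# refines (illustrative dimensions; the GA loop itself is omitted here).
import numpy as np

rng = np.random.default_rng(7)
M, K, L = 8, 4, 16            # BS antennas, users, pilot length (toy sizes)

H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
P = (rng.normal(size=(K, L)) + 1j * rng.normal(size=(K, L))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L)))
Y = H @ P + noise             # received pilot block, Y = HP + N

# LS estimate: H_ls = Y P^H (P P^H)^(-1)
H_ls = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)

mse = np.mean(np.abs(H - H_ls) ** 2)
print(f"LS channel estimation MSE = {mse:.4f}")
```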
Procedia PDF Downloads 197
4110 Teachers and Innovations in Information and Communication Technology
Authors: Martina Manenova, Lukas Cirus
Abstract:
This article introduces research focused on elementary school teachers' approach to innovations in ICT. The diffusion of innovations theory, developed by E. M. Rogers, captures the process of innovation adoption. The research method was derived from this theory, and Rogers' questionnaire on the diffusion of innovations was used as the basic research instrument. The research sample consisted of elementary school teachers. A comparison with Rogers' results shows that the so-called early majority prevailed among the teachers in the research sample, and the overall distribution of the data was rather central (early adopters, early majority, and late majority). The teachers very rarely appeared at the edge positions (innovators, laggards). The obtained results can be applied to teaching practice, especially in the implementation of new technologies and techniques in the educational process.
Keywords: innovation, diffusion of innovation, information and communication technology, teachers
Procedia PDF Downloads 296
4109 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying the reliability; 6) calculation of the weights of the elements. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research and the design of systems with optimal structure are carried out.
Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element
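Stages 3-5 can be illustrated on a hedged toy system (not the paper's web application): the system works if at least one minimal path has all its elements working, and the reliability is the probability of that union; the inclusion-exclusion below plays the role that orthogonalization of the DNF plays in the paper.

```python
# Small sketch of stages 3-5 on an assumed example system: reliability as
# the probability that at least one minimal path is fully working, computed
# via inclusion-exclusion (standing in for DNF orthogonalization).
from itertools import combinations

paths = [{1, 2}, {1, 3}, {4}]          # minimal path sets (element IDs)
p = {1: 0.9, 2: 0.8, 3: 0.85, 4: 0.7}  # element reliabilities (assumed)

def prob_all_work(elements):
    r = 1.0
    for e in elements:
        r *= p[e]
    return r

reliability = 0.0
for k in range(1, len(paths) + 1):     # inclusion-exclusion over path unions
    for subset in combinations(paths, k):
        union = set().union(*subset)
        reliability += (-1) ** (k + 1) * prob_all_work(union)

print(f"system reliability = {reliability:.4f}")
```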
Procedia PDF Downloads 77
4108 A New Intelligent, Dynamic and Real Time Management System of Sewerage
Authors: R. Tlili Yaakoubi, H.Nakouri, O. Blanpain, S. Lallahem
Abstract:
The current tools for the real-time management of sewer systems are based on two software tools: weather forecasting software and hydraulic simulation software. The use of the former is an important cause of imprecision and uncertainty, while the use of the latter requires long decision time steps because of the computation times involved. As a consequence, the results obtained generally differ from those expected. The major idea of this project is to change the basic paradigm by approaching the problem from the control engineering side rather than the hydrology side. The objective is to make possible a large number of simulations in very short times (a few seconds), allowing weather forecasts to be replaced by the direct use of real-time measured rainfall data. The aim is to reach a system where decision-making is based on reliable data and where error correction is permanent. A first model of control laws was realized and tested with rainfalls of different return periods; the gains obtained in rejected volume vary from 19 to 100%. A new algorithm was then developed to optimize calculation time and thus to overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The obtained gains are 40% of the total volume rejected to the natural environment and 65% in the number of discharges.
Keywords: automation, optimization, paradigm, RTC
Procedia PDF Downloads 302
4107 E-Learning in Life-Long Learning: Best Practices from the University of the Aegean
Authors: Chryssi Vitsilaki, Apostolos Kostas, Ilias Efthymiou
Abstract:
This paper presents selected best practices in online learning and teaching derived from a novel and innovative lifelong learning program through e-learning, which has been set up during the last five years at the University of the Aegean in Greece. The university, capitalizing on award-winning, decade-long experience in e-learning and blended learning in undergraduate and postgraduate studies, recently expanded into continuing education and vocational training programs in various cutting-edge fields. In this article we present: (a) the academic structure/infrastructure that has been developed for the administrative, organizational and educational support of the e-learning process, including training the trainers; (b) the mode of design and implementation, based on a sound pedagogical framework of open and distance education; and (c) the key results of the participants' assessment of the e-learning process, as they are used as feedback for continuous organizational and teaching improvement and quality control.
Keywords: distance education, e-learning, life-long programs, synchronous/asynchronous learning
Procedia PDF Downloads 337
4106 Application of an Adaptive Neuro-Fuzzy Inference System (ANFIS) for the Estimation of Flood Hydrographs
Authors: Amir Ahmad Dehghani, Morteza Nabizadeh
Abstract:
This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to flood hydrograph modeling of the Shahid Rajaee reservoir dam, located in Iran. The study was carried out using 11 flood hydrographs recorded at the Tajan river gauging station. From this dataset, 9 flood hydrographs were chosen to train the model and 2 flood hydrographs to test it. Different architectures of the neuro-fuzzy model, varying in membership function and learning algorithm, were designed and trained for different numbers of epochs. The results were evaluated in comparison with the observed hydrographs, and the best model structure was chosen according to the lowest RMSE in each run. To evaluate the efficiency of the neuro-fuzzy model, various statistical indices, such as the Nash-Sutcliffe and flood peak discharge error criteria, were calculated. In this simulation, the coordinates of a flood hydrograph, including the peak discharge, were estimated using the discharge values of earlier time steps as inputs to the neuro-fuzzy model. These results indicate the satisfactory efficiency of the neuro-fuzzy model for flood simulation, demonstrating the suitability of the implemented approach for flood management projects.
Keywords: adaptive neuro-fuzzy inference system, flood hydrograph, hybrid learning algorithm, Shahid Rajaee reservoir dam
Procedia PDF Downloads 483
4105 Optimization and Automation of Functional Testing with White-Box Testing Method
Authors: Reyhaneh Soltanshah, Hamid R. Zarandi
Abstract:
Software testing is necessary for industries that rely on computer systems to be efficient, despite the time and money it consumes. In embedded system software testing, complete knowledge of the embedded system architecture is necessary to avoid significant costs and damage, and software tests increase the price of the final product. The aim of this article is to provide a method to reduce the time and cost of tests based on program structure. First, a complete review of eleven white-box test methods based on the 2015 and 2021 versions of ISO/IEC/IEEE 29119 was done. The proposed algorithm was designed using the two versions of the 29119 standard, and some white-box testing methods that are expensive or offer little coverage were removed. White-box test methods were applied to each function according to the 29119 standard, and then the proposed algorithm was implemented on the functions. To speed up the implementation of the proposed method, the Unity framework was used with some changes; the Unity framework can be used in embedded software testing because it is open source and able to implement white-box test methods. The test items obtained from the two approaches were evaluated using a mathematical ratio, which for the various software tested reduced the test cost by between 50% and 80% and reached the desired result with the minimum number of test items.
Keywords: embedded software, reduce costs, software testing, white-box testing
Procedia PDF Downloads 61
4104 PET Image Resolution Enhancement
Authors: Krzysztof Malczewski
Abstract:
PET is a widely applied scanning procedure in medical-imaging-based research. It delivers measurements of functioning in distinct areas of the human brain while the patient is comfortable, conscious and alert. This article presents a new compressed-sensing-based super-resolution algorithm for improving the image resolution of clinical positron emission tomography (PET) scanners. The issue of motion artifacts is a well-known side effect in PET studies. PET images are acquired over a limited period of time; as patients cannot hold their breath during the data gathering, spatial blurring and motion artefacts are the usual result, and these may lead to wrong diagnoses. It is shown that the presented approach improves PET spatial resolution in cases where compressed sensing (CS) sequences are used. Compressed sensing aims at reconstructing signals and images from significantly fewer measurements than were traditionally thought necessary. The application of CS to PET has the potential for significant scan time reductions, with visible benefits for patients and healthcare economics. In this study, the goal is to combine a super-resolution image enhancement algorithm with the CS framework to achieve high-resolution PET output. Both methods emphasize maximizing image sparsity in a known sparse transform domain while maintaining data fidelity.
Keywords: PET, super-resolution, image reconstruction, pattern recognition
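The sparsity-plus-fidelity formulation typically reads min_x ||Ax - y||^2 + lam * ||x||_1, and a standard solver is iterative soft thresholding (ISTA). The sketch below runs a few ISTA iterations on a toy 1D problem; the paper's PET system model and sparse transform differ.

```python
# Hedged sketch of the CS reconstruction idea (sparsity + data fidelity):
# ISTA iterations solving min ||Ax - y||^2 + lam * ||x||_1 on a toy
# 1D problem; the paper's sparse transform and PET system model differ.
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 200, 80, 5                  # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                        # undersampled measurements

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):                  # ISTA: gradient step + soft threshold
    x = x - step * A.T @ (A @ x - y)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```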
Procedia PDF Downloads 377