Search results for: probabilistic classification vector machines

2124 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnosis, it is important to have access to a sufficient number of training images of chest X-rays (CXRs) depicting the disease; however, such images are currently in short supply. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. First, a generator model is created that uses digressive channels to generate CXR images for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining a dense connectivity strategy with instance normalization, making use of their feature extraction capabilities on hazy CXR areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. From 4,062 COVID-19 CXR images, the COVID-19 GAN model successfully produces 8,124 synthetic CXR images, and it outperforms DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the generated CXR images possess the characteristic haziness of the disease, offering a promising approach to the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models yielded higher accuracy rates than on the initial dataset or with other augmentation techniques; among these models, the ImageNet-pretrained model exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method addresses overfitting in disease identification and can enhance identification accuracy effectively.
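
The stabilizing trick named in the abstract is a gradient penalty on the discriminator. A minimal PyTorch sketch of such a penalty (WGAN-GP style, with generic `D`, `real`, `fake` tensors; an illustration of the idea, not the authors' exact implementation):

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    # Interpolate between real and generated CXR batches
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(outputs=d_out, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    # Penalize deviation of the per-sample gradient norm from 1
    norms = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((norms - 1.0) ** 2).mean()

# One discriminator step would then use, e.g.:
# d_loss = D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)
```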

Keywords: classification, deep learning, medical images, CXR, GAN.

Procedia PDF Downloads 96
2123 Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features

Authors: Vesna Kirandziska, Nevena Ackovska, Ana Madevska Bogdanova

Abstract:

The problem of emotion recognition is challenging and still open from the perspective of both intelligent systems and psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and a discussion of the validity and expressiveness of different emotions is presented. A comparison is made between classifiers built from facial data only, from voice data only, and from the combination of both, and the need for a better combination of the information from facial expressions and voice data is argued.
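
A minimal scikit-learn sketch of this three-way comparison, with random arrays standing in for the voice and facial feature matrices (the data and dimensions are invented, not from the paper):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
voice_X = rng.random((120, 20))      # hypothetical voice features per clip
face_X = rng.random((120, 30))       # hypothetical facial features per clip
y = rng.integers(0, 4, 120)          # dummy emotion labels

for name, X in [("voice only", voice_X),
                ("face only", face_X),
                ("combined", np.hstack([voice_X, face_X]))]:  # early fusion
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```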

Keywords: emotion recognition, facial recognition, signal processing, machine learning

Procedia PDF Downloads 316
2122 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Being usable has become a basic product requirement from the consumer's perspective, and a product that fails this requirement ends up not being used. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the product design process, yet the lack of studies on analysis methodologies for the qualitative text data of the usability field limits the potential of these data for more useful applications. At the same time, the rapid development of data analysis fields, such as natural language processing for understanding human language with computers and machine learning for predictive modeling and clustering, makes the analysis of qualitative text data feasible. Therefore, this research studies the capability of text processing algorithms in the analysis of qualitative text data collected from usability activities. It utilizes datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text processing algorithm, includes training the comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the result of the comment-vector clustering. The result shows 'volume and music control button' as the usability feature that matches best with a cluster of comment vectors: the centroid comments of one cluster emphasize button positions, while the centroid comments of the other cluster emphasize button interface issues. When the volume and music control buttons were designed separately, participants experienced less confusion, and thus the comments mentioned only the buttons' positions; when the volume and music control buttons were designed as a single button, participants experienced interface issues with the buttons, such as the operating methods of functions and confusion between the function buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
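
The comment-clustering step can be sketched in scikit-learn as follows, with a few invented comments standing in for the survey data; the pipeline (vectorize, cluster, inspect centroid comments) mirrors the procedure described, not the authors' exact code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

comments = [
    "the volume button is hard to reach behind the neck",
    "volume and music share one confusing button",
    "button position feels awkward when worn",
    "could not tell the play button from the volume control",
]

X = TfidfVectorizer(stop_words="english").fit_transform(comments)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The comment closest to each centroid summarizes that cluster's theme
closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
for c, i in enumerate(closest):
    print(f"cluster {c}: {comments[i]}")
```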

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 285
2121 Better Defined WHO International Classification of Disease Codes for Relapsing Fever Borreliosis and Lyme Disease: Education Aiding Diagnosis and Treatment, Improving the Human Right to Health

Authors: Mualla McManus, Jenna Luche Thaye

Abstract:

World Health Organisation (WHO) International Classification of Disease (ICD) codes were created to define diseases, including infections, in order to guide and educate diagnosticians. Most infectious diseases, such as syphilis, are clearly defined by their ICD-10 codes, which help educate clinicians in diagnosis and treatment globally. However, the current ICD-10 codes for relapsing fever Borreliosis and Lyme disease are less clearly defined and can impede appropriate diagnosis, especially if the clinician is not familiar with the symptoms of these infectious diseases. This is despite the substantial number of scientific articles about relapsing fever and Lyme disease published in peer-reviewed journals. In the USA, an estimated 380,000 people contract Lyme disease annually, more cases than breast cancer and six times the number of HIV/AIDS cases. This represents roughly 0.09% of the USA population; extrapolated to the global population (7 billion), 0.09% equates to about 6.3 million people contracting relapsing fever or Lyme disease. In many regions, the rate of contracting some form of infection from a tick bite may be even higher. Without accurate and appropriate diagnostic codes, physicians are impeded in their ability to properly care for their patients, leaving those patients invisible and marginalized within the medical system and to those guiding public policy. This results in great personal hardship, pain, disability, and expense, and it unnecessarily burdens health care systems, governments, families, and society as a whole. With accurate diagnostic codes in place, robust data can guide medical and public health research and health policy, track mortality, and save health care dollars. Better defined ICD codes are the way forward in educating diagnosticians about relapsing fever and Lyme disease.

Keywords: WHO ICD codes, relapsing fever, Lyme disease, World Health Organisation

Procedia PDF Downloads 193
2120 The Challenge of Navigating Long Tunnels

Authors: Ali Mohammadi

Abstract:

One of the concerns that employers and contractors have in building long tunnels is that, when excavation is complete, the tunnel should break through at the correct designed position. Deviation of the tunnel from its path can impose large costs on both the employer and the contractor: incorrect calculations by the surveying engineer, or a failure by the employer and contractor to give due importance to the surveying team guiding the tunnel, can cause the tunnel to deviate from its path, and this deviation can become a disaster. However, employers can make the right decisions so that the tunnel is guided with the highest precision if they consider a few key points. We investigate two water-transfer tunnels, 12 and 18 kilometers long, excavated by tunnel boring machines: how the contractor's decisions in controlling the 12-kilometer tunnel achieved an accuracy of one centimeter at the connection to the next section of the tunnel, and the reasons for the roughly 20-meter deviation of the axis in the 18-kilometer tunnel. We also review the surveying engineers' calculations in both tunnels, discuss the challenges that arise in these calculations, and show how to solve them. Surveying calculations are the most important part of controlling long tunnels.
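
One computation the keywords point to is reducing measured ground distances to UTM grid distances over a long traverse. A minimal sketch of the combined scale factor, with illustrative values not taken from the paper:

```python
R = 6371000.0    # mean Earth radius (m)
K0 = 0.9996      # UTM scale factor on the central meridian

def combined_scale_factor(easting, height):
    """Grid scale factor times elevation factor at one station."""
    e = easting - 500000.0                    # offset from the central meridian (m)
    k_grid = K0 * (1 + e**2 / (2 * R**2))     # approximate UTM point scale factor
    k_elev = R / (R + height)                 # reduce ground distance to the ellipsoid
    return k_grid * k_elev

csf = combined_scale_factor(easting=700000.0, height=1500.0)
ground = 1000.000                             # measured ground distance (m)
print(f"grid distance: {ground * csf:.3f} m")
# Ignoring a ~140 ppm combined factor accumulates to ~1.7 m over 12 km,
# exactly the kind of error that makes a tunnel miss its breakthrough point.
```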

Keywords: UTM, localization, scale factor, traverse

Procedia PDF Downloads 76
2119 The Evolution of Moral Politics: Analysis on Moral Foundations of Korean Parties

Authors: Changdong Oh

Abstract:

With the arrival of post-industrial society, social scientists have been paying attention to which factors shape the cleavages between political parties. In particular, there is a heated controversy over whether, and how, social and cultural values influence party identities and voting behavior. Drawing on Moral Foundations Theory (MFT), which approached similar issues by considering the effect of five moral foundations on people's political decision-making, this study investigates the role of moral rhetoric in the evolution of Korean political parties. The author collected the official announcements released by the two major parties (Democratic Party of Korea, Saenuri Party) from 2007 to 2016 and analyzed the data using the Word2Vec algorithm and the Moral Foundations Dictionary. The five moral decision modules of MFT, composed of care and fairness (individualistic morality) and loyalty, authority, and sanctity (group-based, Durkheimian morality), can be represented in a vector space constructed from the party announcement data. By comparing the party vectors with the five morality vectors, one can see how the political parties have actively used each of the five moral foundations to express themselves and the opposition. The results report that the conservative party tends to actively draw on collective morality such as loyalty, authority, and purity to differentiate itself; notably, this moral differentiation strategy is prevalent when it criticizes the opposition party. In contrast, the liberal party tends to be concerned with individualistic morality such as fairness. These results indicate that a moral cleavage does exist between parties in South Korea. Furthermore, the individualistic moral gap between the two parties has eased over time, which seems to be due to the conservative party's discussion of economic democratization that emerged after 2012, while the community-related moral gaps have widened. These results imply that past political cleavages related to economic interests are diminishing and being replaced by cultural and social values associated with communitarian morality. However, since the conservative party's differentiation strategy is largely tied to negative campaigns, it is doubtful whether such moral differentiation among political parties can contribute to voters' long-term party identification; further research is needed to determine whether it is sustainable. Despite its limitations, this study makes it possible to track and identify the moral changes of a party system through automated text analysis. More generally, it can contribute to the analysis of various texts associated with moral foundations and to finding distributed representations of moral and ethical values.
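
The core measurement, comparing document vectors with moral-foundation vectors, can be sketched with gensim as follows; the toy corpus and seed words are illustrative, not the study's Korean texts or the Moral Foundations Dictionary:

```python
import numpy as np
from gensim.models import Word2Vec

# Toy corpus of tokenized announcements (the real data are Korean party texts)
sentences = [["fairness", "welfare", "economy", "reform"],
             ["loyalty", "nation", "security", "order"],
             ["authority", "law", "order", "tradition"]]
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, seed=0)

def centroid(words):
    return np.mean([model.wv[w] for w in words if w in model.wv], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

party_vec = centroid(["welfare", "reform"])       # hypothetical party rhetoric
loyalty_vec = centroid(["loyalty", "nation"])     # one foundation's seed words
fairness_vec = centroid(["fairness", "welfare"])
print(cosine(party_vec, loyalty_vec), cosine(party_vec, fairness_vec))
```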

Keywords: moral foundations theory, moral politics, party system, Word2Vec

Procedia PDF Downloads 362
2118 It's About Cortana, Microsoft's Virtual Assistant

Authors: Aya Idriss, Esraa Othman, Lujain Malak

Abstract:

Artificial intelligence (AI) is the emulation of human intelligence processes by machines, particularly computer systems that act logically. Specific applications of AI include natural language processing, speech recognition, and machine vision. Cortana, a virtual assistant, is an example of an AI application. Microsoft made it possible for this app to be accessed not only on laptops and PCs but also on mobile phones, where it can be downloaded and used as a virtual assistant, which was a huge success. Cortana can offer a lot beyond basic commands such as setting alarms and marking the calendar: it can play music and podcasts on the go, manage to-do lists and emails, connect with contacts hands-free when simply told to call somebody, give instant answers, and so on. To perform the study, a questionnaire was sent online to numerous friends and family members, which is critical in evaluating Cortana's recognition capacity, and the majority of the answers were in favor of Cortana's capabilities. The results of the questionnaire assisted us in determining the level of Cortana's skills.

Keywords: artificial intelligence, Cortana, AI, abstract

Procedia PDF Downloads 177
2117 Transient Stability Improvement in Multi-Machine System Using Power System Stabilizer (PSS) and Static Var Compensator (SVC)

Authors: Khoshnaw Khalid Hama Saleh, Ergun Ercelebi

Abstract:

Increasingly complex modern power systems require stability, especially for transient and small disturbances. Transient stability plays a major role in stability during faults and large disturbances. This paper compares a power system stabilizer (PSS) and a static VAR compensator (SVC) for damping oscillations and enhancing transient stability. The effectiveness of a PSS connected to the exciter and/or governor in damping the electromechanical oscillations of an isolated synchronous generator was tested. The SVC device is a member of the shunt FACTS (flexible alternating current transmission system) family utilized in power transmission systems. The designed model was tested on a multi-machine system consisting of four machines and six buses, using MATLAB/Simulink software. The results obtained indicate that the SVC solution performs better than the PSS.
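
For intuition about the damping being compared, a minimal sketch of the classical swing equation with a damping term standing in for stabilizer action (a toy single-machine model under assumed constants, not the paper's four-machine MATLAB/Simulink system):

```python
import numpy as np

H, f0 = 3.5, 50.0                      # inertia constant (s), base frequency (Hz)
Pm, Pmax = 0.8, 1.8                    # mechanical input, max electrical power (pu)

def max_swing(D, t_end=5.0, dt=1e-3, fault_until=0.1):
    delta = np.arcsin(Pm / Pmax)       # pre-fault rotor angle (rad)
    w, peak = 0.0, delta               # speed deviation (pu), largest angle seen
    for k in range(int(t_end / dt)):
        t = k * dt
        Pe = 0.0 if t < fault_until else Pmax * np.sin(delta)  # fault zeroes Pe
        w += dt * (Pm - Pe - D * w) / (2 * H)
        delta += dt * 2 * np.pi * f0 * w
        peak = max(peak, delta)
    return peak

print(max_swing(D=0.0), max_swing(D=2.0))   # added damping reduces the swing
```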

Keywords: FACTS, MATLAB/SIMULINK, multi-machine system, PSS, SVC, transient stability

Procedia PDF Downloads 455
2116 Estimation and Comparison of Delay at Signalized Intersections Based on Existing Methods

Authors: Arpita Saha, Satish Chandra, Indrajit Ghosh

Abstract:

Delay represents the time lost by a traveler while crossing an intersection, and the efficiency of traffic operation at signalized intersections is assessed in terms of the delay caused to an individual vehicle. The Highway Capacity Manual (HCM) method and Webster's method are the most widely used delay estimation methods in India. However, traffic in India is highly heterogeneous in nature, with extremely poor lane discipline. Therefore, to explore the best delay estimation technique for Indian conditions, a comparison was made. In this study, seven signalized intersections from three different cities were chosen, and data were collected during both morning and evening peak hours. Only undersaturated cycles were considered. Delay was estimated based on the field data: with the help of Simpson's 1/3 rule, the delay of undersaturated cycles was estimated by measuring the area under the curve of queue length versus cycle time. The field-observed delay was then compared with the delay estimated using the HCM, Webster, probabilistic, Taylor's expansion, and regression methods. The drawbacks of the existing delay estimation methods under Indian heterogeneous traffic conditions were identified, and the best method was proposed. It was observed that direct estimation of delay using field-measured data is more accurate than the existing conventional and modified methods.
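
The field-delay computation reduces to Simpson's 1/3 rule over the queue-length curve: the area under queue length versus time gives total delay in vehicle-seconds. A minimal sketch with invented queue samples:

```python
def simpson_third(y, h):
    """Integrate equally spaced samples y (even number of intervals) with step h."""
    n = len(y) - 1
    assert n % 2 == 0, "Simpson's 1/3 rule needs an even number of intervals"
    return (h / 3.0) * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))

# Hypothetical queue lengths (vehicles) sampled every 5 s over one cycle
queue = [0, 3, 7, 10, 12, 11, 8, 4, 0]
print(simpson_third(queue, h=5.0), "veh-s of delay in this cycle")
```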

Keywords: delay estimation technique, field delay, heterogeneous traffic, signalised intersection

Procedia PDF Downloads 301
2115 Landslide and Liquefaction Vulnerability Analysis Using Risk Assessment Analysis and Analytic Hierarchy Process Implication: Suitability of the New Capital of the Republic of Indonesia on Borneo Island

Authors: Rifaldy, Misbahudin, Khalid Rizky, Ricky Aryanto, M. Alfiyan Bagus, Fahri Septianto, Firman Najib Wibisana, Excobar Arman

Abstract:

Indonesia faces a high level of disaster risk because it lies on the Ring of Fire, where several regions sit at the meeting point of three major tectonic plates. Disaster analysis must therefore be carried out continually to assess the potential disasters that might occur, in this research landslides and liquefaction. This research analyzes areas that are vulnerable to landslide and liquefaction hazards and relates the results to the assessment of moving the new capital of the Republic of Indonesia to the island of Kalimantan, with a total area of 612,267.22 km². The method uses the Analytic Hierarchy Process, with consistency ratio testing, as a way of decomposing a complex and unstructured problem into several parameters by assigning values. The parameters used in this analysis are slope, land cover, lithology distribution, wetness index, earthquake data, and peak ground acceleration. A weighted overlay of all these parameters was carried out using the percentage weights obtained from the Analytic Hierarchy Process, with their accuracy confirmed by the consistency ratio, yielding the percentage of the area in each vulnerability class. Based on the analysis, the vulnerability classes range from high to very low: 918.40 km² (0.15%) highly vulnerable, 127,045.45 km² (20.75%) medium, 346,175.89 km² (56.54%) low, and 138,127.48 km² (22.56%) very low. This research is expected to map landslide and liquefaction hazards on the island of Kalimantan and to inform the assessment of the suitability of developing the new capital of the Republic of Indonesia there. It can also serve as input for any region analyzing landslide and liquefaction vulnerability or the suitability of regional development.
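
The AHP step, deriving parameter weights from a pairwise comparison matrix and checking them with the consistency ratio, can be sketched as follows (a 3x3 toy matrix; the study's actual comparisons cover six parameters):

```python
import numpy as np

# Hypothetical pairwise comparisons for (slope, land cover, lithology)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # weights for the weighted overlay

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)        # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
print("weights:", np.round(w, 3), "CR =", round(CI / RI, 3))  # accept if CR < 0.10
```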

Keywords: analytic hierarchy process, Borneo Island, landslide and liquefaction, vulnerability analysis

Procedia PDF Downloads 177
2114 A Comparative Study of Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV) for Airflow Measurement

Authors: Sijie Fu, Pascal-Henry Biwolé, Christian Mathis

Abstract:

Among modern airflow measurement methods, particle image velocimetry (PIV) and particle tracking velocimetry (PTV), as visual and non-intrusive measurement techniques, are playing an increasingly important role. This paper presents a comparative experimental study of airflow measurement employing both techniques under the same conditions. Velocity vector fields, velocity contour fields, vorticity profiles, and turbulence profiles are selected as the comparison indexes. The results show that both PIV and PTV perform satisfactorily for airflow measurement, but some differences exist between the two techniques, suggesting that the choice of measurement technique should be based on comprehensive consideration.
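
The core of PIV is locating the cross-correlation peak between interrogation windows of two successive frames. A minimal FFT-based sketch with a synthetic particle pattern (an illustration of the principle, not the study's processing chain):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Pixel displacement from the FFT cross-correlation peak of two windows."""
    a, b = win_a - win_a.mean(), win_b - win_b.mean()
    corr = np.fft.irfft2(np.fft.rfft2(a).conj() * np.fft.rfft2(b), s=a.shape)
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2   # (dy, dx)

rng = np.random.default_rng(0)
frame = rng.random((32, 32))                 # synthetic particle image
shifted = np.roll(frame, (2, 3), axis=(0, 1))
print(piv_displacement(frame, shifted))      # -> [2 3]
```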

Keywords: airflow measurement, comparison, PIV, PTV

Procedia PDF Downloads 424
2113 Studies on Influence of Rub on Vibration Signature of Rotating Machines

Authors: K. N. Umesh, K. S. Srinivasan

Abstract:

The influence of rotor rub was studied for light-rub and heavy-rub conditions, and the investigations were carried out both below and above the critical speed. The time-domain waveform revealed truncation during rubbing, the amount of truncation indicating the severity of the rub. The orbits for light rub showed a single loop, whereas heavy rub produced multi-looped orbits. In the heavy-rub condition above the critical speed, both sub-harmonics and super-harmonics are exhibited, and the orbit precesses in a direction opposite to the direction of rotation of the rotor. When the rubbing was created above the critical speed, the orbit took a figure-of-eight shape, indicating rotor instability. Super-harmonics and sub-harmonics of the vibration signals were observed for light-rub and heavy-rub conditions and for speeds above critical.

Keywords: rotor rub, orbital analysis, frequency analysis, vibration signatures

Procedia PDF Downloads 313
2112 A Convolutional Neural Network Approach to Predict Pes Planus Using Plantar Pressure Mapping Images

Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor

Abstract:

Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in the plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution, and accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in predicting which patients have foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using the static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination, and all subjects (with and without pes planus) were then evaluated for static plantar pressure distribution. Patients diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized, and the network was trained on the plantar pressure mapping images. Findings: Of the 895 images, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14% and the test accuracy was 72.09%. Conclusion: This model can be easily used by patients with pes planus and by doctors to predict the pes planus classification and to prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared to reach higher accuracy.
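
A minimal PyTorch sketch of the kind of binary CNN described, assuming single-channel 64x64 pressure maps (the input size and architecture are illustrative; the paper does not specify them here):

```python
import torch
import torch.nn as nn

class PesPlanusNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # pes planus vs. normal
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PesPlanusNet()
logits = model(torch.randn(4, 1, 64, 64))          # a dummy batch of 4 maps
print(logits.shape)                                 # -> torch.Size([4, 2])
```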

Keywords: foot disorder, machine learning, neural network, pes planus

Procedia PDF Downloads 361
2111 Remote Sensing and Geographic Information Systems for Identifying Water Catchments Areas in the Northwest Coast of Egypt for Sustainable Agricultural Development

Authors: Mohamed Aboelghar, Ayman Abou Hadid, Usama Albehairy, Asmaa Khater

Abstract:

Sustainable agricultural development of the desert areas of Egypt under the pressure of irrigation water scarcity is a significant national challenge. Existing water harvesting techniques on the northwest coast of Egypt do not ensure the optimal use of rainfall for agricultural purposes. Basin-scale hydrology potentialities were studied to investigate how the available annual rainfall could be used to increase agricultural production. All data related to agricultural production were included as geospatial layers. Thematic classification of Sentinel-2 imagery was carried out to produce the land cover and crop maps following the FAO system of land cover classification. Contour lines and spot-height points were used to create a digital elevation model (DEM), which was then used to delineate basins, sub-basins, and water outlet points using the Soil and Water Assessment Tool (ArcSWAT). The main soil units of the study area were identified from Land Master Plan maps, and climatic data were collected from official sources. The amounts of precipitation, surface runoff, and potential and actual evapotranspiration for the years 2004 to 2017 are presented as ArcSWAT results. The land cover map showed that the two tree crops (olive and fig) cover 195.8 km², while herbaceous crops (barley and wheat) cover 154 km². The maximum elevation was 250 meters above sea level, while the lowest was 3 meters below sea level. The study area receives a highly variable amount of precipitation; however, the existing water harvesting methods are inadequate for storing this water for agricultural use.
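
Basin delineation from the DEM starts with per-cell flow directions. A minimal numpy sketch of the classic D8 rule (each cell drains to its steepest downslope neighbor), illustrating the idea rather than the ArcSWAT implementation:

```python
import numpy as np

def d8_flow_direction(dem):
    """Return, per interior cell, the (dy, dx) of its steepest downslope neighbor."""
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = dem.shape
    flow = np.zeros((rows, cols, 2), dtype=int)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            drops = [(dem[i, j] - dem[i + dy, j + dx]) / np.hypot(dy, dx)
                     for dy, dx in nbrs]
            k = int(np.argmax(drops))
            if drops[k] > 0:                  # otherwise a pit: no outflow
                flow[i, j] = nbrs[k]
    return flow

dem = np.array([[5., 5., 5.], [5., 4., 2.], [5., 3., 1.]])
print(d8_flow_direction(dem)[1, 1])           # center drains to the low corner -> [1 1]
```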

Keywords: water catchements, remote sensing, GIS, sustainable agricultural development

Procedia PDF Downloads 114
2110 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and more than 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the imaging technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign from malignant thyroid nodules in ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Materials and Methods: The thyroid ultrasound image database consists of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded into MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three schemes: N1, default or original gray levels; N2, dynamic intensity limited to μ ± 3σ; and N3, intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI were computed for each normalization scheme using the well-known statistical methods employed in MaZda. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis, so the features were reduced to the 10 best and most effective per normalization scheme, based on the maximum Fisher coefficient and on the minimum probability of classification error combined with average correlation coefficients (POE+ACC). These features were analyzed, in both standardized (S) and non-standardized (NS) states, with principal component analysis (PCA), linear discriminant analysis (LDA), and non-linear discriminant analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors, and confusion matrix and receiver operating characteristic (ROC) curve analyses were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on the classification results. The feature subset selected under 1%-99% normalization with POE+ACC reduction and NDA yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
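
The classification stage, standardization, dimensionality reduction, and a 1-NN classifier evaluated with ROC analysis, can be sketched in scikit-learn (dummy feature data; the study's features come from MaZda):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Hypothetical texture feature matrix: 140 ROIs (2 per patient) x 10 features
rng = np.random.default_rng(0)
X = rng.random((140, 10))
y = rng.integers(0, 2, 140)          # 0 = benign, 1 = malignant (dummy labels)

clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    KNeighborsClassifier(n_neighbors=1))
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("Az =", roc_auc_score(y, proba))
```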

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 279
2109 Separate Powers Control Structure of DFIG Based on Fractional Regulator Fed by Multilevel Inverters DC Bus Voltages of a Photovoltaic System

Authors: S. Ghoudelbourk, A. Omeiri, D. Dib, H. Cheghib

Abstract:

This paper shows that the performance of auto-adjustable electric machines can be improved if a fractional dynamic is considered in the control algorithm. This structure is of particular interest for the separate control of the active and reactive power of the doubly fed induction generator (DFIG) in a wind power conversion chain, and fractional regulators are used in the power regulation loops. Since the rotor of the DFIG is usually supplied by converters through controlled rectifiers, the whole system strongly pollutes the line currents, which can be harmful for the connected loads and sensitive equipment nearby. The solution adopted to overcome these problems is to supply the DFIG rotor from multilevel inverters fed by PV, which improves the THD. The structure of the adopted control is tested using MATLAB/Simulink, and the results are presented and analyzed for variable wind.
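
What a fractional-order term looks like numerically can be sketched with a truncated Grunwald-Letnikov approximation (an illustration of the fractional PI idea under assumed gains, not the authors' regulator):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_j = (-1)^j C(alpha, j)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

def fractional_integral(e, dt, lam=0.5):
    """Approximate D^{-lam} e at the last sample (order 0 < lam < 1)."""
    w = gl_weights(-lam, len(e))          # negative order = integration
    return dt**lam * np.dot(w, e[::-1])   # newest sample gets weight w_0

# Fractional PI^lambda control action on an error history `e`
e = np.ones(200)                          # hypothetical constant error
dt, Kp, Ki = 1e-3, 1.0, 50.0
u = Kp * e[-1] + Ki * fractional_integral(e, dt, lam=0.5)
print(u)
```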

Keywords: DFIG, fractional regulator, multilevel inverters, PV

Procedia PDF Downloads 401
2108 Fault Diagnosis of Squirrel-Cage Induction Motor by a Neural Network Multi-Models

Authors: Yahia Kourd, N. Guersi, D. Lefebvre

Abstract:

In this paper, we study fault diagnosis in squirrel-cage induction motors using MLP neural networks. We use neural models of the healthy and faulty behavior in order to detect and isolate faults in the machine. In the first part of this work, we created a neural model of the healthy state using MATLAB and a motor located in the LGEB laboratory, by acquiring input and output data from this engine. We then detected faults in the machine by residual generation. Since these residuals are not sufficient to isolate the existing faults, we proposed additional neural networks to represent the faulty behaviors. From the analysis of these residuals and the choice of a threshold, we propose a method capable of performing the detection and diagnosis of some faults in asynchronous machines with a squirrel-cage rotor.
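
A minimal sketch of residual-based detection with a healthy-behavior model (a scikit-learn MLP on synthetic data; the paper's models are built in MATLAB from LGEB motor measurements):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical healthy-motor data: inputs (e.g. voltage, load) -> output (e.g. current)
X_healthy = rng.random((500, 2))
y_healthy = X_healthy @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal(500)

healthy_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X_healthy, y_healthy)

# Residual = measured output - healthy-model prediction
threshold = 3 * np.std(y_healthy - healthy_model.predict(X_healthy))

X_new = rng.random((10, 2))
y_faulty = X_new @ np.array([2.0, -1.0]) + 0.5     # hypothetical fault offset
residual = np.abs(y_faulty - healthy_model.predict(X_new))
print("fault detected:", bool((residual > threshold).any()))
```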

Keywords: faults diagnosis, neural networks, multi-models, squirrel-cage induction motor

Procedia PDF Downloads 636
2107 An Efficient Automated Radiation Measuring System for Plasma Monopole Antenna

Authors: Gurkirandeep Kaur, Rana Pratap Yadav

Abstract:

This experimental study examines the radiation characteristics of different plasma structures of a surface-wave-driven plasma antenna using an automated measuring system. A 30 cm long plasma column of argon gas with a diameter of 3 cm is excited by a surface wave discharge mechanism operating at 13.56 MHz, with RF power levels up to 100 W and gas pressures between 0.01 and 0.05 mb. The study reveals that a single-structure plasma monopole can be modified into an array of plasma antenna elements by forming multiple striations or plasma blobs inside the discharge tube, achieved by altering plasma properties such as the working pressure, operating frequency, input RF power, and discharge tube dimensions (length, radius, and thickness). The plasma length, electron density, and conductivity are functions of the operating plasma parameters and are controlled by changing the working pressure and input power. To investigate the antenna radiation efficiency in the far-field region, an automated radiation measuring system has been fabricated and is presented in detail. The developed system combines a controller, DC servo motors, a vector network analyzer (VNA), and a computing device to evaluate the radiation intensity, directivity, gain, and efficiency of the plasma antenna. The controller drives multiple motors that move aluminum shafts in both the elevation and azimuthal planes, while the radiation from the plasma monopole antenna is measured by the VNA, which is in turn connected to the computing device to display the radiation as polar plots. The radiation characteristics of both continuous and array plasma monopole antennas have been studied for various plasma parameters. The experimental results clearly indicate that the plasma antenna is as efficient as a metallic antenna. The radiation from the plasma monopole antenna is significantly influenced by the plasma properties, which provide a wide range of control over the radiation pattern: desired parameters such as beamwidth, direction of radiation, radiation intensity, and antenna efficiency can all be achieved in a single monopole. This wide selectivity of the radiation pattern can meet the demand for the wider bandwidths needed for high data speeds in communication systems. Moreover, the developed system provides an efficient and cost-effective solution for measuring the far-field radiation pattern of any kind of antenna system.
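
Once the pattern is sampled over the far-field sphere, directivity follows from numerical integration. A minimal sketch using an ideal short-dipole pattern in place of measured VNA data:

```python
import numpy as np

theta = np.linspace(0, np.pi, 181)               # elevation samples (rad)
phi = np.linspace(0, 2 * np.pi, 361)             # azimuth samples (rad)
T, P = np.meshgrid(theta, phi, indexing="ij")
U = np.sin(T) ** 2                               # e.g. ideal short-dipole pattern

# Directivity D = 4*pi*U_max / integral of U over the full sphere
P_rad = np.trapz(np.trapz(U * np.sin(T), phi, axis=1), theta)
D = 4 * np.pi * U.max() / P_rad
print(f"directivity ~ {D:.2f} ({10 * np.log10(D):.2f} dBi)")  # ~1.5, 1.76 dBi
```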

Keywords: antenna radiation characteristics, dynamically reconfigurable, plasma antenna, plasma column, plasma striations, surface wave

Procedia PDF Downloads 119
2106 Improving Overall Equipment Effectiveness of CNC-VMC by Implementing Kobetsu Kaizen

Authors: Nakul Agrawal, Y. M. Puri

Abstract:

TPM methodology is a proven approach to increasing the overall equipment effectiveness (OEE) of machines. OEE is an established method to monitor and improve the effectiveness of a manufacturing process; it is the product of the equipment availability, performance efficiency, and quality performance of the manufacturing operations. This paper presents a project for improving the OEE of a CNC-VMC in a manufacturing industry with the help of the TPM tools Kaizen and autonomous maintenance. The aim is to enhance OEE by minimizing breakdowns and rework and by increasing availability, performance, and quality. The OEE of the bottleneck machines calculated over 4 months was a low 53.3%. Root cause analysis (RCA) tools such as the fishbone diagram and Pareto chart were used to determine the reasons behind the low OEE, while why-why analysis was used to determine the underlying causes. Kaizen and autonomous maintenance were then effectively implemented on the CNC-VMC, eliminating the causes of breakdowns and preventing their recurrence. The results obtained from this approach show that the OEE of the CNC-VMC improved from 53.3% to 73.7%, which saves an average sum of Rs. 3,19,000.
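
The OEE arithmetic itself is compact enough to sketch directly (availability x performance x quality; the shift figures below are invented, not the project's data):

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """OEE = availability x performance x quality."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Hypothetical one-shift figures for a CNC-VMC (minutes and piece counts)
print(f"OEE = {oee(480, 90, 2.0, 170, 160):.1%}")
```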

Keywords: OEE, TPM, Kaizen, CNC-VMC, why-why analysis, RCA

Procedia PDF Downloads 394
2105 Modeling and Simulation of Flow Shop Scheduling Problem through Petri Net Tools

Authors: Joselito Medina Marin, Norberto Hernández Romero, Juan Carlos Seck Tuoh Mora, Erick S. Martinez Gomez

Abstract:

The Flow Shop Scheduling Problem (FSSP) is a typical problem faced by production planning managers in Flexible Manufacturing Systems (FMS). It consists of finding the optimal schedule for a set of jobs processed on a set of machines or shared resources, where all jobs are processed in the same machine sequence. As in all scheduling problems, the makespan can be obtained by drawing the Gantt chart according to the operation order, among other alternatives. In this way, an FMS presenting the FSSP can be modeled by Petri nets (PNs), a powerful tool that has been used to model and analyze discrete-event systems; the makespan can then be obtained by simulating the PN through the token-game animation and the incidence matrix. In this work, we present an adaptive PN to obtain the makespan of the FSSP by applying PN analytical tools.
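
For reference, the makespan that the token-game simulation recovers can also be computed from the classic flow shop completion-time recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][j] for a fixed job order; a minimal sketch with made-up processing times:

```python
def flowshop_makespan(p):
    """p[i][j]: processing time of job i (in sequence order) on machine j."""
    n, m = len(p), len(p[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            prev_job = C[i - 1][j] if i > 0 else 0.0   # machine j frees up
            prev_mach = C[i][j - 1] if j > 0 else 0.0  # job i leaves machine j-1
            C[i][j] = max(prev_job, prev_mach) + p[i][j]
    return C[-1][-1]

# Three jobs on two machines, hypothetical times
print(flowshop_makespan([[3, 2], [1, 4], [2, 2]]))  # -> 11
```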

Keywords: flow-shop scheduling problem, makespan, Petri nets, state equation

Procedia PDF Downloads 298
2104 A Hybrid Distributed Algorithm for Multi-Objective Dynamic Flexible Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid distributed algorithm is suggested for the multi-objective dynamic flexible job shop scheduling problem. The proposed algorithm is high-level, in that several algorithms search the space on different machines simultaneously; it is also a hybrid algorithm that takes advantage of artificial intelligence, evolutionary, and optimization methods. Distribution is done at different levels, and new approaches are used in the design of the algorithm; the Apache Spark and Hadoop frameworks are used for the distribution. The Pareto optimality approach is used to solve the multi-objective benchmarks. The suggested algorithm, which is able to solve large-size problems in short times, has been compared with successful algorithms from the literature, and the results prove the high speed and efficiency of the algorithm.
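
The Pareto-optimality bookkeeping at the heart of the multi-objective part can be sketched in a few lines (the objective pairs below are invented):

```python
def dominates(a, b):
    """a dominates b when it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (makespan, total tardiness) pairs from the parallel searchers
print(pareto_front([(10, 7), (8, 9), (12, 5), (11, 8), (9, 9)]))
# -> [(10, 7), (8, 9), (12, 5)]
```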

Keywords: distributed algorithms, Apache Spark, Hadoop, flexible dynamic job shop scheduling, multi-objective optimization

Procedia PDF Downloads 354
2103 Heuristic for Scheduling Correlated Parallel Machine to Minimize Maximum Lateness and Total Weighted Completion Time

Authors: Yang-Kuei Lin, Yun-Xi Zhang

Abstract:

This research focuses on the bicriteria correlated parallel-machine scheduling problem. The two objective functions considered are minimizing the maximum lateness and the total weighted completion time. We first present a mixed integer programming (MIP) model that can find the entire efficient frontier for the studied problem. Next, we propose a bicriteria heuristic that finds non-dominated solutions for the studied problem. The performance of the proposed bicriteria heuristic is compared with the efficient frontier generated by solving the MIP model. Computational results indicate that the proposed bicriteria heuristic solves the problem efficiently and finds a set of diverse solutions that are uniformly distributed along the efficient frontier.
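
A minimal sketch of the evaluation step any such heuristic needs, computing both criteria for a candidate assignment on correlated (speed-scaled) machines, with invented jobs given as (processing time, due date, weight):

```python
def evaluate(machines, speeds):
    """machines: per machine, a list of jobs (p, d, w) in processing order."""
    Lmax, twc = float("-inf"), 0.0
    for jobs, s in zip(machines, speeds):
        t = 0.0
        for p, d, w in jobs:
            t += p / s                      # completion time on this machine
            Lmax = max(Lmax, t - d)         # lateness
            twc += w * t                    # weighted completion time
    return Lmax, twc

# Two machines with speeds 1.0 and 1.5; jobs sequenced by due date (EDD)
machines = [[(3, 4, 1), (2, 6, 2)], [(6, 5, 1), (3, 9, 3)]]
print(evaluate(machines, [1.0, 1.5]))       # -> (maximum lateness, total w*C)
```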

Keywords: bicriteria, correlated parallel machines, heuristic, scheduling

Procedia PDF Downloads 141
2102 Speech Emotion Recognition with Bi-GRU and Self-Attention-Based Feature Representation

Authors: Bubai Maji, Monorama Swain

Abstract:

Speech is considered an essential and most natural medium for the interaction between machines and humans. However, extracting effective features for speech emotion recognition (SER) remains challenging. Existing studies capture temporal information, but high-level temporal-feature learning is yet to be investigated. In this paper, we present an efficient novel method that uses a self-attention (SA) mechanism in combination with a convolutional neural network (CNN) and a bi-directional gated recurrent unit (Bi-GRU) network to learn high-level temporal features. To further enhance the representation of the high-level temporal features, we integrate the Bi-GRU output with learnable weights through SA, improving performance. We evaluate our proposed method on our created SITB-OSED database and on IEMOCAP, and the experimental results achieve state-of-the-art performance on both databases.
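
A minimal PyTorch sketch of a Bi-GRU with self-attention pooling over frame-level features (the dimensions are assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    def __init__(self, feat_dim=40, hidden=64, n_emotions=4):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # learnable attention scores
        self.out = nn.Linear(2 * hidden, n_emotions)

    def forward(self, x):                           # x: (batch, frames, feat_dim)
        h, _ = self.gru(x)                          # (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # weights over time steps
        context = (w * h).sum(dim=1)                # attention-pooled utterance vector
        return self.out(context)

model = BiGRUAttention()
logits = model(torch.randn(8, 100, 40))    # 8 utterances, 100 frames of 40-d features
print(logits.shape)                         # -> torch.Size([8, 4])
```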

Keywords: Bi-GRU, 1D-CNNs, self-attention, speech emotion recognition

Procedia PDF Downloads 113
2101 Theoretical BER Analysis of MPSK Signals Based on the Signal Space

Authors: Jing Qing-feng, Liu Danmei

Abstract:

Based on optimum detection, signal projection, and the maximum a posteriori (MAP) rule, Proakis deduced the theoretical BER equations of Gray-coded MPSK signals. Proakis analyzed the theoretical BER equations mainly through the projection of signals, which is difficult to understand. This article solves the same problem based on the signal space, which makes explicit the vector relations among the transmitted signals, the received signals, and the noise. A more explicit and easily derived process is illustrated here based on the signal space, which clearly shows the relations among the signals and the noise. This kind of derivation has an unambiguous geometric meaning, and it explains the connection between the generation and the calculation of the BER at the vector level.
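
For context, the standard high-SNR result that such signal-space derivations recover for Gray-coded MPSK (quoted here as a known approximation, not reproduced from the article):

```latex
P_s \approx 2\,Q\!\left(\sqrt{\frac{2E_s}{N_0}}\,\sin\frac{\pi}{M}\right),
\qquad
P_b \approx \frac{P_s}{\log_2 M},
```

where Q(x) is the Gaussian tail function, E_s the symbol energy, and N_0 the noise power spectral density.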

Keywords: MPSK, MAP, signal space, BER

Procedia PDF Downloads 346
2100 Evaluation of Environmental, Technical, and Economic Indicators of a Fused Deposition Modeling Process

Authors: M. Yosofi, S. Ezeddini, A. Ollivier, V. Lavaste, C. Mayousse

Abstract:

Additive manufacturing processes have developed significantly across a wide range of industries, and their application has progressed from rapid prototyping to the production of end-use products. However, their environmental impact is still a rather open question. In order to support the growth of this technology in the industrial sector, environmental aspects should be considered, and predictive models may help monitor and reduce the environmental footprint of the processes. This work presents predictive models based on a previously developed methodology for environmental impact evaluation, combined with a technical and economic assessment, applied here to the fused deposition modeling process. First, we present the predictive models for different types of machines; then, we present a decision-making tool designed to identify the optimum manufacturing strategy with regard to technical, economic, and environmental criteria.

Keywords: additive manufacturing, decision-making, environmental impact, predictive models

Procedia PDF Downloads 131
2099 A Secure System for Handling Information from Heterogeneous Sources

Authors: Shoohira Aftab, Hammad Afzal

Abstract:

Information integration is a well-known procedure for providing a consolidated view of sets of heterogeneous information sources. It not only enables better statistical analysis of information but also allows users to query without any knowledge of the underlying heterogeneous information sources. The problem of providing a consolidated view of information can be handled using semantic data (information stored in such a way that it is understandable by machines and can be integrated without manual human intervention). However, integrating information using semantic web technology without any access management enforced increases privacy and confidentiality concerns. In this research, we have designed and developed a framework that allows information from heterogeneous formats to be consolidated, thus resolving the issue of interoperability, and we have devised an access control system for defining explicit privacy constraints. We designed and applied our framework to both semantic and non-semantic data from heterogeneous resources, and our approach is validated using scenario-based testing.
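
A minimal rdflib sketch of the idea, consolidating two sources as RDF triples and filtering query results through an explicit per-role policy (the namespace, policy format, and data are all invented for illustration):

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")              # hypothetical namespace

# Consolidate two heterogeneous sources into one RDF graph
g = Graph()
g.add((EX.alice, EX.worksFor, EX.acme))            # e.g. from a CSV source
g.add((EX.alice, EX.salary, Literal(50000)))       # e.g. from a database source

# Toy access-control layer: predicates each role may read
POLICY = {"analyst": {EX.worksFor}, "hr": {EX.worksFor, EX.salary}}

def query(role, subject):
    return [(p, o) for p, o in g.predicate_objects(subject)
            if p in POLICY.get(role, set())]

print(query("analyst", EX.alice))   # salary triple filtered out
print(query("hr", EX.alice))        # full view
```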

Keywords: information integration, semantic data, interoperability, security, access control system

Procedia PDF Downloads 357
2098 Active Disturbance Rejection Control for Wind System Based on a DFIG

Authors: R. Chakib, A. Essadki, M. Cherkaoui

Abstract:

This paper proposes the study of a robust control for the doubly fed induction generator (DFIG) used in wind energy production. The proposed control is based on linear active disturbance rejection control (ADRC) and is applied to the control of the DFIG rotor currents, the DC bus voltage, and the active and reactive power exchanged between the DFIG and the network. The system under study and the proposed control are simulated using MATLAB/SIMULINK.
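
The heart of linear ADRC is an extended state observer that estimates and cancels the "total disturbance". A minimal first-order sketch under assumed gains (a toy loop, not the paper's DFIG controllers):

```python
# Linear ADRC for a first-order plant y' = f(t) + b0*u: a 2nd-order extended
# state observer estimates y (z1) and the total disturbance f (z2).
b0, wo, wc = 10.0, 60.0, 15.0          # plant gain, observer/controller bandwidths
l1, l2 = 2 * wo, wo**2                 # observer gains (poles at -wo)
dt, z1, z2 = 1e-4, 0.0, 0.0            # step, state estimate, disturbance estimate

def adrc_step(y, r):
    """One control update: cancel the estimated disturbance, then P control."""
    global z1, z2
    u = (wc * (r - z1) - z2) / b0      # disturbance rejection + proportional term
    e = y - z1
    z1 += dt * (z2 + b0 * u + l1 * e)  # observer prediction + correction
    z2 += dt * (l2 * e)
    return u

# Hypothetical use against a plant the controller never sees exactly:
y, r = 0.0, 1.0
for k in range(20000):                 # 2 s of simulated time
    u = adrc_step(y, r)
    y += dt * (b0 * u - 5.0 * y + 2.0) # unknown dynamics act as the disturbance
print(round(y, 3))                     # settles at the reference r = 1.0
```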

Keywords: doubly fed induction generator (DFIG), active disturbance rejection control (ADRC), vector control, MPPT, extended state observer, back-to-back converter, wind turbine

Procedia PDF Downloads 488
2097 Evaluating Forecasting Strategies for Day-Ahead Electricity Prices: Insights From the Russia-Ukraine Crisis

Authors: Alexandra Papagianni, George Filis, Panagiotis Papadopoulos

Abstract:

The liberalization of the energy market and the increasing penetration of fluctuating renewables (e.g., wind and solar power) have heightened the importance of the spot market for ensuring efficient electricity supply. This is further emphasized by the EU's goal of achieving net-zero emissions by 2050. The day-ahead market (DAM) plays a key role in European energy trading, accounting for 80-90% of spot transactions and providing critical insights for next-day pricing. Therefore, short-term electricity price forecasting (EPF) within the DAM is crucial for market participants to make informed decisions and improve their market positioning. Existing literature highlights out-of-sample performance as a key factor in assessing EPF accuracy, with influencing factors such as predictors, forecast horizon, model selection, and strategy. Several studies indicate that electricity demand is a primary price determinant, while renewable energy sources (RES) like wind and solar significantly impact price dynamics, often lowering prices. Additionally, incorporating data from neighboring countries, due to market coupling, further improves forecast accuracy. Most studies predict up to 24 steps ahead using hourly data, while some extend forecasts using higher-frequency data (e.g., half-hourly or quarter-hourly). Short-term EPF methods fall into two main categories: statistical and computational intelligence (CI) methods, with hybrid models combining both. While many studies use advanced statistical methods, particularly through different versions of traditional AR-type models, others apply computational techniques such as artificial neural networks (ANNs) and support vector machines (SVMs). Recent research combines multiple methods to enhance forecasting performance. Despite extensive research on EPF accuracy, a gap remains in understanding how the forecasting strategy affects prediction outcomes: iterated strategies are commonly used, but they are often chosen without justification. This paper contributes by examining whether the choice of forecasting strategy impacts the quality of day-ahead price predictions, especially for multi-step forecasts. We evaluate both iterated and direct methods, exploring alternative ways of conducting iterated forecasts on benchmark and state-of-the-art forecasting frameworks. The goal is to assess whether these factors should be considered by end-users to improve forecast quality. We focus on the Greek DAM using data from July 1, 2021, to March 31, 2022. This period is chosen due to significant price volatility in Greece, driven by its dependence on natural gas and limited interconnection capacity with larger European grids. The analysis covers two phases: pre-conflict (January 1, 2022, to February 23, 2022) and post-conflict (February 24, 2022, to March 31, 2022), following the Russia-Ukraine conflict that initiated an energy crisis. We use the mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (sMAPE) for evaluation, as well as the direction of change (DoC) measure to assess the accuracy of price movement predictions. Our findings suggest that forecasters should evaluate all strategies across different horizons and models: different strategies may be required for different horizons to optimize both accuracy and directional predictions, ensuring more reliable forecasts.
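
The iterated-versus-direct distinction can be sketched compactly: one 1-step model fed its own predictions versus a separate model per horizon (a toy autoregression on synthetic prices, not the paper's forecasting frameworks):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
prices = np.sin(np.arange(400) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(400)

LAGS, H = 24, 24    # use the last 24 hours to forecast the next 24 steps

def embed(y, lags, lead):
    """Lagged design matrix and lead-step-ahead targets."""
    X = np.array([y[t - lags:t] for t in range(lags, len(y) - lead + 1)])
    return X, y[lags + lead - 1:]

# Direct strategy: one model per horizon h
direct = []
for h in range(1, H + 1):
    X, t = embed(prices, LAGS, h)
    direct.append(LinearRegression().fit(X[:-1], t[:-1]).predict(X[-1:])[0])

# Iterated (recursive) strategy: one 1-step model fed its own predictions
X1, t1 = embed(prices, LAGS, 1)
m1 = LinearRegression().fit(X1[:-1], t1[:-1])
window, iterated = list(prices[-LAGS:]), []
for _ in range(H):
    yhat = m1.predict(np.array(window[-LAGS:])[None])[0]
    iterated.append(yhat)
    window.append(yhat)

print(np.round(direct[:3], 2), np.round(iterated[:3], 2))
```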

Keywords: short-term electricity price forecast, forecast strategies, forecast horizons, recursive strategy, direct strategy

Procedia PDF Downloads 8
2096 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is an energy-consumption management process aimed at energy savings and energy efficiency. Non-intrusive load monitoring (NILM) is one load monitoring method used for disaggregation purposes: it identifies individual appliances by analyzing the whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of the household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features; the extracted time-domain features are used for tuning general appliance models in the identification and classification steps. We use unsupervised algorithms such as dynamic time warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature; LPG is a numerical tool that simulates the behavior of the occupants of a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both the simulated LPG data and the real Reference Energy Disaggregation Dataset (REDD), computing confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
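
The DTW matching used in the identification step can be sketched directly (the classic dynamic-programming distance; the power profiles below are invented 1/60 Hz signatures):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D power profiles."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical 1/60 Hz power signatures (W): a fridge cycle vs. a shifted copy
fridge = np.array([0, 0, 120, 125, 122, 0, 0], dtype=float)
shifted = np.array([0, 120, 124, 121, 0, 0, 0], dtype=float)
print(dtw_distance(fridge, shifted))     # small despite the misalignment
```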

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 78
2095 Design and Performance of a Large Diameter Shaft in Old Alluvium

Authors: Tamilmani Thiruvengadam, Ramasthanan Arulampalam

Abstract:

This project comprises laying approximately 1.8 km of 400 mm, 1200 mm, and 2400 mm diameter sewer pipes using pipe-jacking machines along Mugliston Park, Buangkok Drive, and Buangkok Link. The works include an estimated 14 circular shafts with depths ranging from 10.0 meters to 29.0 meters. Cast in-situ circular shafts will be used for the temporary shaft excavation. The geology is predominantly backfill and old alluvium, with weak material encountered in between. Where very soft clay, F1 material, or weak soil is expected, ground improvement will be carried out outside the shaft, followed by a cast in-situ concrete ring wall within the improved soil zone. This paper presents the design methodology, analysis, and results for the temporary shafts used for micro-TBM launching and for constructing the permanent manholes, together with a comparison of instrumentation readings against the predicted values from the analysis.

Keywords: circular shaft, ground improvement, old alluvium, temporary shaft

Procedia PDF Downloads 287