Search results for: machine failures
1414 Territorial Analysis of the Public Transport Supply: Case Study of Recife City
Authors: Cláudia Alcoforado, Anabela Ribeiro
Abstract:
This paper is part of an ongoing PhD thesis. It seeks to develop a model to identify spatial failures in the public transportation supply. In constructing the model, it also seeks to detect the social needs arising from transport disadvantage. The case study is carried out for the Brazilian city of Recife. Currently, Recife has a population density of 7,039.64 inhabitants per km². Unfortunately, only 46.9% of urban households on public roads have adequate urbanization. Added to this reality, the poorest population tends to occupy the peripheries, a pattern that has become consolidated in Brazil and Latin America. This burdens family income, since greater distances must be covered to reach basic activities, and transport costs rise accordingly. As a result, supplying public transportation to locations with low demand or lacking urban infrastructure has had significant impacts. The model under construction uses methods such as Currie's Gap Assessment, associated with London's Public Transport Access Level and the Public Transport Accessibility Index developed by Saghapour. This paper presents the current stage of the thesis, with the spatial/need gaps of Recife's neighborhoods already detected, and makes use of geographic information systems. It should be noted that gaps are determined from transport supply indices, in this case considering the presence of walking catchment areas. For the detection of gaps, the relevant demand index is also determined; this, in turn, is calculated through indicators that reflect social needs. With the use of the smallest Brazilian geographical unit, the census sector, the model, with the inclusion of population density in the study areas, should present more consolidated results.
Based on the results achieved, an analysis of transportation disadvantage as a factor of social exclusion in the study area will be carried out. The results obtained so far already indicate a strong concentration of public transportation in areas of higher income classes, suggesting that the most disadvantaged population migrates to those neighborhoods in search of employment.
Keywords: gap assessment, public transport supply, social exclusion, spatial gaps
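The gap logic described above (normalised transport supply indices compared against a social-needs demand index, per census sector) can be sketched in a few lines. This is a hypothetical illustration of a Currie-style gap assessment, not the thesis model itself; the index values and the simple min-max normalisation are invented for the example.

```python
# Hypothetical sketch of a Currie-style gap assessment: each census
# sector gets a transport supply index and a social-needs index; both
# are min-max normalised, and the gap is their difference, so a larger
# positive gap marks a more underserved sector.

def normalise(values):
    """Scale raw index values to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def gap_assessment(supply, need):
    """Return need-minus-supply gaps per sector (larger = bigger spatial gap)."""
    s, n = normalise(supply), normalise(need)
    return [round(ni - si, 3) for si, ni in zip(s, n)]

# Invented values for four census sectors.
supply = [12.0, 3.5, 8.0, 1.0]   # e.g. services reachable per walking catchment
need = [0.2, 0.9, 0.4, 1.0]      # composite social-needs indicator
gaps = gap_assessment(supply, need)
```

In this toy example the fourth sector, with the lowest supply and the highest need, receives the largest gap, which is the kind of spatial/need gap the model is built to flag.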
Procedia PDF Downloads 182
1413 Study of Magnetic Properties on the Corrosion Behavior and Influence of Temperature in Permanent Magnet (Nd-Fe-B) Used in PMSM
Authors: N. Yogal, C. Lehrmann
Abstract:
The use of permanent magnets (PMs) is increasing in permanent magnet synchronous machines (PMSMs) to fulfill the requirement for high-efficiency machines in modern industry. PMSMs are widely used in industrial applications, wind power plants, and the automotive industry. Since PMSMs operate under different environmental conditions, the long-term behavior of NdFeB-based magnets at high temperatures and their corrosion behavior have to be studied due to the irreversible loss of magnetic properties. In this paper, the effect of corrosion and increasing temperature in a climatic chamber on the magnetic properties is presented. The magnetic moment and magnetic field of the magnet were studied experimentally.
Keywords: permanent magnet (PM), NdFeB, corrosion behavior, temperature effect, permanent magnet synchronous machine (PMSM)
Procedia PDF Downloads 395
1412 Optimizing Pick and Place Operations in a Simulated Work Cell for Deformable 3D Objects
Authors: Troels Bo Jørgensen, Preben Hagh Strunge Holm, Henrik Gordon Petersen, Norbert Kruger
Abstract:
This paper presents a simulation framework for using machine learning techniques to determine robust robotic motions for handling deformable objects. The main focus is on applications in the meat sector, which mainly handles three-dimensional objects. In order to optimize the robotic handling, the robot motions have been parameterized in terms of grasp points, robot trajectory, and robot speed. The motions are evaluated in a dynamic simulation environment for robotic control of deformable objects. The evaluation identifies certain parameter setups that produce robust motions in the simulated environment and that, based on a visual analysis, indicate satisfactory solutions for a real-world system.
Keywords: deformable objects, robotic manipulation, simulation, real world system
Procedia PDF Downloads 281
1411 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Diagonal sparse matrix-vector multiplication is a well-studied topic in the fields of scientific computing and big data processing. However, when diagonal sparse matrices are stored in DIA format, there can be a significant number of padded zero elements and scattered points, which can lead to a degradation in the performance of the current DIA kernel. This can also lead to excessive consumption of computational and memory resources. In order to address these issues, the authors propose the DIA-Adaptive scheme and its kernel, which leverages the parallel instruction sets on the MLU. The researchers analyze the effect of allocating a varying number of threads, clusters, and hardware architectures on the performance of SpMV using different formats. The experimental results indicate that the proposed DIA-Adaptive scheme performs well and offers excellent parallelism.
Keywords: adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication
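As a reference point for the DIA format discussed above, here is a minimal (unoptimized) DIA sparse matrix-vector product in plain Python/NumPy. The storage convention (row d of `data` holds the diagonal at `offsets[d]`, with out-of-range entries stored as zero padding) is one common choice, not the paper's MLU kernel; it makes visible the padded elements whose overhead the DIA-Adaptive scheme targets.

```python
import numpy as np

def dia_spmv(data, offsets, x):
    """y = A @ x for an n x n matrix in DIA format.

    data[d, i] holds A[i, i + offsets[d]]; entries whose column index
    falls outside the matrix are padding and must be stored as zero.
    """
    n = x.shape[0]
    y = np.zeros(n)
    for d, off in enumerate(offsets):
        for i in range(n):
            j = i + off
            if 0 <= j < n:          # skip padded positions
                y[i] += data[d, i] * x[j]
    return y

# Tridiagonal example: A = tridiag(-1, 2, -1), n = 4.
offsets = [-1, 0, 1]
data = np.array([
    [0., -1., -1., -1.],   # sub-diagonal (first entry is padding)
    [2.,  2.,  2.,  2.],   # main diagonal
    [-1., -1., -1., 0.],   # super-diagonal (last entry is padding)
])
x = np.array([1., 2., 3., 4.])
y = dia_spmv(data, offsets, x)
```

Even in this tiny example, two of the twelve stored values are padding; for matrices with scattered diagonals the padded fraction grows, which is exactly the inefficiency described above.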
Procedia PDF Downloads 135
1410 A Review of Fractal Dimension Computing Methods Applied to Wear Particles
Authors: Manish Kumar Thakur, Subrata Kumar Ghosh
Abstract:
Various types of particles found in lubricant may be characterized by their fractal dimension. Some of the available methods are the yard-stick (structured walk) method and the box-counting method. This paper presents a review of the developments and progress in fractal dimension computing methods as applied to characterizing the surfaces of wear particles. An overview of these methods, their implementation, their advantages, and their limitations is also presented. It is accepted that wear particles carry major information about the wear and friction of materials, and morphological analysis of wear particles from a lubricant is a very effective way of performing machine condition monitoring. Fractal dimension methods are used to characterize the morphology of the found particles and are very useful in analyzing the complexity of irregular shapes. The aim of this review is to bring together the fractal methods applicable to wear particles.
Keywords: fractal dimension, morphological analysis, wear, wear particles
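For concreteness, the box-counting method mentioned above can be sketched in a few lines of NumPy: cover a binary image with boxes of decreasing size s, count the boxes N(s) that touch the object, and take the slope of log N(s) versus log(1/s) as the fractal dimension. This is a generic textbook sketch, not any particular wear-particle implementation from the review.

```python
import numpy as np

def box_count(img, size):
    """Count size x size boxes that contain at least one foreground pixel."""
    n = img.shape[0]
    count = 0
    for r in range(0, n, size):
        for c in range(0, n, size):
            if img[r:r + size, c:c + size].any():
                count += 1
    return count

def fractal_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Slope of log N(s) vs log(1/s) over the given box sizes."""
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a straight line should have fractal dimension close to 1.
line = np.zeros((64, 64), dtype=bool)
np.fill_diagonal(line, True)
dim = fractal_dimension(line)
```

For an irregular wear-particle boundary the estimate falls between 1 and 2, which is what makes the dimension useful as a morphology descriptor.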
Procedia PDF Downloads 490
1409 Comparison of Machine Learning and Deep Learning Algorithms for Automatic Classification of 80 Different Pollen Species
Authors: Endrick Barnacin, Jean-Luc Henry, Jimmy Nagau, Jack Molinie
Abstract:
Palynology is a field of interest in many disciplines due to its multiple applications: chronological dating, climatology, allergy treatment, and honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. That is why the need for automation of this task is urgent. Many studies have investigated the subject using different standard image processing descriptors, and sometimes hand-crafted ones. In this work, we make a comparative study between classical feature extraction methods (shape, GLCM, LBP, and others) and deep learning (CNN, autoencoders, transfer learning) on a recognition task over 80 regional pollen species. It has been found that transfer learning is more precise than the other approaches.
Keywords: pollens identification, features extraction, pollens classification, automated palynology
Procedia PDF Downloads 136
1408 Safety-Critical Alarming Strategy Based on Statistically Defined Slope Deformation Behaviour Model. Case Study: Upright-Dipping Highwall in a Coal Mining Area
Authors: Lintang Putra Sadewa, Ilham Prasetya Budhi
Abstract:
A slope monitoring program has now become a mandatory campaign for any open pit mine around the world to operate safely. Utilizing various slope monitoring instruments and strategies, miners are now able to deliver precise decisions in mitigating the risk of slope failures, which can be catastrophic. Currently, the most sophisticated slope monitoring technology available is the Slope Stability Radar (SSR), which can measure wall deformation with submillimeter accuracy. One of its eminent features is that the SSR can provide a timely warning by automatically raising an alarm when a predetermined rate-of-movement threshold is reached. However, establishing proper alarm thresholds is arguably one of the most onerous challenges faced in any slope monitoring program. The difficulty mainly lies in the number of considerations that must be taken into account when generating a threshold, because an alarm must be effective: it should limit the occurrence of false alarms while also being able to capture any real wall deformation. In this sense, experience shows that a site-specific alarm threshold tends to produce more reliable results because it considers site-distinctive variables. This study attempts to determine alarming thresholds for safety-critical monitoring based on an empirical model of slope deformation behaviour that is defined statistically from deformation data captured by the Slope Stability Radar (SSR). The study area comprises an upright-dipping highwall setting in a coal mining area with intense mining activities, and the deformation data used for the study were recorded by the SSR throughout the year 2022.
The model is site-specific in nature; thus, valuable information extracted from the model (e.g., time-to-failure, onset-of-acceleration, and velocity) will be applicable in setting up site-specific alarm thresholds and will give a clear understanding of how deformation trends evolve over the area.
Keywords: safety-critical monitoring, alarming strategy, slope deformation behaviour model, coal mining
Procedia PDF Downloads 90
1407 Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids
Authors: Ayalew Yimam Ali
Abstract:
The T-shaped microchannel is used to mix both miscible and immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction microchannel can be difficult due to micro-scale laminar flow when handling two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The newly developed 3D trapezoidal triangular structure spine used in this study was fabricated with sophisticated CNC machine cutting tools, which produced a microchannel mold with the spine along the T-junction longitudinal mixing region. The molds for the 3D trapezoidal structure, with sharp-edge tip angles of 30° and a trapezoidal triangular sharp-edge tip depth of 0.3 mm, were cut from PMMA (polymethyl methacrylate) with an advanced CNC machine, and the channel was manufactured in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the Y-junction microchannel using soft lithography nanofabrication strategies. Flow visualization of the 3D rolling steady acoustic streaming and of the mixing enhancement with high-viscosity miscible fluids was carried out with micro-particle image velocimetry (μPIV) for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes, in order to study the 3D acoustic streaming flow patterns and the mixing enhancement.
The streaming velocity and vorticity fields show vorticity up to 16 times higher than in the absence of acoustic streaming, and mixing performance has been evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the T-channel; the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible high-viscosity fluids, governed by laminar transport phenomena, was enhanced by the formation of a new, intense, three-dimensional steady streaming rolling motion around the entrance junction mixing zone at high volume flow rates.
Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
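The grayscale mixing evaluation mentioned above reduces to a simple statistic: a fully mixed frame has uniform pixel intensity, so the degree of mixing can be scored as one minus the intensity standard deviation normalised by that of an unmixed reference frame. The sketch below is a hedged illustration of that common formulation in NumPy, not the authors' MATLAB pipeline; the intensity values are invented.

```python
import numpy as np

def mixing_degree(gray, unmixed_std):
    """Degree of mixing in [0, 1]: 1.0 = uniform (fully mixed) image."""
    return 1.0 - min(np.std(gray) / unmixed_std, 1.0)

# Unmixed reference: half dye (intensity 1.0), half water (intensity 0.0).
unmixed = np.concatenate([np.ones(500), np.zeros(500)])
ref_std = np.std(unmixed)               # 0.5 for this reference

well_mixed = np.full(1000, 0.5)         # uniform mid-intensity frame
partially_mixed = np.concatenate([np.full(500, 0.6), np.full(500, 0.4)])
```

Scoring successive μPIV frames this way turns the before/after comparison (e.g., 67.42% versus 96.83% above) into a single number per frame.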
Procedia PDF Downloads 20
1406 Using Combination of Sets of Features of Molecules for Aqueous Solubility Prediction: A Random Forest Model
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Generally, absorption and bioavailability increase as solubility increases; therefore, it is crucial to predict solubility in drug discovery applications. Molecular descriptors and molecular properties are traditionally used for the prediction of water solubility. Various key descriptor sets are used for this purpose, namely Dragon descriptors, Morgan descriptors, MACCS keys, etc., and each has different predictive capability, with varying success across data sets. Structural features are another source commonly used for solubility prediction. However, few if any studies combine three or more sets of properties or descriptors to produce a more powerful prediction model. Unlike available models, we used a combination of these feature sets in a random forest machine learning model for improved solubility prediction, thereby contributing to drug discovery systems.
Keywords: solubility, random forest, molecular descriptors, MACCS keys
Procedia PDF Downloads 46
1405 Establishing Multi-Leveled Computability as a Living-System Evolutionary Context
Authors: Ron Cottam, Nils Langloh, Willy Ranson, Roger Vounckx
Abstract:
We start by formally describing the requirements for environmental-reaction survival computation in a natural temporally-demanding medium, and develop this into a more general model of the evolutionary context as a computational machine. The effect of this development is to replace deterministic logic by a modified form which exhibits a continuous range of dimensional fractal diffuseness between the isolation of perfectly ordered localization and the extended communication associated with nonlocality as represented by pure causal chaos. We investigate the appearance of life and consciousness in the derived general model, and propose a representation of Nature within which all localizations have the character of quasi-quantal entities. We compare our conclusions with Heisenberg’s uncertainty principle and nonlocal teleportation, and maintain that computability is the principal influence on evolution in the model we propose.
Keywords: computability, evolution, life, localization, modeling, nonlocality
Procedia PDF Downloads 399
1404 Reactive Power Control Strategy for Z-Source Inverter Based Reconfigurable Photovoltaic Microgrid Architectures
Authors: Reshan Perera, Sarith Munasinghe, Himali Lakshika, Yasith Perera, Hasitha Walakadawattage, Udayanga Hemapala
Abstract:
This research presents a reconfigurable architecture for residential microgrid systems utilizing a Z-Source Inverter (ZSI) to optimize solar photovoltaic (SPV) system utilization and enhance grid resilience. The proposed system addresses challenges associated with high solar power penetration through various modes, including current control, voltage-frequency control, and reactive power control. It ensures uninterrupted power supply during grid faults, providing flexibility and reliability for grid-connected SPV customers. Challenges and opportunities in reactive power control for microgrids are explored, with simulation results and case studies validating the proposed strategies. From a control and power perspective, the ZSI-based inverter enhances safety, reduces failures, and improves power quality compared to traditional inverters. Operating seamlessly in grid-connected and islanded modes, it guarantees continuous power supply during grid disturbances. Moreover, the research addresses power quality issues in long distribution feeders during off-peak and night-peak hours or fault conditions. Using the Distributed Static Synchronous Compensator (DSTATCOM) for voltage stability, the control objective is nighttime voltage regulation at the Point of Common Coupling (PCC). In this mode, disconnection of the PV panels, batteries, and battery controller allows the ZSI to operate in voltage-regulating mode, with critical loads remaining connected. The study introduces a structured controller for the reactive power controlling mode, contributing to a comprehensive and adaptable solution for residential microgrid systems. Mathematical modeling and simulations confirm successful maximum power extraction, controlled voltage, and smooth voltage-frequency regulation.
Keywords: reconfigurable architecture, solar photovoltaic, microgrids, z-source inverter, STATCOM, power quality, battery storage system
Procedia PDF Downloads 9
1403 Free Fibular Flaps in Management of Sternal Dehiscence
Authors: H. N. Alyaseen, S. E. Alalawi, T. Cordoba, É. Delisle, C. Cordoba, A. Odobescu
Abstract:
Sternal dehiscence is defined as the persistent separation of the sternal bones, often complicated by mediastinitis. Etiologies that lead to sternal dehiscence vary, with cardiovascular and thoracic surgeries being the most common. Early diagnosis in susceptible patients is crucial to the management of such cases, as they are associated with high mortality rates. A recent meta-analysis of more than four hundred thousand patients concluded that deep sternal wound infections were the leading cause of mortality and morbidity in patients undergoing cardiac procedures. Long-term complications associated with sternal dehiscence include increased hospitalizations, cardiac infarctions, and renal and respiratory failures. Numerous osteosynthesis methods have been described in the literature. Surgical materials offer enough rigidity to support the sternum and can be flexible enough to allow physiological breathing movements of the chest; however, these materials fall short when managing patients with extensive bone loss, osteopenia, or generally poor bone quality. For such cases, flaps offer a better closure system. Early utilization of flaps yields better survival rates compared to delayed closure or to patients treated with sternal rewiring and closed drainage. The use of pectoralis major flaps, rectus abdominis flaps, and latissimus muscle flaps has been described in the literature as a good alternative. Flap selection depends on a variety of factors, mainly the size of the sternal defect, infection, and the availability of local tissues. Free fibular flaps are commonly harvested flaps used in reconstruction around the body. In reported cases of sternal reconstruction with free fibular flaps, the literature has exclusively discussed the flap applied vertically to the chest wall. We present a different technique, applying the free fibular triple-barrel flap oriented transversely, parallel to the ribs.
In our experience, this method could enhance results and improve prognosis, as it contributes to the normal circumferential shape of the chest wall.
Keywords: sternal dehiscence, management, free fibular flaps, novel surgical techniques
Procedia PDF Downloads 94
1402 Human Development as an Integral Part of Human Security within the Responsibility to Rebuild
Authors: Themistoklis Tzimas
Abstract:
The proposed paper focuses on a triangular relationship between human security, human development, and the responsibility to rebuild. This relationship constitutes the innovative contribution to the debate about human security. Human security constitutes a generic and legally binding notion, which derives from an integrated approach to the UN Charter principles and the collective security system. Such an approach brings to the forefront of international law and international relations not only states but non-state actors as well. Several doctrines attempt to implement the aforementioned approach, among which are the Responsibility to Protect (hereinafter R2P) doctrine and its aspect of the Responsibility to Rebuild (hereinafter R2R). In this sense, R2P in general and R2R in particular are supposed to be guided by human security imperatives. Human security, because of its human-centered approach, encompasses human development as an integral part. Human development constitutes part of the backbone of human security, since it deals with the social and economic root causes of the threats that human security attempts to confront. In this sense, doctrines which derive from human security, such as R2P and its R2R aspect, should also take into account human development imperatives in order to improve their efficiency. On the contrary, though, R2R is more often linked with market-orientated policies, which are often imposed under transitional authorities regardless of local needs. The implementation of such policies can be identified as a cause of striking failures in the framework of R2R. In addition, it is a misinterpretation of the essence of human security and subsequently of R2P as well.
The finding of the article, on the basis of the aforementioned argument, is that a change must take place from a market-orientated misinterpretation of R2R to an approach attempting to implement human development doctrines, since the latter lie at the heart of human security and can prove more effective in dealing with the root causes of conflicts. Methodologically, the article begins with an examination of human security and of its binding nature, based on its derivation from the UN Charter, and examines its significance in the framework of the collective security system. Then follows the analysis of why and how human development constitutes an integral part of human security. The next part argues that R2P in general, and R2R more specifically, constitute or should constitute an attempt to implement human security doctrines within the collective security system. Having built this triangular relationship, it is argued that human development is the most suitable notion for successfully implementing the spirit of human security and the scopes of R2P.
Keywords: human security, UN Charter, responsibility to protect, responsibility to rebuild, human development
Procedia PDF Downloads 280
1401 Transferable Knowledge: Expressing Lessons Learnt from Failure to Outsiders
Authors: Stijn Horck
Abstract:
Background: The value of lessons learned from failure increases when these insights can be put to use by those who did not experience the failure. While learning from others has mostly been researched between individuals or teams within the same environment, transferring knowledge from the person who experienced the failure to an outsider comes with extra challenges. As sense-making of failure is an individual process leading to different learning experiences, the potential of lessons learned from failure is highly variable depending on who is transferring them. Using an integrated framework of linguistic aspects related to attributional egotism, this study aims to offer a complete explanation of the challenges in transferring lessons learned from failures experienced by others. Method: A case study of a failed foundation, established to address the information needs of GPs in times of COVID-19, has been used. An overview of failure causes and lessons learned was compiled through a preliminary analysis of data collected in two phases with metaphoric examples of failure types. This was followed up by individual narrative interviews with the board members, who had all experienced the same events, to analyse the individual variance of lessons learned through discourse analysis. This research design uses the researcher-as-instrument approach, since the recipient of these lessons learned is the author himself. Results: Thirteen causes were given for why the foundation failed, and nine lessons were formulated. Based on the individually emphasized events, the explanations of failure events mentioned by all or three respondents contained more linguistic aspects related to attributional egotism than those of failure events mentioned by only one or two. Moreover, the learning events mentioned by all or three respondents involved lessons learned based on changed insight, while the lessons expressed by only one or two were based more on direct value.
Retrospectively, the lessons expressed as a group in the first data collection phase seem to have captured some, but not all, of the direct-value lessons. Conclusion: Individual variance in expressing lessons learned to outsiders can be reduced using metaphoric or analogical explanations from a third party. In line with attributional egotism theory, individuals separated from a group that has experienced the same failure are more likely to refer to failure causes that have the smallest chance of being contradicted. Lastly, this study contributes to the academic literature by demonstrating that linguistic analysis is suitable for investigating the transfer of lessons learned after failure.
Keywords: failure, discourse analysis, knowledge transfer, attributional egotism
Procedia PDF Downloads 115
1400 Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network
Authors: Nouha Khediri, Mohammed Ben Ammar, Monji Kherallah
Abstract:
Recently, facial emotion recognition (FER) has become increasingly important for understanding the state of the human mind. Accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNN and VGG16. First, the data is pre-processed with data cleaning and data rotation. Then, we augment the data and feed it to our FER model, which contains five convolution layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. On this basis, the paper also reviews work on facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database, yielding a recognition rate of 92%. We also put forward some suggestions for future work.
Keywords: CNN, deep-learning, facial emotion recognition, machine learning
Procedia PDF Downloads 95
1399 Optimization of the Control Scheme for Human Extremity Exoskeleton
Authors: Yang Li, Xiaorong Guan, Cheng Xu
Abstract:
In order to design a suitable control scheme for a human extremity exoskeleton, an interaction force control scheme with a traditional PI controller is presented, and a simulation study of the electromechanical system of the human extremity exoskeleton is carried out using a MATLAB/Simulink module. Analysis of the simulation results shows that the traditional PI controller is not well suited to every movement speed of the human body. Therefore, a fuzzy self-adaptive PI controller is presented to solve this problem. Finally, the superiority and feasibility of the fuzzy self-adaptive PI controller are demonstrated by the simulation and experimental results.
Keywords: human extremity exoskeleton, interaction force control scheme, simulation study, fuzzy self-adaptive PI controller, man-machine coordinated walking, bear payload
Procedia PDF Downloads 362
1398 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall model accuracy of 92%. The combined model is able to predict more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
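As a rough sketch of the averaging step described in the abstract, the snippet below combines the class probabilities of three models and picks the most likely air quality category. The model names and probability values are illustrative assumptions, not data from the study.

```python
# Average per-class probabilities across several models (a simple soft-voting
# ensemble), then return the index of the most likely class.

def ensemble_predict(prob_sets):
    """prob_sets: list of per-model probability lists, one entry per class."""
    n_classes = len(prob_sets[0])
    avg = [sum(p[i] for p in prob_sets) / len(prob_sets) for i in range(n_classes)]
    best = max(range(n_classes), key=lambda i: avg[i])
    return best, avg

# Hypothetical probabilities for classes [Good, Moderate, Unhealthy]
logistic = [0.60, 0.30, 0.10]
forest   = [0.40, 0.45, 0.15]
network  = [0.55, 0.35, 0.10]

label, avg = ensemble_predict([logistic, forest, network])
print(label)  # class 0 ("Good") wins on the averaged probabilities
```

Averaging smooths out the individual models' disagreements, which is the stated rationale for combining the three top performers.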
Procedia PDF Downloads 127
1397 Investigation of the Cooling and Uniformity Effectiveness in a Sinter Packed Bed
Authors: Uzu-Kuei Hsu, Chang-Hsien Tai, Kai-Wun Jin
Abstract:
When sinters are fed into the cooler from the sintering machine, their non-uniform distribution leads to uneven cooling. This causes the temperature difference of the sinters leaving the cooler to be so large that the conveyors are deformed by the heat. The present work applies a CFD method to investigate the thermo-flowfield phenomena in a sinter cooler using the porous media model. The obtained experimental data are used to determine the porosity (ε), permeability (κ), inertial coefficient (F), specific heat (Cp), and effective thermal conductivity (keff) of the sinter packed beds. The physical model is a geometrically similar one whose Darcy numbers (Da) are similar to those of the sinter cooler. A Cooling Index (CI) and a Uniformity Index (UI) are used to analyze the thermo-flowfield in the sinter packed bed and to obtain the cooling performance of the sinter cooler.
Keywords: porous media, sinter, cooling index (CI), uniformity index (UI), CFD
Procedia PDF Downloads 402
1396 Dynamic Reliability for a Complex System and Process: Application on Offshore Platform in Mozambique
Authors: Raed KOUTA, José-Alcebiades-Ernesto HLUNGUANE, Eric Châtele
Abstract:
The search for and exploitation of new fossil energy resources is taking place in the context of the gradual depletion of existing deposits. Despite the adoption of international targets to combat global warming, the demand for fuels continues to grow, contradicting the movement towards an energy-efficient society. The increase in the share of offshore in global hydrocarbon production tends to compensate for the depletion of terrestrial reserves, thus constituting a major challenge for the players in the sector. Through the economic potential it represents, and the energy independence it provides, offshore exploitation is also a challenge for States such as Mozambique, which have large maritime areas and whose environmental wealth must be considered. The exploitation of new reserves on economically viable terms depends on available technologies. The development of deep and ultra-deep offshore requires significant research and development efforts. Progress has also been made in managing the multiple risks inherent in this activity. Our study proposes a reliability approach to develop products and processes designed to operate at sea. Indeed, the context of an offshore platform requires highly reliable solutions that overcome the difficulty of accessing the system for regular maintenance and quick repairs, and that resist deterioration and degradation processes. One characteristic of the failures we consider is actual conditions of use that are considered 'extreme.' These conditions depend on time and on the interactions between the different causes. These are the two factors that give the degradation process its dynamic character, hence the need to develop dynamic reliability models. Our work highlights mathematical models that can explicitly manage interactions between components and process variables.
These models are accompanied by numerical resolution methods that help to structure a dynamic reliability approach in a physical and probabilistic context. The application developed makes it possible to evaluate the reliability, availability, and maintainability of a floating storage and unloading platform for liquefied natural gas production.
Keywords: dynamic reliability, offshore platform, stochastic process, uncertainties
Procedia PDF Downloads 120
1395 An Improvement Study for Mattress Manufacturing Line with a Simulation Model
Authors: Murat Sarı, Emin Gundogar, Mumtaz Ipek
Abstract:
Nowadays, in the furniture sector, competition for market share and the variety and changeability of production force firms to reengineer operations on the manufacturing line to increase productivity. In this study, the spring mattress manufacturing line of a furniture manufacturing firm is analyzed analytically. The aim is to find the bottlenecks of production in order to balance the semi-finished material flow. There are four base points to investigate in the bottleneck elimination process: bottlenecks in the Method, Material, Machine, and Man (workforce) resources, respectively. These bottlenecks are investigated, and various scenarios are created for the improvement of the manufacturing system. Probable near-optimal alternatives are determined by system models built in Arena simulation software.
Keywords: bottleneck search, buffer stock, furniture sector, simulation
Procedia PDF Downloads 359
1394 An Approach Based on Statistics and Multi-Resolution Representation to Classify Mammograms
Authors: Nebi Gedik
Abstract:
One of the significant and continual public health problems in the world is breast cancer. Early detection is very important to fight the disease, and mammography has been one of the most common and reliable methods of detecting it in the early stages. However, this is a difficult task, and computer-aided diagnosis (CAD) systems are needed to assist radiologists in providing both accurate and uniform evaluation of masses in mammograms. In this study, a multi-resolution statistical method for classifying digitized mammograms as normal or abnormal is used to construct a CAD system. The mammogram images are represented by the wave atom transform, and this representation is split into certain groups of coefficients, treated independently. The CAD system is designed by calculating statistical features from each group of coefficients. The classification is performed using a support vector machine (SVM).
Keywords: wave atom transform, statistical features, multi-resolution representation, mammogram
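The per-group feature step can be sketched as below. The statistics chosen here (mean, standard deviation, energy) are a common set for transform-domain features; the abstract does not specify the paper's exact feature list, so treat them as an assumption, as are the coefficient values.

```python
# Compute simple summary statistics for one group of transform coefficients,
# to be concatenated into a feature vector for the SVM classifier.
import math

def statistical_features(coeffs):
    """Return [mean, standard deviation, energy] of a coefficient group."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    std = math.sqrt(sum((c - mean) ** 2 for c in coeffs) / n)
    energy = sum(c * c for c in coeffs) / n
    return [mean, std, energy]

group = [0.2, -0.5, 1.1, 0.7, -0.3]   # illustrative wave atom coefficients
print([round(f, 3) for f in statistical_features(group)])
```

One such vector per coefficient group, concatenated, would form the input sample for the classifier.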
Procedia PDF Downloads 222
1393 Using Discrete Event Simulation Approach to Reduce Waiting Times in Computed Tomography Radiology Department
Authors: Mwafak Shakoor
Abstract:
The purpose of this study was to reduce patient waiting times, improve system throughput, and improve resource utilization in a radiology department. A discrete event simulation model was developed using Arena simulation software to investigate different alternatives for improving overall system delivery, based on scenarios of adding resources, given the linkage between patient waiting times and resource availability. The study revealed that no additional investment is needed to procure another scanner; instead, hospital management should deploy managerial tactics to enhance machine utilization and reduce the long waiting times in the department.
Keywords: discrete event simulation, radiology department, Arena, waiting time, healthcare modeling, computed tomography
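The core of such a model can be sketched with a minimal single-server queue: patients arrive at known times, one CT scanner serves them first-come-first-served, and each patient's waiting time is recorded. The arrival and scan times below are illustrative, not hospital data.

```python
# Minimal discrete-event sketch of a single-scanner FIFO queue.
def simulate_fifo(arrivals, service_times):
    """Return the waiting time of each patient for one FIFO server."""
    free_at = 0.0          # time at which the scanner next becomes free
    waits = []
    for arrive, service in zip(arrivals, service_times):
        start = max(arrive, free_at)   # wait if the scanner is busy
        waits.append(start - arrive)
        free_at = start + service
    return waits

arrivals = [0, 2, 4, 6, 8]   # minutes after opening
services = [5, 5, 5, 5, 5]   # scan duration per patient
print(simulate_fifo(arrivals, services))  # queue builds: [0, 3, 6, 9, 12]
```

When service is slower than the arrival rate, waits grow linearly, which is exactly the resource-availability linkage the Arena model explores with added-resource scenarios.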
Procedia PDF Downloads 592
1392 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya
Authors: Abdel Rahman Khider Hassan
Abstract:
This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic location parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (inclination, aspect, elevation, and curvature), and land use/land cover. The triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published landslide study was found for this area. Thus, digital datasets of the above spatial parameters were acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). The landslide hazard zonation is deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total surface of 3200 km² of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazards. Only 1.0% (32 km²) of the catchment displays very high landslide hazards, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover, and slope orientation (aspect) as the major determining factors of slope failures.
The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as mitigation and applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha
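The weighted grid overlay at the heart of the model can be sketched as follows: each parameter grid is scored per cell, multiplied by its weight, and the weighted grids are summed cell by cell. The weights and scores below are illustrative, not the study's calibrated values.

```python
# Cell-wise weighted sum of equally sized 2D score grids (weighted overlay).
def weighted_overlay(grids, weights):
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[sum(w * g[r][c] for g, w in zip(grids, weights))
             for c in range(cols)]
            for r in range(rows)]

slope     = [[3, 1], [2, 3]]   # steeper slopes score higher
lithology = [[2, 2], [1, 3]]
rainfall  = [[1, 1], [2, 2]]
weights   = [0.5, 0.3, 0.2]    # relative contribution to slope instability

lhz = weighted_overlay([slope, lithology, rainfall], weights)
print([[round(v, 2) for v in row] for row in lhz])
```

Thresholding the resulting scores into classes (low, moderate, high, very high) yields the hazard zonation map.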
Procedia PDF Downloads 206
1391 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which, in most cases, conflicts with the time constraints imposed on quality assurance of complex software systems. Hence, a computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns form the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems.
The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
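The 'Subset Accuracy' criterion can be sketched in a few lines: a multi-label prediction counts as correct only if the full predicted label set matches the true set exactly. The component names below are hypothetical placeholders for test automation components.

```python
# Subset accuracy (exact match ratio) for multi-label predictions.
def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label set equals the true set."""
    hits = sum(set(t) == set(p) for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Each sample maps one test step to the components implementing it
truth = [{"openDoor"}, {"readBus", "logSignal"}, {"closeDoor"}]
preds = [{"openDoor"}, {"readBus"},              {"closeDoor"}]
print(subset_accuracy(truth, preds))  # 2 of 3 exact matches
```

This strictness is why multi-label scores (60%) are lower than the multi-class ones (86%): a prediction missing even one of several components counts as a full miss.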
Procedia PDF Downloads 132
1390 A Simulation-Based Study of Dust Ingression into Microphone of Indoor Consumer Electronic Devices
Authors: Zhichao Song, Swanand Vaidya
Abstract:
Nowadays, most portable (e.g., smartphones) and wearable (e.g., smartwatches and earphones) consumer hardware is designed to be dustproof to IP5 or IP6 ratings, ensuring the product can handle potentially dusty outdoor environments. For indoor devices (e.g., smart displays and speakers), on the other hand, the design guideline is relatively vague. While the indoor environment is generally much less dusty, in certain circumstances dust ingression can still cause functional failures, such as microphone frequency response shift and camera black spots, or cosmetic dissatisfaction, mainly dust buildup in visible pockets and gaps that are hard to clean. In this paper, we developed a simulation methodology to analyze dust settlement and ingression into the known ports of a device. A closed system is initialized with dust particles whose sizes follow a Weibull distribution based on data collected in a user study, and dust particle movement is approximated as settlement in a stationary fluid, which is governed by Stokes’ law. Following this method, we simulated dust ingression into a MEMS microphone through the acoustic port and protective mesh. Various design and environmental parameters were evaluated, including mesh pore size, acoustic port depth-to-diameter ratio, mass density of the dust material, and inclination angle of the microphone port. The dependencies of dust resistance on these parameters are all monotonic: a smaller mesh pore size, a larger acoustic depth-to-diameter ratio, and a more inclined microphone placement (towards the horizontal direction) are preferred for dust resistance, although these preferences may entail trade-offs in audio performance and compromises in industrial design. The simulation results suggest quantitative ranges of these parameters within which the improvement in dust resistance is most pronounced.
Based on the simulation results, we propose several design guidelines intended to achieve an overall balance between audio performance, dust resistance, and flexibility in industrial design.
Keywords: dust settlement, numerical simulation, microphone design, Weibull distribution, Stokes' law
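The settlement approximation above treats a dust particle as a sphere falling in still air at the Stokes terminal velocity v = (ρp − ρf)·g·d² / (18·μ). The sketch below evaluates that formula; the particle and air properties are illustrative assumptions, not values from the paper.

```python
# Stokes-law terminal settling velocity of a small sphere in a stationary fluid.
def stokes_terminal_velocity(d, rho_p, rho_f=1.2, mu=1.8e-5, g=9.81):
    """v = (rho_p - rho_f) * g * d^2 / (18 * mu), in m/s.

    d: particle diameter (m); rho_p: particle density (kg/m^3);
    rho_f: fluid density (kg/m^3); mu: dynamic viscosity (Pa*s).
    """
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

# A 10-micron silica-like particle (~2650 kg/m^3) settling in room air
v = stokes_terminal_velocity(10e-6, 2650.0)
print(f"{v * 1000:.2f} mm/s")  # on the order of 8 mm/s
```

The quadratic dependence on diameter is why the Weibull-distributed size spectrum matters: the largest particles settle orders of magnitude faster than the fine fraction.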
Procedia PDF Downloads 107
1389 Operator Efficiency Study for Assembly Line Optimization at Semiconductor Assembly and Test
Authors: Rohana Abdullah, Md Nizam Abd Rahman, Seri Rahayu Kamat
Abstract:
Operator efficiency is gaining importance in ensuring the optimized usage of resources, especially in semi-automated manufacturing environments. This paper presents a case study carried out to solve operator efficiency and line balancing issues at a semiconductor assembly and test manufacturer. A Man-to-Machine (M2M) work study technique is used to study current operator utilization and determine the optimum allocation of operators to machines. Critical factors such as operator activity, activity frequency, and operator competency level are considered to gain insight into the parameters that affect operator utilization. Equipment standard time and overall equipment efficiency (OEE) information are also gathered and analyzed to achieve balanced and optimized production.
Keywords: operator efficiency, optimized production, line balancing, industrial and manufacturing engineering
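The OEE figure gathered in the study is conventionally the product of three ratios, OEE = availability × performance × quality. The sketch below computes it for one shift; the shift numbers are illustrative, not taken from the case study.

```python
# Overall equipment efficiency for one shift of a single machine.
def oee(planned_min, downtime_min, ideal_cycle_s, total_units, good_units):
    run_min = planned_min - downtime_min
    availability = run_min / planned_min                       # uptime share
    performance = (ideal_cycle_s * total_units / 60.0) / run_min  # speed share
    quality = good_units / total_units                          # yield share
    return availability * performance * quality

# 480-minute shift, 60 min downtime, 3 s ideal cycle, 7000 units, 6800 good
print(f"{oee(480, 60, 3.0, 7000, 6800):.3f}")  # roughly 0.708
```

Decomposing OEE this way shows which of the three losses (downtime, slow cycles, defects) the M2M reallocation should target first.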
Procedia PDF Downloads 729
1388 Power Control of DFIG in WECS Using Backstepping and Sliding Mode Controller
Authors: Abdellah Boualouch, Ahmed Essadki, Tamou Nasser, Ali Boukhriss, Abdellatif Frigui
Abstract:
This paper presents a power control scheme for a Doubly Fed Induction Generator (DFIG) used in a Wind Energy Conversion System (WECS) connected to the grid. The proposed control strategy employs two nonlinear controllers, a backstepping controller (BSC) and a sliding-mode controller (SMC), to directly calculate the required rotor control voltage so as to eliminate the instantaneous errors of active and reactive powers. The advantages of BSC and SMC are presented, and the performance and robustness of the two controller strategies are compared. First, we present a model of the wind turbine and the DFIG machine, then a synthesis of the controllers and their application to DFIG power control. Simulation results on a 1.5 MW grid-connected DFIG system are provided by MATLAB/Simulink.
Keywords: backstepping, DFIG, power control, sliding mode, WECS
Procedia PDF Downloads 594
1387 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction microchannel can be difficult due to the micro-scale laminar flow of the two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular structure spine developed in this study was machined with sophisticated CNC cutting tools to create a microchannel mold with the spine along the longitudinal mixing region of the Y-junction. The molds for the 3D trapezoidal structure, with sharp-edge tip angles of 30° and a sharp-edge tip depth of 0.3 mm, were cut from PMMA (polymethyl methacrylate) glass on an advanced CNC machine, and the channel was then fabricated in PDMS (polydimethylsiloxane) grown on the top surface of the Y-junction microchannel using soft-lithography nanofabrication strategies. Flow visualization of the 3D rolling steady acoustic streaming and of the mixing enhancement with high-viscosity miscible fluids was carried out using micro-particle image velocimetry (μPIV) for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes, in order to study the 3D acoustic streaming flow patterns and mixing enhancement.
The streaming velocity and vorticity fields show roughly 16 times higher vorticity than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using grayscale pixel intensity with MATLAB software. Mixing experiments were performed with a fluorescent green dye solution in de-ionized water at one inlet of the channel and a de-ionized water-glycerol mixture at the other inlet of the Y-channel; the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that an intense, three-dimensional steady streaming rolling motion forms at high volume flow rates around the entrance junction mixing zone of the two miscible high-viscosity fluids, enhancing the mixing otherwise limited by laminar fluid transport phenomena.
Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
Procedia PDF Downloads 21
1386 Online Authenticity Verification of a Biometric Signature Using Dynamic Time Warping Method and Neural Networks
Authors: Gałka Aleksandra, Jelińska Justyna, Masiak Albert, Walentukiewicz Krzysztof
Abstract:
An offline signature is a well-known, however not the safest, way to verify identity. Nowadays, to ensure proper authentication, e.g., in banking systems, multimodal verification is more widely used. In this paper, online signature analysis based on dynamic time warping (DTW) coupled with machine learning approaches is presented. In our research, signatures made with biometric pens were gathered. Signature features, as well as their forgeries, have been described. Various methods were used for the verification of authenticity, including convolutional neural networks using the DTW matrix and a multilayer perceptron using sums of DTW matrix paths. System efficiency has been evaluated on signatures and signature forgeries collected on the same day. Results are presented and discussed in this paper.
Keywords: dynamic time warping, handwritten signature verification, feature-based recognition, online signature
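The DTW matrix underlying both classifiers can be sketched with the classic dynamic program below, here comparing two pen-pressure sequences of different lengths. The sequences are illustrative, not biometric-pen data.

```python
# Classic O(len(a)*len(b)) dynamic time warping with absolute-difference cost.
def dtw_distance(a, b):
    """Minimum cumulative cost of warping sequence a onto sequence b."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[len(a)][len(b)]

genuine = [0.1, 0.5, 0.9, 0.5, 0.1]          # reference pressure profile
attempt = [0.1, 0.4, 0.9, 0.9, 0.4, 0.1]     # slower but similar stroke
print(dtw_distance(genuine, attempt))         # small distance: likely genuine
```

The full matrix d (not just the final distance) is what a CNN can consume as an image, and sums along its warping paths give the perceptron features mentioned above.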
Procedia PDF Downloads 175
1385 BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching
Authors: Gianna Zou
Abstract:
Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique that tries to estimate treatment effects by taking into account covariates that could impact the efficacy of the study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed. In some cases, this assumption may not hold. In this paper, a machine learning method, Bayesian Additive Regression Tree (BART), is used as a more robust matching method. BART can work well when models are misspecified, since it can be used to model heterogeneous treatment effects. Moreover, it has the capability to handle non-linear main effects and multi-way interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method than PSM. A comparison of the analysis results from PSM and BMM shows that BMM performs well and has better prediction capability when the response values are not normally distributed.
Keywords: BART, Bayesian, matching, regression
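Both PSM and the proposed BMM share the same final step: pairing each treated subject with the control whose model score is closest. The greedy 1:1 nearest-neighbor matcher below is a generic sketch of that step only; in practice the scores would come from a logistic model (PSM) or a BART fit (BMM), and the score values here are illustrative.

```python
# Greedy 1:1 nearest-neighbor matching on model scores.
def greedy_match(treated, controls):
    """Match each treated score to the nearest unused control score.

    Returns (treated_index, control_index) pairs.
    """
    available = dict(enumerate(controls))  # controls not yet matched
    pairs = []
    for t_idx, t_score in enumerate(treated):
        c_idx = min(available, key=lambda i: abs(available[i] - t_score))
        pairs.append((t_idx, c_idx))
        del available[c_idx]               # each control is used at most once
    return pairs

treated_scores = [0.80, 0.35]
control_scores = [0.30, 0.78, 0.55]
print(greedy_match(treated_scores, control_scores))  # [(0, 1), (1, 0)]
```

Treatment effects are then estimated on the matched pairs; swapping BART-derived scores for propensity scores is what makes BMM robust when the normality assumption fails.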
Procedia PDF Downloads 147