Search results for: XR-assisted operator
136 Resilient Manufacturing in Times of Mass Customisation: Using Augmented Reality to Improve Training and Operating Practices of EV’s Battery Assembly
Authors: Lorena Caires Moreira, Marcos Kauffman
Abstract:
This paper outlines the results of experimental research on deploying an emerging augmented reality (AR) system for real-time task assistance of highly customized and high-risk manual operations. The focus is on operators’ training capabilities and the aim is to test if such technologies can support achieving higher levels of knowledge retention and accuracy of task execution to improve health and safety (H&S) levels. The proposed solution is tested and validated using a real-world case study of electric vehicles’ battery module assembly. The experimental results revealed that the proposed AR method improved the training practices by increasing the knowledge retention levels from 40% to 84% and improved the accuracy of task execution from 20% to 71%, compared to the traditional paper-based method. The results of this research can be used as a demonstration of how emerging technologies are advancing the choice of manual, hybrid, or fully automated processes by promoting the connected worker (Industry 5.0) and supporting manufacturing in becoming more resilient in times of constant market changes.
Keywords: augmented reality, extended reality, connected worker, XR-assisted operator, manual assembly, industry 5.0, smart training, battery assembly
Procedia PDF Downloads 128
135 A Contemporary Gender Predominance: A Honduran Textile Manufacturing Diagnosis
Authors: Jesús David Argueta Moreno, Taria Ruiz, Cesar Ortega
Abstract:
This qualitative investigation represents the first stage of a human capital engineering analysis of the small and medium textile manufacturing companies located in the city of Tegucigalpa, Honduras, where the symptoms of the local manufacturing industry describe a severe gender displacement phenomenon. The evaluation of this phenomenon intends to trigger the Honduran small and medium textile manufacturers into a collective performance analysis, through the development of a sectorial diagnosis and the creation of a personalized manufacturers’ guide tailored to the needs of Honduran textile manufacturing, in order to strengthen their personnel capacities and thereby smooth the gender equilibrium in this particular sector. It is worth mentioning that over the last decade, the female gender has gathered positive statistics in Central American job markets, where the local business landscape describes a significant displacement of Honduran female operators by male workers that has significantly diminished their employment predominance. This study therefore aims to evaluate the main features that impact gender supplanting in the local job market, and to holistically describe the Honduran manufacturing context, as well as current textile operator qualifications, in order to infer the most appropriate human resources enforcement approaches and techniques in the industry.
Keywords: gender predominance, manufacturing, higher education institutions, emerging trends
Procedia PDF Downloads 430
134 Evaluation of Hand Arm Vibrations of Low Profile Dump Truck Operators in an Underground Metal Mine According to Job Component Analysis of a Work Cycle
Authors: Sridhar S, Govinda Raj Mandela, Aruna Mangalpady
Abstract:
In the present-day scenario, Indian underground mines are moving towards full-scale mechanisation to improve production and productivity levels. These mines employ a wide variety of earth-moving machines for the transportation of ore and overburden (waste). Low Profile Dump Trucks (LPDTs) have proven advantageous for improving production levels in underground mines through quick transportation. During the operation of an LPDT, different kinds of vibrations are generated which can affect the health of the operator. Keeping this in view, the present research work focuses on the measurement and evaluation of Hand Arm Vibrations (HAVs) from the steering system of LPDTs. The study also aims to evaluate the HAVs of the different job components of a work cycle in operating LPDTs. The HAVs were measured and evaluated according to ISO 5349-2:2001 standards, and the daily vibration exposures A(8) were calculated. The evaluated A(8) results show that LPDTs of 60 and 50 tons capacity have vibration levels above the Exposure Action Value (EAV) of 2.5 m/s² in every job component of the work cycle. Further, the results show that vibration levels were highest during empty haulage, especially during the descending journey, compared to the other job components in all LPDTs considered for the study.
Keywords: low profile dump trucks, hand arm vibrations, exposure action value, underground mines
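For reference, a daily exposure figure of this kind can be assembled from per-component measurements with the energy-equivalent formula of ISO 5349-1, A(8) = sqrt((1/T0) · Σ a_hv,i² · T_i) with T0 = 8 h. A minimal Python sketch follows; the job-component values are illustrative placeholders, not the study’s measurements:

```python
import math

def daily_vibration_exposure(exposures):
    """A(8) per ISO 5349-1: energy-equivalent 8-hour exposure.
    `exposures` is a list of (a_hv, hours) pairs, where a_hv is the vibration
    total value, i.e. the root-sum-of-squares of the three frequency-weighted
    axis accelerations: a_hv = sqrt(a_wx^2 + a_wy^2 + a_wz^2)."""
    t0 = 8.0
    return math.sqrt(sum(a ** 2 * t for a, t in exposures) / t0)

# Hypothetical job components of one LPDT work cycle (values illustrative,
# not the study's data): loading, loaded haul, empty descent, dumping
cycle = [(2.8, 1.5), (3.1, 2.0), (4.2, 2.5), (2.6, 1.0)]
a8 = daily_vibration_exposure(cycle)
print(a8, "exceeds EAV (2.5 m/s^2)?", a8 > 2.5)
```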
Procedia PDF Downloads 131
133 Development of Ecofriendly Ionic Liquid Modified Reverse Phase Liquid Chromatography Method for Simultaneous Determination of Anti-Hyperlipidemic Drugs
Authors: Hassan M. Albishri, Fatimah Al-Shehri, Deia Abd El-Hady
Abstract:
Among analytical techniques, reverse phase liquid chromatography (RPLC) is widely used in the pharmaceutical industry. Ecofriendly analytical chemistry offers the advantage of decreasing environmental impact while increasing operator safety, which makes it a topic of industrial interest. Recently, ionic liquids have been successfully used to reduce or eliminate conventional toxic organic solvents. In the current work, a simple and ecofriendly ionic liquid modified RPLC (IL-RPLC) method has been developed for the first time and compared with RPLC under acidic and neutral mobile phase conditions for the simultaneous determination of atorvastatin-calcium, rosuvastatin and simvastatin. Several effective chromatographic parameters were varied in a systematic way. Adequate results were achieved by mixing ILs with ethanol as a mobile phase under neutral conditions at a 1 mL/min flow rate on a C18 column. The developed IL-RPLC method has been validated for the quantitative determination of the drugs in pharmaceutical formulations. The method showed excellent linearity for the analytes over a wide range of concentrations, with acceptably precise and accurate data. The current IL-RPLC technique could have vast applications, particularly under neutral conditions, for simple and greener (bio)analytical applications of pharmaceuticals.
Keywords: ionic liquid, RPLC, anti-hyperlipidemic drugs, ecofriendly
Procedia PDF Downloads 256
132 A Combined Approach Based on Artificial Intelligence and Computer Vision for Qualitative Grading of Rice Grains
Authors: Hemad Zareiforoush, Saeed Minaei, Ahmad Banakar, Mohammad Reza Alizadeh
Abstract:
The quality inspection of rice (Oryza sativa L.) during its various processing stages is very important. In this research, an artificial intelligence-based model coupled with computer vision techniques was developed as a decision support system for the qualitative grading of rice grains. For conducting the experiments, 25 samples of rice grains with different levels of percentage of broken kernels (PBK) and degree of milling (DOM) were first prepared, and their qualitative grade was assessed by experienced experts. Then, the quality parameters of the same samples were determined using a machine vision system. A grading model was developed based on fuzzy logic theory in MATLAB to relate the qualitative characteristics of the product to its quality. In total, 25 rules were used for qualitative grading, based on the AND operator and a Mamdani inference system. The fuzzy inference system consisted of two input linguistic variables, DOM and PBK, obtained by the machine vision system, and one output variable (quality of the product). The model output was finally defuzzified using the Center of Maximum (COM) method. In order to evaluate the developed model, the output of the fuzzy system was compared with the experts’ assessments. It was revealed that the developed model can estimate the qualitative grade of the product with an accuracy of 95.74%.
Keywords: machine vision, fuzzy logic, rice, quality
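To make the inference pipeline concrete, here is a minimal Mamdani sketch in Python: AND as the minimum over antecedent memberships, max-aggregation of clipped consequents, and Center of Maximum defuzzification. The membership breakpoints and the three rules are illustrative assumptions, not the paper’s 25 rules:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Illustrative linguistic terms (breakpoints assumed, not taken from the paper)
dom = {"low": (0, 20, 50), "med": (30, 50, 70), "high": (50, 80, 100)}
pbk = {"low": (0, 5, 15), "med": (10, 20, 30), "high": (25, 40, 60)}
quality = {"poor": (0, 2, 4), "fair": (3, 5, 7), "good": (6, 8, 10)}

# Three illustrative rules out of the paper's 25: (DOM, PBK) -> quality
rules = [("high", "low", "good"), ("med", "med", "fair"), ("low", "high", "poor")]

def grade(dom_val, pbk_val):
    z = np.linspace(0, 10, 1001)      # output universe (quality score)
    agg = np.zeros_like(z)
    for d, p, q in rules:
        # AND operator = min of the two antecedent membership degrees
        w = min(tri(dom_val, *dom[d]), tri(pbk_val, *pbk[p]))
        # Mamdani implication: clip the consequent, aggregate rules by max
        agg = np.maximum(agg, np.minimum(w, tri(z, *quality[q])))
    # Center of Maximum defuzzification: mean of the points of maximal degree
    return z[agg >= agg.max() - 1e-9].mean()

print(grade(dom_val=80, pbk_val=8))   # a well-milled, low-breakage sample
```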
Procedia PDF Downloads 419
131 Development of a Cost Effective Two Wheel Tractor Mounted Mobile Maize Sheller for Small Farmers in Bangladesh
Authors: M. Israil Hossain, T. P. Tiwari, Ashrafuzzaman Gulandaz, Nusrat Jahan
Abstract:
The two-wheel tractor (power tiller) is a common tillage tool in Bangladesh agriculture, giving easy access to fragmented land at a price affordable to small farmers. A traditional maize sheller needs to be carried from place to place by hooking it to the two-wheel tractor (2WT) and set up again for shelling, which lengthens the preparation time for maize shelling. The mobile maize sheller eliminates this transportation problem and can start shelling instantly in any place, as it is attached to the 2WT. It is a counterclockwise-rotating, axial-flow sheller in which grain is separated by the frictional force between the spike teeth and the concave. The sheller is attached with nuts and bolts in front of the engine base of the 2WT. Its operating power comes from the flywheel of the tractor engine through a V-belt pulley arrangement. The average shelling capacity of the mobile sheller is 2.0 t/hr, with 2.2% broken kernels and a shelling efficiency of 97%. The average maize shelling cost is Tk. 0.22/kg, against a traditional custom hire rate of Tk. 1.0/kg (1 US$ = Tk. 78.0). The service provider can transport the mobile maize sheller over long distances from the operator’s seat of the 2WT. Manufacturers have started fabricating the mobile maize sheller, which is also suitable for other countries where the 2WT is available for farming operations.
Keywords: cost effective, mobile maize sheller, maize shelling capacity, small farmers, two wheel tractor
Procedia PDF Downloads 184
130 An Improved Discrete Version of Teaching–Learning-Based Optimization for Supply Chain Network Design
Authors: Ehsan Yadegari
Abstract:
While there are several metaheuristics and exact approaches to solving the Supply Chain Network Design (SCND) problem, there remains an unfilled gap in applying the Teaching-Learning-Based Optimization (TLBO) algorithm, which has demonstrated desirable results on complicated combinatorial optimization problems. The present study introduces a Discrete Self-Study TLBO (DSS-TLBO) with a priority-based solution representation that can solve a supply chain network configuration model to lower the total expenses of establishing facilities and the flow of materials. The network features four layers: suppliers, plants, distribution centers (DCs), and customer zones. It is designed to meet customer demand by transporting material between the layers of the network and locating facilities in the most economically promising locations. To improve solution quality and increase the speed of TLBO, a distinct operator was introduced that provides self-adaptation (self-study) in the algorithm based on four types of local search. In addition, since TLBO uses a continuous solution representation while the priority-based representation is discrete, a few modifications were added to the algorithm to remove infeasible solutions. As shown by the experimental results, the superiority of DSS-TLBO over pure TLBO, the genetic algorithm (GA) and the Firefly Algorithm (FA) was established.
Keywords: supply chain network design, teaching–learning-based optimization, improved metaheuristics, discrete solution representation
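For readers unfamiliar with TLBO, the sketch below shows one iteration of the underlying continuous teacher and learner phases on a toy objective; the paper’s discrete, priority-based encoding and its self-study operator are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

def tlbo_step(pop, cost):
    """One TLBO iteration (continuous form). The paper's DSS-TLBO adds a
    discrete priority-based encoding and a self-study local-search operator;
    this sketch shows only the core teacher and learner phases."""
    n, d = pop.shape
    fitness = np.array([cost(x) for x in pop])
    teacher = pop[fitness.argmin()]
    # Teacher phase: move the class toward the teacher, away from the mean
    tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    new = pop + rng.random((n, d)) * (teacher - tf * pop.mean(axis=0))
    improved = np.array([cost(x) for x in new]) < fitness
    pop = np.where(improved[:, None], new, pop)
    # Learner phase: each learner moves relative to a random peer
    for i in range(n):
        j = rng.integers(n)
        if j == i:
            continue
        step = pop[i] - pop[j] if cost(pop[i]) < cost(pop[j]) else pop[j] - pop[i]
        cand = pop[i] + rng.random(d) * step
        if cost(cand) < cost(pop[i]):
            pop[i] = cand
    return pop

sphere = lambda x: float(np.sum(x ** 2))         # toy objective, not SCND cost
pop = rng.uniform(-5, 5, size=(20, 4))
for _ in range(50):
    pop = tlbo_step(pop, sphere)
print(min(sphere(x) for x in pop))
```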
Procedia PDF Downloads 52
129 State Estimator Performance Enhancement: Methods for Identifying Errors in Modelling and Telemetry
Authors: M. Ananthakrishnan, Sunil K Patil, Koti Naveen, Inuganti Hemanth Kumar
Abstract:
The state estimation output of an EMS forms the base case for all other advanced applications used in real time by a power system operator. Tuning the state estimator is a repeated process that cannot be abandoned once a good solution is obtained. This paper demonstrates methods to improve the state estimator solution by identifying incorrect modelling and telemetry inputs to the application. In this work, the identification of database topology modelling errors by plotting the static network using node-to-node connection details is demonstrated with examples. Analytical methods to identify wrong transmission parameters, incorrect limits, and mistakes in pseudo load and generator modelling are explained through various observed cases. Further, methods for active and reactive power tuning using a bus summation display, reactive power absorption summaries, and transformer tap correction are also described. In a large power system, verifying all network static data and modelling parameters on a regular basis is difficult. The proposed tuning methods can easily be used by operators to quickly identify errors and obtain the best possible state estimation performance. This, in turn, can lead to improved decision-support capabilities, ultimately enhancing the safety and reliability of the power grid.
Keywords: active power tuning, database modelling, reactive power, state estimator
Procedia PDF Downloads 7
128 Electroencephalography (EEG) Analysis of Alcoholic and Control Subjects Using Multiscale Permutation Entropy
Authors: Lal Hussain, Wajid Aziz, Sajjad Ahmed Nadeem, Saeed Arif Shah, Abdul Majid
Abstract:
Brain electrical activity, as reflected in electroencephalography (EEG), has been analyzed and diagnosed using various techniques. Among them, measures of complexity, nonlinearity, disorder, and unpredictability play a vital role, owing to the nonlinear interconnection between the functional and anatomical subsystems that emerges in the brain in the healthy state and during disease. Alcohol abuse has many social and economic consequences, including impairments of memory, decision-making, and concentration. Alcoholism not only damages the brain but is also associated with emotional, behavioral, and cognitive impairments, damaging the white and gray brain matter. A recently developed signal analysis method, Multiscale Permutation Entropy (MPE), is proposed to estimate the complexity of long-range temporally correlated EEG time series of alcoholic and control subjects acquired from the University of California Machine Learning Repository, and the results are compared with MSE. Using MPE, the coarse-grained series is first generated, and the PE is computed for each coarse-grained time series for the electrodes O1, O2, C3, C4, F2, F3, F4, F7, F8, Fp1, Fp2, P3, P4, T7, and T8. The results computed for each electrode using MPE give higher significance values than MSE, as well as correspondingly larger mean rank differences. Likewise, the ROC and the area under the ROC also give a higher separation for each electrode using MPE in comparison to MSE.
Keywords: electroencephalogram (EEG), multiscale permutation entropy (MPE), multiscale sample entropy (MSE), permutation entropy (PE), Mann-Whitney test (MWT), receiver operator curve (ROC), complexity measure
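A compact sketch of the MPE computation as described: coarse-grain the series at each scale, then compute the normalized permutation entropy of the ordinal patterns (the order m and delay are assumed parameters, not values from the paper):

```python
import numpy as np
from math import factorial

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy of order m (Bandt-Pompe patterns)."""
    patterns = {}
    for i in range(len(x) - (m - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))
        patterns[pattern] = patterns.get(pattern, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def multiscale_permutation_entropy(x, m=3, delay=1, scales=range(1, 11)):
    return [permutation_entropy(coarse_grain(np.asarray(x), s), m, delay)
            for s in scales]

# Example on white noise: PE stays close to 1 across scales
rng = np.random.default_rng(0)
print(multiscale_permutation_entropy(rng.standard_normal(5000)))
```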
Procedia PDF Downloads 495
127 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing
Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng
Abstract:
Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) decides how to dedicate nodes to specific applications in order to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming at minimizing the communication hop cost, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two allocation strategies are considered: the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer program is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed; it extends simulated annealing (SA) by learning a repair operator and employing it in a guided heuristic search. The efficacy of NSA is demonstrated in a computational study against SA and SCIP. The results of the numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware
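As background, the SA core that NSA extends can be sketched as follows; the learned repair operator is replaced here by a plain random swap, an assumption for illustration only, and the toy cost is a stand-in for the hop-count metric:

```python
import math, random

def simulated_annealing(init_assign, cost, perturb, t0=1.0, alpha=0.995, iters=20000):
    """Generic SA skeleton; in NSA the `perturb` (repair) operator would be
    a learned policy rather than this random move."""
    x, fx = init_assign, cost(init_assign)
    best, fbest, t = x, fx, t0
    for _ in range(iters):
        y = perturb(x)
        fy = cost(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability
        if fy <= fx or random.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling schedule
    return best, fbest

# Toy TJAP-like instance: assign 5 jobs to 5 nodes, minimizing a hop-cost proxy
def cost(assign):  # hypothetical inter-job interference metric
    return sum(abs(assign[i] - i) for i in range(len(assign)))

def swap_two(assign):
    a = list(assign)
    i, j = random.sample(range(len(a)), 2)
    a[i], a[j] = a[j], a[i]
    return a

print(simulated_annealing(list(range(5))[::-1], cost, swap_two))
```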
Procedia PDF Downloads 117
126 Multi-Objective Optimization in Carbon Abatement Technology Cycles (CAT) and Related Areas: Survey, Developments and Prospects
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele
Abstract:
An infinitesimal increase in performance can bring an immense reduction in the operating and capital expenses of a power generation system. Therefore, studies are constantly being carried out to improve both conventional and novel power cycles. Globally, power producers are constantly researching ways to minimize emissions and to collectively downsize the total cost rate of power plants. A substantial spurt of developmental low-carbon cycle technologies has been suggested and studied; however, they all have their limitations and financial implications. In the area of carbon abatement in power plants, three major objectives conflict: the cost rate of the plant, the power output, and the environmental impact, since an improvement in one of these parameters directly affects the others. This poses a multi-objective problem, and it is paramount to be able to discern the point where improving one objective degrades another. Hence the need for a Pareto-based optimization algorithm, which finds the points where improving one objective begins to influence another negatively and stops there. The application of a Pareto-based optimization algorithm helps the user, operator, or designer make an informed decision. This paper sheds light on the areas in which multi-objective optimization has been applied to carbon abatement technologies in the last five years, along with developments and prospects.
Keywords: gas turbine, low carbon technology, pareto optimal, multi-objective optimization
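The notion of Pareto optimality the survey relies on can be stated in a few lines of code: a design is kept only if no other design is at least as good in every objective. A small sketch, with all objectives cast as minimization and toy numbers in place of real plant data:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming all objectives are minimized.
    (Illustrative sketch: the three objectives here stand for cost rate,
    power output, and environmental impact; power output is maximized, so
    it is negated to fit the minimization convention.)"""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Toy (cost, -power, emissions) triples; the last design is dominated
plants = [(100, -50, 30), (120, -60, 25), (110, -55, 35), (130, -50, 40)]
print(pareto_front(plants))
```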
Procedia PDF Downloads 791
125 IoT Based Agriculture Monitoring Framework for Sustainable Rice Production
Authors: Armanul Hoque Shaon, Md Baizid Mahmud, Askander Nobi, Md. Raju Ahmed, Md. Jiabul Hoque
Abstract:
In the Internet of Things (IoT), devices are linked to the internet through a wireless network, allowing them to collect and transmit data without the need for a human operator. Agriculture relies heavily on wireless sensors, which are a vital component of the IoT. Such a wireless sensor network monitors physical or environmental variables like temperature, sound, vibration, pressure, or motion without relying on a central location or sink, and collaboratively passes its data across the network to be analyzed. As the primary source of plant nutrients, the soil is critical to the agricultural industry's continued growth, and we develop an Internet of Things (IoT) solution around it. To organize the network, the sink node collects groundwater levels from the sensor nodes and sends them to the gateway, which centralizes the data. The sink node also gathers soil moisture data and transmits the mean to the gateway, which then forwards it to the website for dissemination. The web server is in charge of storing the soil moisture data and presenting it to the users of the web application. Soil characteristics may thus be collected using the networked method we developed to improve rice production. Paddy land is running out as the population of our nation grows, and the success of this project will depend on the appropriate use of the existing land base.
Keywords: IoT based agriculture monitoring, intelligent irrigation, communicating network, rice production
Procedia PDF Downloads 154
124 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer
Abstract:
This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties, and the forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on several covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood, CML
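The binomial thinning operator at the heart of such models is easy to state in code: α ∘ X is a Binomial(X, α) draw. The sketch below simulates a univariate INARMA(1,1) path with Poisson innovations as an illustration; the paper’s bivariate model additionally couples the two series through correlated innovations:

```python
import numpy as np

rng = np.random.default_rng(42)

def thin(x, alpha):
    """Binomial thinning operator: alpha o x = Binomial(x, alpha)."""
    return rng.binomial(x, alpha)

def inarma11(n, alpha, beta, lam):
    """Univariate INARMA(1,1) path with Poisson innovations:
    X_t = alpha o X_{t-1} + e_t + beta o e_{t-1}.
    (Sketch only; the BINARMA(1,1) of the paper is the bivariate analogue.)"""
    x = np.zeros(n, dtype=int)
    e = rng.poisson(lam, n)
    for t in range(1, n):
        x[t] = thin(x[t - 1], alpha) + e[t] + thin(e[t - 1], beta)
    return x

print(inarma11(10, alpha=0.4, beta=0.3, lam=2.0))
```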
Procedia PDF Downloads 129
123 Evaluating and Reducing Aircraft Technical Delays and Cancellations Impact on Reliability Operational: Case Study of Airline Operator
Authors: Adel A. Ghobbar, Ahmad Bakkar
Abstract:
Although special care is given to maintenance, aircraft systems fail, and these failures cause delays and cancellations. The occurrence of delays and cancellations affects operators and manufacturers negatively. To reduce technical delays and cancellations, one should be able to determine the important systems causing them. The goal of this research is to find a method for defining the delay and cancellation systems that are most expensive for airline operators. After research that identified the relevant information, a predictive model was introduced to forecast the failures and their impact. Data were obtained from the manufacturer's services reliability team database, and methods for evaluating delays and cancellations were then identified. No cost estimation methods were used, owing to their complexity. The model that was developed takes into account the frequency of delays and cancellations and uses weighting factors, based on customer experience, to give an indication of the severity of their duration. The data analysis has shown that delay and cancellation events are not seasonal and do not follow any specific trends. The use of the weighting factor does influence the shortlist over short periods (monthly) but not over the analyzed period of three years. The landing gear and the navigation system are among the top three factors causing delays and cancellations for all three aircraft types. The results confirmed that cooperation between certain operators and the manufacturer reduces the impact of delays and cancellations.
Keywords: reliability, availability, delays & cancellations, aircraft maintenance
Procedia PDF Downloads 132
122 Comparison of Two Anesthetic Methods during Interventional Neuroradiology Procedure: Propofol versus Sevoflurane Using Patient State Index
Authors: Ki Hwa Lee, Eunsu Kang, Jae Hong Park
Abstract:
Background: Interventional neuroradiology (INR) has been a rapidly growing and evolving neurosurgical field during the past few decades. Sevoflurane and propofol are both suitable anesthetics for INR procedures. Monitoring of the depth of anesthesia is used very widely. The SEDLine™ monitor, a 4-channel processed EEG monitor, uses a proprietary algorithm to analyze the raw EEG signal and displays Patient State Index (PSI) values. Only a few studies have examined the PSI in neuro-anesthesia. We aimed to investigate the differences in PSI values and hemodynamic variables between sevoflurane and propofol anesthesia during INR procedures. Methods: We retrospectively reviewed the medical records of patients scheduled to undergo embolization of a non-ruptured intracranial aneurysm by a single operator from May 2013 to December 2014. Sixty-five patients were categorized into two groups: sevoflurane (n = 33) and propofol (n = 32). The PSI values, hemodynamic variables, and use of hemodynamic drugs were analyzed. Results: Significant differences were seen between PSI values obtained during different perioperative stages in both groups (P < 0.0001). The PSI values of the propofol group were lower than those of the sevoflurane group during the INR procedure (P < 0.01). Patients in the propofol group had a longer time to extubation and a greater phenylephrine requirement than the sevoflurane group (p < 0.05). Anti-hypertensive drugs were administered more often during extubation in the sevoflurane group (p < 0.05). Conclusions: The PSI can detect the depth of anesthesia and changes in anesthetic concentration during INR procedures. Extubation was faster in the sevoflurane group, but recovery was smoother in the propofol group.
Keywords: interventional neuroradiology, patient state index, propofol, sevoflurane
Procedia PDF Downloads 180
121 General Architecture for Automation of Machine Learning Practices
Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain
Abstract:
Data collection, data preparation, model training, model evaluation, and deployment are the processes in a typical machine learning workflow. Training data needs to be gathered and organised, which often entails collecting a sizable dataset and cleaning it to remove or correct inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired; this often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, and reducing dimensionality. This pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it needs to be assessed by computing metrics like accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial steps in the machine learning (ML) workflow, must be carried out. Various machine learning algorithms can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example, for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of selected features, a different algorithm can be used. As a result, in order to get the optimum outcome, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large combination set of pre-processing steps and algorithms into an automated workflow, which simplifies the task of exploring all possibilities.
Keywords: machine learning, automation, AutoML, architecture, operator pool, configuration, scheduler
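The combination set such an architecture organizes can be pictured as a Cartesian product over an operator pool, as in this hedged scikit-learn sketch (the components listed are an illustrative subset, not the paper’s pool, and iris is only a stand-in dataset):

```python
from itertools import product
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# A small "operator pool": each pre-processing slot and the estimator slot
# offer interchangeable components
imputers = [SimpleImputer(strategy="mean"), SimpleImputer(strategy="median")]
scalers = [StandardScaler(), MinMaxScaler()]
models = [LogisticRegression(max_iter=500), RandomForestClassifier()]

X, y = load_iris(return_X_y=True)
results = []
# The scheduler's job: enumerate and evaluate every configuration
for imp, sc, mdl in product(imputers, scalers, models):
    pipe = Pipeline([("impute", imp), ("scale", sc), ("model", mdl)])
    score = cross_val_score(pipe, X, y, cv=5).mean()
    results.append((score, pipe))

best_score, best_pipe = max(results, key=lambda r: r[0])
print(best_score, best_pipe)
```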
Procedia PDF Downloads 57
120 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation
Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu
Abstract:
Compressed sensing (CS) based computed tomographic (CT) reconstruction algorithms use total variation (TV) to transform the CT image into a sparse domain and minimize the L1-norm of the sparse image for reconstruction. Unlike traditional CS-based reconstruction, which only calculates x-coordinate and y-coordinate TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use eight-directional TV to transform the CT image into the sparse domain; furthermore, we use 26-directional TV for 3D reconstruction. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful at reducing noise and increasing image quality. To validate and evaluate the performance of this method, we use both a Shepp-Logan phantom and a head phantom as reconstruction targets with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). The results show that the multi-directional TV method can reconstruct images with fewer artifacts than the traditional CS-based reconstruction algorithm that only calculates x-coordinate and y-coordinate TV. We also chose RMSE, PSNR, and UQI as the parameters for quantitative analysis; whichever parameter is calculated, the proposed multi-directional TV method performs better.
Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator
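As an illustration of the sparse transform, the eight-directional TV of an image can be sketched as the summed absolute differences against the eight neighbouring shifts of each pixel; the uniform weighting below is an assumption, not the authors’ exact operator:

```python
import numpy as np

def multi_directional_tv(img):
    """Sum of absolute finite differences along eight directions around each
    pixel (axial and diagonal shifts); a sketch of the multi-directional
    gradient operator described in the abstract."""
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1),
              (0, -1), (-1, 0), (-1, -1), (-1, 1)]
    tv = 0.0
    for dy, dx in shifts:
        diff = img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        tv += np.abs(diff).sum()
    return tv

phantom = np.zeros((64, 64))
phantom[20:40, 20:40] = 1.0           # piecewise-constant test image
print(multi_directional_tv(phantom))  # small TV: sparse in the gradient domain
```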
Procedia PDF Downloads 256
119 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations
Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay
Abstract:
Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although there are numerous statistical models and artificial intelligence techniques available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring wear on the tool cutting edges is carried out by the operator, who performs a manual inspection, causing undesirable stoppages of machine tools and, consequently, costs incurred from lost productivity. The present study concerns the development of a flute tracking system to segment the signals related to each physical flute of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition of the individual flutes separately, in order to determine their progressive wear rates and to predict imminent tool failure. The results of this study clearly show that the signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, the results illustrate that by segmenting the sensor signal by flute, it is possible to investigate the wear in each physical cutting edge of the cutting tool. These findings are significant in that they facilitate online condition monitoring of a cutting tool for each specific flute, without the need for operators or engineers to perform manual inspections of the tool.
Keywords: machining, milling operation, tool condition monitoring, tool wear prediction
Procedia PDF Downloads 303
118 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider
Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón
Abstract:
The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. Better measurements would be possible by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels for accepting or rejecting incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system that integrates into the global control system of the ALICE experiment.
Keywords: AD0, ALICE, DCS, LHC
Procedia PDF Downloads 305
117 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels
Authors: Tal Remez, Or Litany, Alex Bronstein
Abstract:
The pursuit of smaller pixel sizes at ever-increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept for an image sensor with dense sub-diffraction-limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has recently been embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, which must produce a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient, hardware-friendly, real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
Keywords: binary pixels, maximum likelihood, neural networks, sparse coding
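The flavour of the ML data-fitting term can be seen in the simplest unit-threshold case, where the estimate has a closed form: P(jot = 0) = e^(-λ), so λ̂ = -ln(fraction of zero jots). A sketch under that assumption (the paper treats general thresholds and adds a sparse prior):

```python
import numpy as np

rng = np.random.default_rng(7)

def emulate_jots(lam, n_jots):
    """One-bit binary pixels with unit threshold: a jot reads 1 iff it
    absorbed at least one photon, photon counts being Poisson(lam)."""
    return (rng.poisson(lam, n_jots) >= 1).astype(int)

def ml_intensity(jots):
    """Closed-form ML estimate for the threshold-1 model:
    P(jot = 0) = exp(-lam)  =>  lam_hat = -log(fraction of zeros)."""
    p0 = np.clip(1.0 - jots.mean(), 1e-12, 1.0)
    return -np.log(p0)

true_lam = 0.8
jots = emulate_jots(true_lam, n_jots=10_000)  # oversampled binary measurements
print(true_lam, ml_intensity(jots))
```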
Procedia PDF Downloads 201
116 An Operators’ Real-sense-based Fire Simulation for Human Factors Validation in Nuclear Power Plants
Authors: Sa-Kil Kim, Jang-Soo Lee
Abstract:
On March 31, 1993, a severe fire accident took place in a nuclear power plant located in Narora in North India. The event involved a major fire in the turbine building of NAPS unit-1 and resulted in a total loss of power to the unit for 17 hours. In addition, there was a heavy ingress of smoke into the control room, mainly through the intake of the ventilation system, forcing the operators to vacate the control room. The Narora fire accident teaches us that operators may lose composure and behave unpredictably during a fire. After the Fukushima accident, which resulted from a natural disaster, unanticipated external events must also be prepared for and controlled to ensure the ultimate safety of nuclear power plants. Since last year, our research team has been developing a test and evaluation facility that can simulate external events, such as earthquakes and fires, based on the operators' real sense. As one of the results of the project, we propose a unit real-sense-based facility that can simulate fire events in a control room, to be used as a test-bed for human factors validation. The test-bed has the shape of the operator's workstation and functions that simulate fire conditions, such as smoke, heat, and auditory alarms, in accordance with prepared fire scenarios. Furthermore, the test-bed can be used for operator training and experience.
Keywords: human behavior in fire, human factors validation, nuclear power plants, real-sense-based fire simulation
Procedia PDF Downloads 283
115 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter
Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri
Abstract:
Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on the detection of the R-peaks of ECG artifacts in EEG, EMG and EOG signals, using an energy-based function and a novel Signal Quality Index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based on the individual SQIs. Data fusion of the HR estimates is then performed by weighting each estimate by the Kalman filters’ SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual estimates. The method was evaluated on the MIMIC II database of PhysioNet, drawn from the bedside monitors of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.
Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion
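The nonlinear (Teager-Kaiser) energy operator used to expose the R-peaks is a three-sample formula, ψ[n] = x[n]² - x[n-1]·x[n+1]; a minimal sketch:

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser (nonlinear) energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1].
    It emphasizes sharp, high-frequency events such as QRS complexes,
    which is what makes ECG artifacts in EEG/EMG/EOG traces detectable."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

# Toy signal: a flat trace with one spike; the operator peaks at the spike
sig = np.zeros(20)
sig[10] = 1.0
print(teager_kaiser(sig))
```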
Procedia PDF Downloads 696
114 Improving Training of Mineral Processing Operators Through Gamification and Modelling and Simulation
Authors: Pedro A. S. Bergamo, Emilia S. Streng, Jan Rosenkranz, Yousef Ghorbani
Abstract:
Within the often-hazardous mineral industry, simulation training has rapidly gained appreciation as an important method of increasing site safety and productivity through enhanced operator skill and knowledge. Performance calculations related to froth flotation, one of the most important concentration methods, are probably the hardest topic taught during the training of plant operators. Currently, most training programmes teach these skills by traditional methods, like slide presentations and hand-written exercises, with a heavy focus on memorization. To improve certain aspects of this training, we developed “MinFloat”, which teaches the operating formulas of the froth flotation process with the help of gamification. The simulation core, based on a first-principles flotation model, was implemented in Unity3D, and an instructor tutoring system was developed which presents didactic content and reviews the selected answers. The game was tested by 25 professionals with extensive experience in the mining industry, based on a questionnaire formulated for training evaluations. According to their feedback, the game scored well in terms of quality, didactic efficacy and inspiring character. The feedback of the testers on the main target audience and the outlook for the presented solution are discussed. This paper aims to provide technical background on the construction of educational games for the mining industry, besides showing how feedback from experts can be gathered more efficiently thanks to new technologies such as online forms.
Keywords: training evaluation, simulation based training, modelling and simulation, froth flotation
Procedia PDF Downloads 113
113 Bi-Criteria Vehicle Routing Problem for Possibility Environment
Authors: Bezhan Ghvaberidze
Abstract:
A multiple criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movement between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of total fuzzy travel time on closed routes, is constructed for the FVRP. A new second criterion, maximization of the feasibility of movement on the closed routes, is constructed by the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over the so-called “promising” routes, selected from all admissible closed routes. The convenient selection of the promising routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used, and an exact algorithm is implemented based on D. Knuth’s Dancing Links technique and the DLX algorithm. The main objective was to present a new approach for the FVRP in which there are difficulties while moving on the roads, called FVRP for extreme conditions (FVRP-EC); a further aim was to construct the corresponding solving model. Results are illustrated on a numerical example where all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.
Keywords: combinatorial optimization, fuzzy vehicle routing problem, multiple objective programming, possibility theory
Procedia PDF Downloads 485
112 Diagnostic Accuracy of the Tuberculin Skin Test for Tuberculosis Diagnosis: Interest of Using ROC Curve and Fagan’s Nomogram
Authors: Nouira Mariem, Ben Rayana Hazem, Ennigrou Samir
Abstract:
Background and aim: During the past decade, the frequency of extrapulmonary forms of tuberculosis has increased, and these forms are under-diagnosed using conventional tests. The aim of this study was to evaluate the performance of the Tuberculin Skin Test (TST) for the diagnosis of tuberculosis, using the ROC curve and Fagan’s nomogram methodology. Methods: This was a case-control, multicenter study in 11 anti-tuberculosis centers in Tunisia, from June to November 2014. The cases were adults aged between 18 and 55 years with confirmed tuberculosis; controls were free from tuberculosis. A data collection sheet was filled out and a TST was performed for each participant. Diagnostic accuracy measures of the TST were estimated using the ROC curve and the area under the curve to derive the sensitivity and specificity of a determined cut-off point; Fagan’s nomogram was used to estimate its predictive values. Results: Overall, 1053 participants were enrolled: 339 cases (sex ratio (M/F) = 0.87) and 714 controls (sex ratio (M/F) = 0.99). The mean age was 38.3±11.8 years for cases and 33.6±11 years for controls. The mean diameter of the TST induration was significantly higher among cases than controls (13.7 mm vs. 6.2 mm; p = 10⁻⁶). The area under the curve was 0.789 [95% CI: 0.758-0.819; p = 0.01], corresponding to a moderate discriminating power for this test. The most discriminative cut-off value of the TST, associated with the best sensitivity (73.7%) and specificity (76.6%) pair, was about 11 mm, with a Youden index of 0.503. The positive and negative predictive values were 3.11% and 99.52%, respectively. Conclusion: In view of these results, the TST can be used for tuberculosis diagnosis with good sensitivity and specificity. However, the measurement of skin induration and its interpretation are operator-dependent and remain difficult and subjective. Combining the TST with another test, such as the QuantiFERON test, would be a good alternative.
Keywords: tuberculosis, tuberculin skin test, ROC curve, cut-off
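Fagan’s nomogram is just Bayes’ theorem on the odds scale, so the reported sensitivity/specificity pair can be turned into post-test probabilities in a few lines; the 10% pre-test probability below is a hypothetical value, not from the study:

```python
def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Fagan's nomogram as arithmetic: convert pre-test probability to odds,
    multiply by the likelihood ratio, convert back.
    LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

sens, spec = 0.737, 0.766          # values reported for the 11 mm cut-off
youden = sens + spec - 1           # 0.503, as in the abstract
# Hypothetical 10% pre-test probability, positive TST at the 11 mm cut-off
print(youden, post_test_probability(0.10, sens, spec, positive=True))
```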
Procedia PDF Downloads 67
111 Normalized Enterprises Architectures: Portugal’s Public Procurement System Application
Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso
Abstract:
The Normalized Systems Theory, which was designed to be applied to software architectures, provides a set of theorems, elements and rules with the purpose of enabling evolution in information systems and ensuring that they are ready for change. To make that possible, this work's solution is to apply the Normalized Systems Theory to the domain of enterprise architectures, using ArchiMate. This application is achieved through the adaptation of the elements of the theory, making them artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory's encapsulation rules into architectural rules. This way, it is possible to create normalized enterprise architectures that fulfill the needs and requirements of the business. The solution was demonstrated on the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities: every economic operator should have access to all public tenders, which are published on any of the 6 existing platforms, independently of where they are registered. To make this possible, we applied our solution to the construction of two different architectures capable of fulfilling the requirements of the Portuguese government. One of these architectures, TO-BE A, has a message broker that performs the communication between the platforms; the other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these two architectures, we also present the AS-IS architecture, which captures the current behavior of the Public Procurement System. Our evaluation is based on a comparison between the AS-IS and the TO-BE architectures regarding the fulfillment of the rules and theorems of the Normalized Systems Theory and some quality metrics.
Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms
Procedia PDF Downloads 357
110 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach
Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma
Abstract:
Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and, especially, minor hysteresis loops is one of the most challenging topics in computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous (NARX) neural network model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good results obtained very promising for future applications in this context.
Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX
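A NARX model predicts the next output from lagged inputs (the exogenous excitation) and lagged outputs. The sketch below builds such lagged features and fits a small feedforward network as a stand-in; the surrogate data, lag orders, and network size are assumptions, not the paper’s setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(u, y, n_u=3, n_y=3):
    """Build NARX regression pairs: predict y[t] from past inputs (exogenous
    excitation current) and past outputs (field), the autoregressive part."""
    X, T = [], []
    for t in range(max(n_u, n_y), len(y)):
        X.append(np.r_[u[t - n_u:t], y[t - n_y:t]])
        T.append(y[t])
    return np.array(X), np.array(T)

# Toy surrogate data: the "field" saturates and lags the "current"
# (not real magnet measurements)
rng = np.random.default_rng(1)
u = np.cumsum(rng.standard_normal(2000)) * 0.01
y = np.tanh(u)
for t in range(1, len(y)):                      # crude hysteresis-like memory
    y[t] = 0.9 * y[t - 1] + 0.1 * np.tanh(u[t])

X, T = make_lagged(u, y)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, T)
print(model.score(X, T))  # in-sample fit; true NARX use feeds predictions back
```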
Procedia PDF Downloads 130
109 Predictive Analytics of Bike Sharing Rider Parameters
Authors: Bongs Lainjo
Abstract:
The evolution and escalation of bike-sharing programs (BSPs) continue unabated. Since the sixties, many countries have introduced different BSP models and strategies, with variations ranging from dockless models to electronic real-time monitoring systems. Reasons for using a BSP include recreation, errands, work, etc., and there is every indication that complex, more innovative, rider-friendly systems are yet to be introduced. The objective of this paper is to analyze the variables currently established by different operators and to streamline them, identifying the most compelling ones using analytics. Given the contents of the available databases, there is a lack of uniformity and of a common standard on what is required and what is not. Two factors appear to be common: user type (registered and unregistered) and the duration of each trip. This article uses historical data provided by one operator based in the greater Washington, District of Columbia, USA area. Several variables, including categorical and continuous data types, were screened; eight out of 18 were considered acceptable and contribute significantly to a useful and reliable predictive model. Bike-sharing systems have become popular in recent years all around the world; although this trend has resulted in many studies of public cycling systems, there have been few previous studies of the factors influencing public bicycle travel behavior. A bike-sharing system is a computer-controlled system in which individuals can borrow bikes, for a fee or for free, for a limited period. This study has identified unprecedented, useful, and pragmatic parameters required for improving BSP ridership dynamics.
Keywords: sharing program, historical data, parameters, ridership dynamics, trip duration
Procedia PDF Downloads 138
108 Using Deep Learning for the Detection of Faulty RJ45 Connectors on a Radio Base Station
Authors: Djamel Fawzi Hadj Sadok, Marrone Silvério Melo Dantas Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner
Abstract:
A radio base station (RBS), part of the radio access network, is a particular type of equipment that supports the connection between a wide range of cellular user devices and an operator's network access infrastructure. Nowadays, most RBS maintenance is carried out manually, resulting in a time-consuming and costly task. A suitable candidate for RBS maintenance automation is the repair of faulty links between devices caused by missing or unplugged connectors. This paper proposes and compares two deep learning solutions to identify attached RJ45 connectors on network ports. We named the solution based on object detection "connector detection" and the one based on object classification "connector classification". With connector detection, we obtained an accuracy of 0.934 and a mean average precision of 0.903; connector classification achieved a maximum accuracy of 0.981 and an AUC of 0.989. Although connector detection was outperformed in this study, this should not be viewed as an overall result, as connector detection is more flexible in scenarios where there is no precise information about the environment and the possible devices, while connector classification requires that information to be well defined.
Keywords: radio base station, maintenance, classification, detection, deep learning, automation
Procedia PDF Downloads 201
107 Generation of Renewable Energy Through Photovoltaic Panels, Albania Photovoltaic Capacity
Authors: Dylber Qema
Abstract:
Driven by recent developments in technology and growing concern about the sustainability and environmental impact of conventional fuel use, the possibility of producing clean and sustainable energy in significant quantities from renewable sources has sparked interest all over the world. Solar energy is one such source for the generation of electricity, with no emissions or environmental pollution. The electricity produced by photovoltaics can supply a home or business and can even be sold or exchanged with the grid operator. A very positive effect of using photovoltaic modules is that, unlike all other forms of energy production, they produce no greenhouse gases and no chemical waste. Photovoltaics are becoming one of the largest investments in the field of renewable generating units, and improving the reliability of the electric power system is one of the most important impacts of photovoltaic (PV) installations. Renewable energy resources are large enough to meet the energy demands of the whole world, enabling sustainable supply as well as reducing local and global atmospheric emissions. Albania is rated by experts as one of the most favorable countries in Europe for the production of electricity from solar panels, yet the country currently produces about 1% of its energy from the sun, while the rest of its needs are met by hydropower plants and imports. Albania has very good characteristics in terms of solar resource, with an annual solar irradiation of about 1300-1400 kWh/m². Solar energy has great potential and is a permanent energy source with high economic efficiency. Photovoltaic energy is also seen as an alternative, as long periods of drought in Albania have produced crises and high costs for securing energy on the foreign market.
Keywords: capacity, ministry of tourism and environment, obstacles, photovoltaic energy, sustainable
Procedia PDF Downloads 59