Search results for: switching time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8280


6600 Data Integrity: Challenges in Health Information Systems in South Africa

Authors: T. Thulare, M. Herselman, A. Botha

Abstract:

Poor system use, including inappropriate design of health information systems, causes difficulties in communication with patients and increases the time healthcare professionals spend recording the necessary health information for medical records. System features like pop-up reminders, complex menus, and poor user interfaces can make medical records far more time consuming than paper cards and can also affect decision-making processes. Although errors associated with health information, and their real and likely effects on the quality of care and patient safety, have been documented for many years, more research is needed to measure the occurrence of these errors and determine their causes so that solutions can be implemented. The purpose of this paper is therefore to identify data integrity challenges in hospital information systems through a scoping review and, based on the results, to provide recommendations on how to manage them. Only 34 papers out of the 297 publications initially identified in the field were found to be suitable. The results indicate that human and computerized systems are the most common sources of data integrity challenges, and that factors such as policy, environment, health workforce, and lack of awareness contribute to these challenges; if appropriate measures are taken, however, the challenges can be managed.

Keywords: data integrity, data integrity challenges, hospital information systems, South Africa

Procedia PDF Downloads 181
6599 A TFETI Domain Decomposition Solver for von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening

Authors: Martin Cermak, Stanislav Sysala

Abstract:

In this paper we present an efficient parallel implementation of elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to solve the nonlinear problem in parallel on supercomputers, decreasing the solution time and enabling problems with millions of DOFs. We consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening laws. The model is discretized by the implicit Euler method in time and by the finite element method in space, which leads to a system of nonlinear equations with a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system, and the linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol package developed in MATLAB.
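
As an illustration of the solution strategy described above, the following minimal Python sketch shows a generic semismooth Newton loop; the residual F, the generalized Jacobian J, and the dense linear solve that stands in for the parallel TFETI step are all placeholders and do not represent the authors' MatSol implementation.

```python
import numpy as np

def semismooth_newton(F, J, u0, tol=1e-8, max_iter=50):
    """Generic semismooth Newton iteration.

    F : callable returning the (semismooth) residual vector F(u)
    J : callable returning an element of the generalized Jacobian at u
    u0: initial displacement vector

    In the paper, each linearized system J(u) du = -F(u) is solved in
    parallel by TFETI; here a dense solve stands in for that step.
    """
    u = u0.copy()
    for k in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            return u, k
        du = np.linalg.solve(J(u), -r)   # placeholder for the TFETI solve
        u += du
    raise RuntimeError("semismooth Newton did not converge")
```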

Keywords: isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution

Procedia PDF Downloads 420
6598 Collaborative Governance in Dutch Flood Risk Management: An Historical Analysis

Authors: Emma Avoyan

Abstract:

The safety standards for flood protection in the Netherlands have recently been revised, and it is expected that all major flood-protection structures will have to be reinforced to meet the new standards. The Dutch Flood Protection Programme aims to accomplish this task through innovative integrated projects such as the construction of multi-functional flood defenses. In these projects, flood safety purposes are combined with spatial planning, nature development, emergency management or other sectoral objectives. Implementation of dike reinforcement projects therefore requires early involvement of, and collaboration between, public and private sectors and different governmental actors and agencies. The development and implementation of such integrated projects has long been an issue in Dutch flood risk management. This article therefore analyses how cross-sector collaboration within flood risk governance in the Netherlands has evolved over time, and how this development can be explained. The integrative framework for collaborative governance is applied as an analytical tool to map the external factors framing possibilities as well as constraints for cross-sector collaboration in the Dutch flood risk domain. Supported by an extensive document and literature analysis, the paper offers insights into how the system context and different drivers, changing over time, either promoted or hindered cross-sector collaboration between the flood protection sector, urban development, nature conservation and other sectors involved in flood risk governance. The system context refers to the multi-layered and interrelated suite of conditions that influence the formation and performance of complex governance systems, such as collaborative governance regimes, whereas the drivers initiate and enable the overall process of collaboration. In addition, by applying a process-tracing method we identify a causal and chronological chain of events shaping cross-sectoral interaction in Dutch flood risk management. Our results indicate that in order to evaluate the performance of complex governance systems, it is important to first study the system context that shapes them. A clear understanding of the system conditions and drivers for collaboration gives insight into the possibilities of, and constraints for, effective performance of complex governance systems. The performance of the governance system is affected by the system conditions, while at the same time the governance system can also change those conditions. Our results show that the sequence of changes within the system conditions and drivers over time affects how cross-sector interaction in the Dutch flood risk governance system takes place today. Moreover, we have traced the potential of this governance system to shape and change the system context.

Keywords: collaborative governance, cross-sector interaction, flood risk management, the Netherlands

Procedia PDF Downloads 130
6597 WhatsApp as Part of a Blended Learning Model to Help Programming Novices

Authors: Tlou J. Ramabu

Abstract:

Programming is one of the most challenging subjects in the field of computing. In the higher education sphere, the performance, retention rate, and success rate of some programming novices are not improving. Most of the time, the problem is caused by the slow pace of learning, difficulty in grasping the syntax of the programming language, and poor logical skills. More importantly, programming forms part of the major subjects within the field of computing. As a result, specialized pedagogical methods and innovation are highly recommended. Little research has been done on the potential productivity of the WhatsApp platform as part of a blended learning model. In this article, the authors discuss a WhatsApp group as part of a blended learning model for a group of programming novices. We discuss possible administrative activities for productive utilisation of the WhatsApp group within the blended learning model. The aim is to take advantage of the popularity of WhatsApp, and the time students spend on it, for educational purposes. We believe that blended learning featuring a WhatsApp group may ease novices’ cognitive load and strengthen their foundational programming knowledge and skills. This is work in progress, as the proposed blended learning model with WhatsApp incorporated is yet to be implemented.

Keywords: blended learning, higher education, WhatsApp, programming, novices, lecturers

Procedia PDF Downloads 172
6596 Optimization of Assay Parameters of L-Glutaminase from Bacillus cereus MTCC1305 Using Artificial Neural Network

Authors: P. Singh, R. M. Banik

Abstract:

An artificial neural network (ANN) was employed to optimize the assay parameters, viz., time, temperature, pH of the reaction mixture, enzyme volume and substrate concentration, of L-glutaminase from Bacillus cereus MTCC 1305. The ANN model showed a high coefficient of determination (0.9999), a low root mean square error (0.6697) and a low absolute average deviation. A multilayer perceptron neural network trained with an error back-propagation algorithm was used to develop the predictive model, and its topology was obtained as 5-3-1 after applying the Levenberg-Marquardt (LM) training algorithm. The predicted activity of L-glutaminase was 633.7349 U/l at the optimum assay parameters, viz., pH of the reaction mixture (7.5), reaction time (20 minutes), incubation temperature (35 °C), substrate concentration (40 mM), and enzyme volume (0.5 ml). The prediction was verified by running an experiment at the simulated optimum assay conditions, and an activity of 634.00 U/l was obtained. The application of the ANN model for optimization of assay conditions improved the activity of L-glutaminase 1.499-fold.
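
For illustration only, the sketch below reproduces the overall workflow (fit a 5-3-1 network to assay data, then scan candidate conditions for the predicted optimum) using scikit-learn; the data values and the parameter grid are invented, and L-BFGS stands in for the Levenberg-Marquardt algorithm, which scikit-learn does not provide.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# X: assay parameters [pH, time (min), temperature (C), substrate (mM), enzyme (ml)]
# y: measured L-glutaminase activity (U/l); all values below are placeholders
X = np.array([[7.0, 15, 30, 20, 0.3],
              [7.5, 20, 35, 40, 0.5],
              [8.0, 25, 40, 60, 0.7]])
y = np.array([510.0, 630.0, 560.0])

scaler = MinMaxScaler().fit(X)
# 5 inputs, one hidden layer of 3 neurons, 1 output (a 5-3-1 topology);
# scikit-learn has no Levenberg-Marquardt solver, so L-BFGS is used instead.
model = MLPRegressor(hidden_layer_sizes=(3,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Scan a coarse grid of assay conditions and report the predicted optimum.
grid = np.array([[ph, t, temp, s, e]
                 for ph in np.arange(6.5, 8.6, 0.5)
                 for t in (10, 15, 20, 25)
                 for temp in (30, 35, 40)
                 for s in (20, 40, 60)
                 for e in (0.3, 0.5, 0.7)])
pred = model.predict(scaler.transform(grid))
print("predicted optimum:", grid[np.argmax(pred)], "activity:", pred.max())
```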

Keywords: Bacillus cereus, L-glutaminase, assay parameters, artificial neural network

Procedia PDF Downloads 429
6595 Directional Dust Deposition Measurements: The Influence of Seasonal Changes and Meteorological Conditions in the Witbank and Carletonville Areas

Authors: Maphuti Georgina Kwata

Abstract:

Coal mining in Mpumalanga Province is known to contribute to atmospheric pollution through various activities, and gold mining in North-West Province also contributes, especially through the production of radon gas. In this research, directional dust deposition gauges were used to identify the direction of dust sources, and meteorological data were used to determine the prevailing wind roses and the influence of seasonal changes. Fourteen months of dust collection was undertaken in the Witbank and Carletonville areas. The results show that the source direction for Ericson Dam was east in February 2010, while for the Tip Area the source direction was west in October 2010. In the east direction there were mining operations and power stations, which contributed to the east being the source direction. In the west direction there were smelters, power stations and agricultural activities, which contributed to the west being the source direction for Driefontein Mine: East Recreational Village Club. East of Leslie Williams hospital, the source direction was likewise associated with dust-generating activities such as mining operations and agricultural activities. The meteorological results for the Emalahleni area show that in summer and winter the wind blows from the east sector at 5-10 m/s. The annual average wind rose shows winds from the east-south-eastern sector at 20 m/s; during the day the wind blows from the north-western sector in excess of 20 m/s, and at night from the east-eastern direction with a maximum wind speed of 20 m/s. The meteorological results for Driefontein Mine show winds from the north-western and north-eastern sectors at 5-10 m/s; daytime winds blow from the west sector and night-time winds from the north sector. In summer the wind blows from the north-east sector at 5-10 m/s, in winter predominantly from the north-west, and in spring from the north-east. The conclusion is that not only the mining operations where the directional dust deposition gauges were installed contributed to the measured source directions; power stations, smelters, and other activities near the mining operations also contributed. It is recommended that dust suppressants be applied to unpaved roads on a regular basis and that weather conditions (wind speed and direction) be monitored prior to blasting to ensure minimal emissions.

Keywords: directional dust deposition gauge, BS part 5 1747 dust deposit gauge, wind rose, wind blowing

Procedia PDF Downloads 506
6594 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value; no assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures; this case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.

Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data

Procedia PDF Downloads 334
6593 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has a practical application in many areas of education. In particular, we are interested in remedial courses in high schools. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model; thus, it can be solved with a general-purpose solver only for small instances, while solving real-life-sized instances requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement on the objective function value obtained by the MIP model. This improvement ranges between 18% and 66%.
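
A minimal sketch of the selection side of such a model, assuming the PuLP library; the units, costs, benefits, budget and prerequisite below are invented for illustration, and the timetabling constraints are omitted.

```python
# pip install pulp
import pulp

# Toy data (illustrative only): three teaching units, their activation costs,
# the number of under-prepared students each would serve, and one prerequisite.
units = ["U1", "U2", "U3"]
cost = {"U1": 4, "U2": 6, "U3": 5}
benefit = {"U1": 30, "U2": 45, "U3": 25}
prereq = [("U2", "U1")]          # U2 can be activated only if U1 is
budget = 10

model = pulp.LpProblem("training_offer_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("activate", units, cat="Binary")

model += pulp.lpSum(benefit[u] * x[u] for u in units)          # teaching quality proxy
model += pulp.lpSum(cost[u] * x[u] for u in units) <= budget   # financial constraint
for later, earlier in prereq:                                   # prerequisite constraints
    model += x[later] <= x[earlier]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print({u: int(x[u].value()) for u in units})
```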

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 605
6592 Characterization of Structural Elements in Metal Fiber Concrete

Authors: Ammari Abdelhammid

Abstract:

This work on the characterization of structural elements in metal fiber concrete is devoted to studying the recyclability, as reinforcement for concrete, of chips resulting from the machining of steel parts. In this study we are interested in the rheological behavior of fresh chip-reinforced concrete and its mechanical behavior at an early age. Evaluation of the workability with the LCL workabilimeter shows that the optimal sand-to-gravel ratios (S/G) are S/G = 0.8 and S/G = 1. The study of the influence of chip content (W%) on the workability of the concrete shows that the flow time and the optimal S/G increase with W%. For S/G = 1.4, the flow time is practically insensitive to the variation of W%, and the concrete behavior is similar to that of self-compacting concrete. Mechanical characterization tests (direct tension, compression, bending, and splitting) show that the mechanical properties of chip concrete are comparable to those of the two selected reference concretes (concretes reinforced with conventional fibers: corrugated Eurosteel fibers and Dramix fibers). Chips provide a significant increase in strength and some ductility in the post-failure behavior of the concrete. Recycling chips as reinforcement for concrete can therefore be favorably considered.

Keywords: fiber concrete, chips, workability, direct tensile test, compression test, bending test, splitting test

Procedia PDF Downloads 442
6591 Corrosion Characterization of Al6061 Hybrid Metal Matrix Composites in Acid Medium

Authors: P. V. Krupakara

Abstract:

This paper deals with the high corrosion resistance developed by hybrid metal matrix composites when compared with that of the matrix alloy. The matrix selected is Al6061, and the reinforcements selected are graphite and red mud particulates. The composites are prepared by the liquid melt metallurgy technique using the vortex method. Metal matrix composites containing 2 percent graphite and 2 percent red mud, 2 percent graphite and 4 percent red mud, and 2 percent graphite and 6 percent red mud are prepared. Bar castings are cut into cylindrical discs of 20 mm diameter and 20 mm thickness. Corrosion tests were conducted at room temperature (23 °C) using the conventional weight loss method according to ASTM G69-80. The corrodents used for the test were hydrochloric acid solutions of different concentrations. Specimens were tested at 24-hour intervals up to 96 hours, with four specimens for each condition and time immersed in the corrodent. In each case, the corrosion rate decreased with increasing exposure time for both the matrix alloy and the metal matrix composites, regardless of the concentration of hydrochloric acid. This may be due to aluminium, which may induce passivation through the development of a non-porous layer. As the red mud content increases, the composites become more corrosion resistant due to the insulating nature of the ceramic red mud and the reduced exposure of the matrix alloy in those metal matrix composites.
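
The abstract does not quote the corrosion-rate formula; the standard mass-loss relation used with immersion tests of this kind (as in ASTM G1/G31) can be sketched as follows, with purely illustrative input values.

```python
import math

def corrosion_rate_mmpy(mass_loss_g, area_cm2, hours, density_g_cm3):
    """Corrosion rate in mm/year from immersion mass loss.

    Standard mass-loss formula: CR = (K * W) / (A * T * D),
    with K = 8.76e4 for mm/year, W in g, A in cm^2, T in h, D in g/cm^3.
    """
    K = 8.76e4
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Illustrative values for a 20 mm x 20 mm cylindrical disc of an Al alloy:
d, h = 2.0, 2.0                                        # cm
area = 2 * math.pi * (d / 2) ** 2 + math.pi * d * h    # total exposed area, cm^2
print(corrosion_rate_mmpy(mass_loss_g=0.015, area_cm2=area,
                          hours=24, density_g_cm3=2.70))
```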

Keywords: Al6061, graphite, passivation, red mud, vortex

Procedia PDF Downloads 542
6590 Analysis of Mechanisms for Design of Add-On Device to Assist in Stair Climbing of Wheelchairs

Authors: Manish Kumar Prajapat, Vishwajeet Sikchi

Abstract:

In the present scenario, many motorized stair-climbing wheelchairs are available in western countries, but they are significantly expensive and hence not popular in developing countries. Such wheelchairs also tend to be bulky and heavy, which makes their use under normal conditions difficult. Manually operated solutions are rarely explored in this space; therefore, this project aims at developing a manually operated, cost-effective solution. Differently abled people do not need to climb stairs frequently in daily use, so a stair-climbing mechanism permanently attached to the wheelchair adds redundant weight and reduces its ease of use. Hence, the idea of an add-on device for stair climbing was envisaged, wherein the wheelchair is mounted onto the add-on only when climbing stairs is required. This work analyses in detail the stair-climbing mechanism of a conventional wheelchair, followed by analysis and iterations on multiple mechanisms to identify the most suitable mechanism for the add-on device. Further, this work pays specific attention to optimizing the force and time required for stair climbing. The most suitable mechanism identified was validated by building and testing a prototype.

Keywords: add-on device, Rocker-Bogie, stair climbing, star wheel, y wheel

Procedia PDF Downloads 212
6589 Deep Learning Approaches for Accurate Detection of Epileptic Seizures from Electroencephalogram Data

Authors: Ramzi Rihane, Yassine Benayed

Abstract:

Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures resulting from abnormal electrical activity in the brain. Timely and accurate detection of these seizures is essential for improving patient care. In this study, we leverage the open-source University of Bonn EEG dataset and employ advanced deep-learning techniques to automate the detection of epileptic seizures. By extracting key features from both the time and frequency domains, as well as spectrogram features, we enhance the performance of various deep learning models. Our investigation includes architectures such as Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), 1D Convolutional Neural Networks (1D-CNN), and hybrid CNN-LSTM and CNN-BiLSTM models. The models achieved impressive accuracies: LSTM (98.52%), Bi-LSTM (98.61%), CNN-LSTM (98.91%), CNN-BiLSTM (98.83%), and CNN (98.73%). Additionally, we utilized the SMOTE data augmentation technique, which yielded the following results: CNN (97.36%), LSTM (97.01%), Bi-LSTM (97.23%), CNN-LSTM (97.45%), and CNN-BiLSTM (97.34%). These findings demonstrate the effectiveness of deep learning in capturing complex patterns in EEG signals, providing a reliable and scalable solution for real-time seizure detection in clinical environments.
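
A minimal sketch of one of the listed architectures (a Bi-LSTM binary classifier) using Keras; the input shape, layer sizes and random data are placeholders and do not reproduce the authors' feature extraction or SMOTE step.

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative shapes only: 500 single-channel EEG segments windowed to
# (timesteps, features) = (178, 1) for the sketch.
X = np.random.randn(500, 178, 1).astype("float32")
y = np.random.randint(0, 2, size=500)          # 1 = seizure, 0 = non-seizure

model = models.Sequential([
    layers.Input(shape=(178, 1)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=False)),
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```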

Keywords: electroencephalogram, epileptic seizure, deep learning, LSTM, CNN, BI-LSTM, seizure detection

Procedia PDF Downloads 14
6588 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. Spatial diversity can be an efficient signaling scheme in that it improves network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest, with the neighboring satellites modeled as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network can degrade severely. To tackle this problem, several modified cooperative signaling schemes have been proposed; however, all of them are difficult to implement due to signal decoding at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient if the operations of the relay nodes could be implemented at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need for complex operations such as signal decoding, time-reversal and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 280
6587 Analysis of Performance of 3T1D Dynamic Random-Access Memory Cell

Authors: Nawang Chhunid, Gagnesh Kumar

Abstract:

On-chip memories consume a significant portion of the overall die space and power in modern microprocessors. On-chip caches depend on Static Random-Access Memory (SRAM) cells, with technology scaling proceeding as per Moore’s law. Unfortunately, scaling affects stability, performance, and leakage power, which will become major problems for future SRAMs in aggressive nanoscale technologies due to increasing device mismatch and variations. The 3T1D Dynamic Random-Access Memory (DRAM) cell is a non-destructive-read DRAM cell with three transistors and a gated diode. In the 3T1D DRAM cell, the gated diode (D1) acts both as a storage device and as an amplifier, which leads to fast read access. Due to its high tolerance to process variation, high density, and low memory cost compared to the 6T SRAM cell, it is widely used in advanced microprocessors for on-chip data and program memory. In the present paper, it is shown that the 3T1D DRAM cell can perform better in terms of read access speed than the 6T, 4T, and 3T SRAM cells, respectively.

Keywords: DRAM cell, read access time, retention time, average power dissipation

Procedia PDF Downloads 313
6586 Customer Adoption and Attitudes in Mobile Banking in Sri Lanka

Authors: Prasansha Kumari

Abstract:

This paper intends to identify and analyze customer adoption of and attitudes towards mobile banking facilities. The study uses six perceived characteristics of innovation that can be used to form a favorable or unfavorable attitude toward an innovation, namely: relative advantage, compatibility, complexity, trialability, risk, and observability. Collected data were analyzed using the Pearson Chi-Square test. The results showed that mobile banking users were predominantly male. There is a growing trend among young, educated customers towards converting to mobile banking in Sri Lanka. The research outcomes suggest that all six factors are statistically highly significant in influencing mobile banking adoption and attitude formation towards mobile banking in Sri Lanka. The major reasons for adopting mobile banking services are the accessibility and availability of services regardless of time and place. Over 75 percent of the respondents mentioned that savings in time and effort and the low financial costs of conducting mobile banking were advantageous. The issue of security was found to be the most important factor that motivated consumer adoption and attitude formation towards mobile banking. The main barriers to mobile banking were the lack of technological skills, the traditional cash-carry banking culture, and the lack of awareness and insufficient guidance on using mobile banking.

Keywords: compatibility, complexity, mobile banking, observability, risk

Procedia PDF Downloads 203
6585 Prioritizing Roads Safety Based on the Quasi-Induced Exposure Method and Utilization of the Analytical Hierarchy Process

Authors: Hamed Nafar, Sajad Rezaei, Hamid Behbahani

Abstract:

Safety analysis of roads through accident rates, one of the most widely used tools, is based on the direct exposure method, which relies on the ratio of vehicle-kilometers traveled and vehicle travel time. However, due to some fundamental flaws in its theory and difficulties in gaining access to the required data, such as traffic volume and the distance and duration of trips, as well as various problems in determining the exposure for specific time, place, and individual categories, there is a need for an algorithm for prioritizing road safety so that, with a new exposure method, the problems of the previous approaches can be resolved. In this way, an efficient application may lead to more realistic comparisons, and the new method would be applicable to a wider range of time, place, and individual categories. Therefore, an algorithm was introduced to prioritize the safety of roads using the quasi-induced exposure method and the analytical hierarchy process. For this research, 11 provinces of Iran were chosen as case study locations. A rural accidents database was created for these provinces, the validity of the quasi-induced exposure method for Iran’s accident database was explored, and the involvement ratio for different characteristics of drivers and vehicles was measured. Results showed that the quasi-induced exposure method was valid in determining the real exposure in the provinces under study. Results also showed a significant difference between prioritization based on the new and traditional approaches. This difference mostly stems from the perspective of the quasi-induced exposure method in determining the exposure, the opinion of experts, and the quantity of accident data. Overall, the results show that prioritization based on the new approach is more comprehensive and reliable compared to prioritization based on the traditional approach, which depends on various parameters including driver-vehicle characteristics.
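
The AHP weighting step can be illustrated with a short sketch that derives priority weights from a pairwise-comparison matrix via the principal eigenvector; the criteria and judgments below are hypothetical, not those used in the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix
    (principal right eigenvector, normalized to sum to 1)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

def consistency_ratio(pairwise, ri=0.58):
    """CR = CI / RI; ri = 0.58 is the random index for n = 3 criteria."""
    n = pairwise.shape[0]
    lam = np.real(np.max(np.linalg.eigvals(pairwise)))
    return ((lam - n) / (n - 1)) / ri

# Illustrative 3x3 comparison of hypothetical criteria
# (e.g., involvement ratio, accident severity, traffic composition).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print("weights:", ahp_weights(A), "CR:", consistency_ratio(A))
```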

Keywords: road safety, prioritizing, Quasi-induced exposure, Analytical Hierarchy Process

Procedia PDF Downloads 338
6584 Real-Time Network Anomaly Detection Systems Based on Machine-Learning Algorithms

Authors: Zahra Ramezanpanah, Joachim Carvallo, Aurelien Rodriguez

Abstract:

This paper aims to detect anomalies in streaming data using machine learning algorithms. We designed two separate pipelines and evaluated the effectiveness of each separately. The first pipeline, based on supervised machine learning methods, consists of two phases. In the first phase, we trained several supervised models using the UNSW-NB15 dataset, measured the efficiency of each using different performance metrics, and selected the best model for the second phase. At the beginning of the second phase, we first sniffed a local area network using Argus Server. Several types of attacks were simulated, and the sniffed data were then sent to a running algorithm at short intervals. This algorithm displays the result for each received packet in real time using the trained model. The second pipeline presented in this paper is based on unsupervised algorithms, in which a Temporal Graph Network (TGN) is used to monitor a local network. The TGN is trained to predict the probability of future states of the network based on its past behavior. Our contribution in this part is introducing an indicator to identify anomalies from these predicted probabilities.
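
A minimal sketch of the first-phase supervised training, assuming scikit-learn and the public UNSW-NB15 training CSV; the file name, column names and the choice of a random forest are assumptions, since the abstract does not name the specific models.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder path and columns: the real UNSW-NB15 CSV has ~49 features,
# categorical protocol/service/state fields, and a binary 'label' column.
df = pd.read_csv("UNSW_NB15_training-set.csv")
X = pd.get_dummies(df.drop(columns=["label", "attack_cat"], errors="ignore"))
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```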

Keywords: temporal graph network, anomaly detection, cyber security, IDS

Procedia PDF Downloads 103
6583 Moving Object Detection Using Histogram of Uniformly Oriented Gradient

Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang

Abstract:

Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, there are two major challenges for MOD: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily capture object edges and are robust to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image must be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is used. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, the real testing results show that the proposed HUG features achieve better classification performance than the HOG ones.
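
As a baseline illustration of the HOG-plus-linear-SVM pipeline against which HUG is compared, a minimal sketch using scikit-image and scikit-learn is given below; the patch size, HOG parameters and random data are placeholders, and the HUG feature itself (the authors' contribution) is not reproduced.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Illustrative 128x64 grayscale patches; positives would be pedestrians/scooters.
rng = np.random.default_rng(0)
patches = rng.random((200, 128, 64))
labels = rng.integers(0, 2, size=200)

feats = np.array([
    hog(p, orientations=9, pixels_per_cell=(8, 8),
        cells_per_block=(2, 2), block_norm="L2-Hys")
    for p in patches
])

clf = LinearSVC(C=1.0)   # linear-kernel SVM, as in the paper's testing phase
clf.fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```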

Keywords: moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine

Procedia PDF Downloads 594
6582 An Intelligent Nondestructive Testing System of Ultrasonic Infrared Thermal Imaging Based on Embedded Linux

Authors: Hao Mi, Ming Yang, Tian-yue Yang

Abstract:

Ultrasonic infrared nondestructive testing is a testing method with high speed, accuracy and localization capability. However, some problems remain: detection requires manual real-time judgment in the field, and the methods for storing and viewing results are still primitive. An intelligent non-destructive detection system based on embedded Linux is put forward in this paper. The hardware part of the detection system is based on an ARM (Advanced RISC Machine) core, and an embedded Linux system is built to realize image processing and defect detection on thermal images. The CLAHE algorithm and the Butterworth filter are used to process the thermal image, and then the Boa server and CGI (Common Gateway Interface) technology are used to transmit the test results to the display terminal through the network for real-time and remote monitoring. The system also reduces manual labor and removes the reliance on manual judgment. According to the experimental results, the system provides a convenient and quick solution for industrial non-destructive testing.
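
A minimal sketch of the image-processing step described above, assuming OpenCV for CLAHE and a hand-rolled frequency-domain Butterworth low-pass; the file names and filter parameters are placeholders.

```python
import cv2
import numpy as np

def butterworth_lowpass(img, cutoff=30, order=2):
    """Frequency-domain Butterworth low-pass filter for a grayscale image."""
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2)
    H = 1.0 / (1.0 + (D / cutoff) ** (2 * order))
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * H))).astype(np.float32)

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(thermal)                  # contrast-limited adaptive equalization
smoothed = butterworth_lowpass(enhanced.astype(np.float32))
cv2.imwrite("processed.png", cv2.normalize(smoothed, None, 0, 255,
                                           cv2.NORM_MINMAX).astype(np.uint8))
```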

Keywords: remote monitoring, non-destructive testing, embedded Linux system, image processing

Procedia PDF Downloads 224
6581 Failure Analysis of Pipe System at a Hydroelectric Power Plant

Authors: Ali Göksenli, Barlas Eryürek

Abstract:

In this study, the failure analysis of the pipe system at a micro hydroelectric power plant is investigated. Failure occurred in the pipe system in the powerhouse when the water flow was shut off by a valve. This closure caused a sudden shock wave, also called the “water-hammer effect”, resulting in noise and an increase in internal pressure. After visual investigation of the effect of the shock wave on the system, a circumferential crack was observed at the pipe flange weld region. To establish the reason for crack formation, calculations of the pressure and stress values at the pipe, flange and welding seams were carried out; the safety factor was found to be high (2.2), indicating that the design itself was not at fault. The pipe system and the hydroelectric power plant were then examined further. It was determined that the plant did not include a ventilation nozzle (air trap), which would protect the system from the sudden pressure increase inside the pipes caused by the water-hammer effect. Analyses were carried out to identify the influence of the water-hammer effect on the internal pressure increase, and it was concluded that, according to Joukowsky’s equation, the shut-down time governs the magnitude of the pressure increase. The valve closing time was uncertain, but even with a shut-down time of one minute the internal pressure would increase by 7.6 bar (the working pressure was 34.6 bar). Detailed investigations were also carried out on the assembly of the pipe-flange system by considering the technical drawings. It was concluded that the pipe-flange system was not installed according to the instructions: two of five weld seams were not applied, and one weld was carried out incorrectly. These incorrect and inadequate weld seams resulted in an insufficient connection of the pipe to the flange, constituting a strong notch effect at the weld seam regions, an increase in stress values, and a decrease in strength and safety factor.
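
For reference, the standard water-hammer relations behind the above argument can be written as follows; these are the textbook forms, with the slow-closure (Michaud) estimate applying when the closure time t_c exceeds the wave return period 2L/a, and they are not quoted from the paper itself.

```latex
\Delta p_{\text{instantaneous}} = \rho \, a \, \Delta v
\qquad
\Delta p_{\text{slow closure}} \approx \rho \, a \, \Delta v \, \frac{2L/a}{t_c}
= \frac{2 \rho L \, \Delta v}{t_c}, \quad t_c > \frac{2L}{a}
```

Here ρ is the water density, a the pressure-wave speed, Δv the change in flow velocity, L the pipe length, and t_c the valve closing time; longer closing times therefore give smaller pressure rises, which is why the shut-down time governs the magnitude of the surge.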

Keywords: failure analysis, hydroelectric plant, crack, shock wave, welding seam

Procedia PDF Downloads 344
6580 Model Based Fault Diagnostic Approach for Limit Switches

Authors: Zafar Mahmood, Surayya Naz, Nazir Shah Khattak

Abstract:

The degree of freedom relates to our capability to observe or model the energy paths within the system. The more energy paths are modeled, the higher the degree of freedom, but also the longer the time and the greater the modeling complexity, rendering the approach impractical for today’s need for minimum time to market. Since the number of residuals that can be uniquely isolated depends on the number of independent outputs of the system, more sensors are required. Examples of discrete position sensors that may be used to form an array include limit switches, Hall effect sensors, optical sensors, magnetic sensors, etc. Their mechanical design can usually be tailored to fit in the transitional path of an STME in a variety of mechanical configurations. Case studies of a multi-sensor system were carried out, and actual sensor data were used to test this generic framework. It is investigated how proper modeling of limit switches as timing sensors could lead to a unified and neutral residual space while keeping the implementation cost reasonably low.

Keywords: low-cost limit sensors, fault diagnostics, Single Throw Mechanical Equipment (STME), parameter estimation, parity-space

Procedia PDF Downloads 617
6579 Enhancing the Performance of a Bug Reporting System by Handling Duplicate Reports: Artificial Intelligence Based Mantis

Authors: Afshan Saad, Muhammad Saad, Shah Muhammad Emaduddin

Abstract:

Bug reporting systems are among the most important tools guiding maintenance activities in software engineering. Duplicate bug reports in the repository increase the processing time of the bug triager, who monitors all such activities, and of the programmers who spend time on the reports assigned by the triager. Such reports can reveal imperfections and degrade software quality. As the number of potential duplicate bug reports increases, the number of reports in the repository grows. Identifying duplicate bug reports helps decrease the development workload of fixing defects. However, it is difficult to manually identify all possible duplicates because of the huge number of already reported bugs. In this paper, an artificial intelligence based system using Mantis is proposed to automatically detect duplicate bug reports. When a new bug is submitted to the repository, the system investigates whether it is a duplicate of an existing bug report by matching, and the triager marks it with a tag. Reports tagged as duplicates are eliminated from the repository, which not only improves the performance of the system but also saves the cost and effort wasted on triaging and finding duplicate bugs.
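
The abstract does not specify the matching technique; one common and simple realization is TF-IDF cosine similarity over report text, sketched below with placeholder report summaries and an arbitrary threshold.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Existing report summaries (placeholder text) and an incoming report.
existing = [
    "App crashes when saving a file with unicode name",
    "Login button unresponsive on settings page",
    "Crash on save if filename contains special characters",
]
incoming = "Application crashes while saving files with non-ascii filenames"

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(existing + [incoming])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

THRESHOLD = 0.3   # tuning parameter; reports above it get a 'duplicate' tag
best = scores.argmax()
if scores[best] >= THRESHOLD:
    print(f"possible duplicate of report #{best} (similarity {scores[best]:.2f})")
else:
    print("no duplicate found")
```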

Keywords: bug tracking, triager, tool, quality assurance

Procedia PDF Downloads 194
6578 Ubiquitous Life People Informatics Engine (U-Life PIE): Wearable Health Promotion System

Authors: Yi-Ping Lo, Shi-Yao Wei, Chih-Chun Ma

Abstract:

Since Google launched Google Glass in 2012, a number of commercial wearable devices have been released, such as smart belts, smart bands, smart shoes, smart clothes, etc. However, most of these devices act as sensors that show the readings of measurements, and few of them provide interactive feedback to the user. Furthermore, they are single-task devices that are not able to communicate with each other. In this paper a new health promotion system, the Ubiquitous Life People Informatics Engine (U-Life PIE), is presented. This system consists of the People Informatics Engine (PIE) and an interactive user interface. The PIE collects all the data from the compatible devices, analyzes this data comprehensively, and communicates between devices via various application programming interfaces. All data and information are stored on the PIE unit; therefore, the user is able to view instantaneous and historical data on their mobile devices at any time. It also provides real-time hands-free feedback and instructions through the user interface visually, acoustically and tactilely. This feedback and these instructions suggest that the user adjust their posture or habits in order to avoid physical injuries and prevent illness.

Keywords: machine learning, wearable devices, user interface, user experience, internet of things

Procedia PDF Downloads 294
6577 Cold Crystallization of Poly (Ether Ether Ketone)/Graphene Composites by Time-Resolved Synchrotron X-Ray Diffraction

Authors: A. Alvaredo, R. Guzman De Villoria, P. Castell, Juan P. Fernandez-Blazquez

Abstract:

Since graphene was discovered in 2004, it has been considered a superb material due to its outstanding mechanical, electrical and thermal properties. Graphene has been incorporated as reinforcement in several high-performance polymers in order to obtain a good balance of properties and to obtain new properties such as thermal or electrical conductivity. As is well known, the properties of a semicrystalline polymer and its composites depend heavily on the degree of crystallinity. In this context, our research group has studied the crystallization behavior of PEEK/GNP composites from the amorphous state. The cold crystallization process was monitored by time-resolved simultaneous wide-angle X-ray scattering (WAXS) and small-angle X-ray scattering (SAXS). These techniques provided highly relevant information about the evolution of the morphology of the PEEK/GNP composites. In addition, the thermal evolution of cold crystallization was followed by differential scanning calorimetry (DSC) as well. The experimental results showed changes in the crystallization kinetics and in the c parameter of the unit cell when graphene was added. The main aim of this work is to produce PEEK/GNP composites and characterize their morphology, unit cell parameters and crystallization kinetics.
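
The abstract does not state which kinetic model is used; isothermal crystallization kinetics extracted from DSC or WAXS data are, however, commonly described with the Avrami equation, reproduced here for reference only.

```latex
% Avrami model for the relative crystallinity X(t) developed isothermally at time t:
X(t) = 1 - \exp\!\left(-k\,t^{n}\right)
% k : crystallization rate constant (temperature dependent)
% n : Avrami exponent, related to nucleation mode and growth geometry
```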

Keywords: PEEK, graphene, synchrotron, cold crystallization

Procedia PDF Downloads 349
6576 Masked Candlestick Model: A Pre-Trained Model for Trading Prediction

Authors: Ling Qi, Matloob Khushi, Josiah Poon

Abstract:

This paper introduces a pre-trained Masked Candlestick Model (MCM) for trading time-series data. The pre-trained model is based on three core designs. First, we convert the trading price data at each data point into a set of normalized elements and produce embeddings of each element. Second, we generate a masked sequence of such embedded elements as inputs for self-supervised learning. Third, we use the transformer encoder to train on the inputs. The masked model learns the contextual relations among the sequence of embedded elements, which can aid downstream classification tasks. To evaluate the performance of the pre-trained model, we fine-tune MCM for three different downstream classification tasks to predict future price trends. The fine-tuned models achieved better accuracy rates for all three tasks than the baseline models. To better analyze the effectiveness of MCM, we test the same architecture on three currency pairs, namely EUR/GBP, AUD/USD, and EUR/JPY. The experimental results demonstrate MCM’s effectiveness on all three currency pairs and indicate its capability for signal extraction from trading data.
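
A minimal PyTorch sketch of the masked-sequence pre-training idea described above; the dimensions, masking ratio, reconstruction loss and random inputs are placeholders and do not reproduce the authors' exact MCM architecture or its downstream fine-tuning.

```python
import torch
import torch.nn as nn

class MaskedCandlestickEncoder(nn.Module):
    """Illustrative masked-sequence pre-training setup (not the authors' exact model).

    Each time step is an already-embedded candlestick element; a fraction of the
    positions is replaced by a learned [MASK] vector and the encoder is trained
    to reconstruct the original embeddings at the masked positions.
    """
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, d_model)

    def forward(self, x, mask_ratio=0.15):
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token, x)
        hidden = self.encoder(corrupted)
        return self.head(hidden), mask

# Toy pre-training step on random "embedded candlestick" sequences.
model = MaskedCandlestickEncoder()
x = torch.randn(8, 32, 64)                  # 8 sequences, 32 steps, d_model = 64
recon, mask = model(x)
loss = ((recon - x) ** 2)[mask].mean()      # reconstruction loss on masked positions
loss.backward()
print(float(loss))
```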

Keywords: masked language model, transformer, time series prediction, trading prediction, embedding, transfer learning, self-supervised learning

Procedia PDF Downloads 129
6575 Analyzing Current Transformers Saturation Characteristics for Different Connected Burden Using LabVIEW Data Acquisition Tool

Authors: D. Subedi, S. Pradhan

Abstract:

Current transformers are an integral part of the power system because they provide a proportional, safe amount of current for protection and measurement applications. However, when the power system experiences an abnormal situation leading to a huge current flow, this current is proportionally injected into the protection and metering circuit. Since protection and metering equipment is designed to withstand only a certain amount of current for a certain time, these high currents pose a risk to people and equipment. Therefore, during such instances, the CT saturation characteristics have a huge influence on the safety of both people and equipment, as well as on the reliability of the protection and metering system. This paper shows the effect of burden on the accuracy limiting factor/instrument security factor of current transformers and also the change in the saturation characteristics of the CTs. The response of the CT to varying levels of overcurrent at different connected burdens is captured using the data acquisition software LabVIEW, and the analysis is performed on the real-time data gathered. The variation of current transformer saturation characteristics with changes in burden is discussed.

Keywords: accuracy limiting factor, burden, current transformer, instrument security factor, saturation characteristics

Procedia PDF Downloads 415
6574 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line

Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez

Abstract:

Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shape-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The methodology involves stages such as image capturing, color filtering, location of object mass centers, determination of horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device. The proposal is used in an assembly course of the bachelor’s degree in industrial engineering. The results presented include the recognition efficiency and the processing time.
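
A minimal sketch of the pre-processing stages named above (color filtering, mass centers, boundaries, clipping), assuming OpenCV; the HSV thresholds, file name and area cut-off are placeholders, and the trained CNN classifier is not included.

```python
import cv2
import numpy as np

def clip_objects(bgr_image, lower_hsv, upper_hsv, min_area=500):
    """Color-filter an image, find object mass centers/boundaries, and clip them.

    Returns a list of (cropped_patch, (cx, cy)) tuples; thresholds are placeholders.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)            # color filtering
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    clips = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # mass center
        x, y, w, h = cv2.boundingRect(c)                      # object boundaries
        clips.append((bgr_image[y:y + h, x:x + w], (cx, cy)))
    return clips

frame = cv2.imread("conveyor_frame.png")                      # placeholder capture
patches = clip_objects(frame, np.array([20, 80, 80]), np.array([35, 255, 255]))
# Each patch would then be resized and passed to the trained CNN classifier.
```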

Keywords: deep-learning, image classification, image identification, industrial engineering

Procedia PDF Downloads 161
6573 Operational Excellence Performance in Pharmaceutical Quality Control Labs: An Empirical Investigation of the Effectiveness and Efficiency Relation

Authors: Stephan Koehler, Thomas Friedli

Abstract:

Performance measurement has evolved over time from a unidimensional, short-term, efficiency-focused approach into a balanced multidimensional approach. Today, integrated performance measurement frameworks are often used to avoid local optimization and to encourage continuous improvement of an organization. In the literature, the multidimensional character of performance measurement is often described by competitive priorities. At the same time, at the highest abstraction level, an effectiveness dimension and an efficiency dimension of performance measurement can be distinguished. This paper aims at a better understanding of the composition of effectiveness and efficiency and their relation in pharmaceutical quality control labs. The research comprises a lab-specific operationalization of effectiveness and efficiency and examines how the two dimensions are interlinked. The basis for the analysis is a database of the University of St. Gallen comprising a diverse set of 40 different pharmaceutical quality control labs. The research provides empirical evidence that labs with high effectiveness also exhibit high efficiency; lab effectiveness explains 29.5% of the variance in lab efficiency. In addition, labs with an above-median operational excellence performance have statistically significantly higher lab effectiveness and lab efficiency compared to the below-median performing labs.

Keywords: empirical study, operational excellence, performance measurement, pharmaceutical quality control lab

Procedia PDF Downloads 161
6572 Application of Principal Component Analysis for Classification of Random Doppler-Radar Targets during Surveillance Operations

Authors: G. C. Tikkiwal, Mukesh Upadhyay

Abstract:

During surveillance operations in war or peacetime, the radar operator sees a scatter of targets on the screen. A target may be a tracked vehicle such as a tank (e.g., T72, BMP), a wheeled vehicle (e.g., ALS, TATRA, 2.5 Tonne, Shaktiman), or moving troops, moving convoys, etc. The radar operator selects one of the promising targets and switches to Single Target Tracking (STT) mode. Once the target is locked, the operator hears a typical audible signal in his headphones. Drawing on experience and training gained over time, the operator then identifies the random target. This process is cumbersome, however, and is solely dependent on the skills of the operator, and may thus lead to misclassification of the object. In this paper we present a technique using mathematical and statistical methods, namely the Fast Fourier Transform (FFT) and Principal Component Analysis (PCA), to identify such random objects. The classification process is based on transforming the audible signature of the target into musical octave notes. The whole methodology is then automated by developing suitable software. This automation increases the efficiency of identification of the random target by reducing the chances of misclassification. This whole study is based on live data.
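
A minimal sketch of the FFT-plus-PCA feature pipeline on audio signatures, assuming NumPy and scikit-learn; the sampling rate, synthetic signals and the k-nearest-neighbor classifier are placeholders, since the paper's octave-note transformation and classifier are not detailed in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
fs = 8000                                   # placeholder sampling rate, Hz
signals = rng.standard_normal((60, fs))     # 60 one-second audio signatures (synthetic)
labels = rng.integers(0, 3, size=60)        # 3 hypothetical target classes

# FFT magnitude spectrum as the raw feature vector for each signature.
spectra = np.abs(np.fft.rfft(signals, axis=1))

# PCA reduces the high-dimensional spectra to a few principal components.
pca = PCA(n_components=10)
features = pca.fit_transform(spectra)

clf = KNeighborsClassifier(n_neighbors=3).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```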

Keywords: radar target, FFT, principal component analysis, eigenvector, octave-notes, DSP

Procedia PDF Downloads 346
6571 Leaching of Metal Cations from Basic Oxygen Furnace (BOF) Steelmaking Slag Immersed in Water

Authors: Umashankar Morya, Somnath Basu

Abstract:

Metalloids like arsenic are often present as contaminants in industrial effluents, and their removal is essential before the safe discharge of the wastewater into the environment. Otherwise, these pollutants tend to percolate into aquifers over time and contaminate drinking water sources. Several adsorbents, including metal powders, carbon nanotubes and zeolites, are being used for this purpose, with varying degrees of success. However, most of these solutions are not only costly but also not always readily available, which restricts their use, especially among financially weaker communities. Slag generated globally from primary steelmaking operations exceeds 200 billion kg every year. Some of it is utilized for applications such as road construction, filler in reinforced concrete, and railway track ballast, or is recycled into iron ore agglomeration processes. However, these usually involve low value addition, and a significant amount of the slag still ends up in landfill. There is a strong possibility that the constituents of steelmaking slag may immobilize metalloid contaminants present in wastewater through a combination of adsorption and precipitation of insoluble product(s). Preliminary experiments have already indicated that exposure to basic oxygen steelmaking slag does reduce pollutant concentration in wastewater. In addition, the slag is relatively inexpensive and available in large quantities in several countries across the world. Investigations on the mechanism of interactions at the water-solid interfaces have been in progress for some time. At the same time, however, there are concerns about the possibility of metal ions leaching from the slag particles in concentrations greater than those existing in the water bodies where the “treated” wastewater would eventually be discharged. The effect of such leached ions on aquatic flora and fauna is as yet uncertain. This has prompted the present investigation, which focuses on the leaching of metal ions from steelmaking slag particles in contact with wastewater, and the influence of these ions on the removal of contaminant species. Experiments were carried out to quantify the leaching behavior of different ionic species upon exposure of the slag particles to simulated wastewater, both with and without specific metalloid contaminants.

Keywords: slag, water, metalloid, heavy metal, wastewater

Procedia PDF Downloads 75