Search results for: process operation
16827 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming
Authors: Rui Li, Min Wen, Kim Bang Salling
Abstract:
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function minimizes the actual costs of the tamping machine. The approach combines a simple dynamic model of the condition-based tamping process with a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account in scheduling tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating open track from station sections. The proposed maintenance model is applied to a 42.6 km Danish railway track between Odense and Fredericia for time periods of three and four years. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, total costs can be reduced significantly (by 50%) compared with the previous model, which is based on minimizing the number of tamping operations. Different maintenance strategies are discussed in the paper. The analysis of the model results also shows that a longer predictive tamping planning horizon yields a better maintenance schedule than continuous short-term preventive maintenance, namely yearly condition-based planning.
Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance
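To make the flavour of such a 0-1 model concrete, the sketch below schedules tamping with the open-source PuLP solver under heavily simplified assumptions (linear growth of the longitudinal-level standard deviation, a full quality reset after each tamping, uniform cost). All numbers are illustrative, and this is not the authors' actual formulation:

```python
# Minimal sketch: tamp each section at least once before its longitudinal-level
# standard deviation would exceed a speed-based threshold, at minimum cost.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

sections, periods = range(5), range(8)    # track sections x planning periods
rate = [0.12, 0.20, 0.15, 0.25, 0.10]     # sigma growth per period (mm), assumed
sigma0, threshold, cost = 0.4, 1.2, 1.0   # initial quality, limit, cost per tamping

x = LpVariable.dicts("tamp", (sections, periods), cat=LpBinary)
prob = LpProblem("tamping_schedule", LpMinimize)
prob += lpSum(cost * x[s][t] for s in sections for t in periods)

for s in sections:
    # longest run of periods section s survives before sigma0 + rate*w > threshold
    w = int((threshold - sigma0) / rate[s])
    for t in range(len(periods) - w):
        # require at least one tamping in every window of w+1 consecutive periods
        prob += lpSum(x[s][t + k] for k in range(w + 1)) >= 1

prob.solve()
for s in sections:
    print(s, [t for t in periods if x[s][t].value() == 1])
```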
Procedia PDF Downloads 446
16826 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a process for managing energy consumption with a view to energy savings and efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation: it identifies individual appliances by analyzing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. To fit the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. The identification process likewise uses unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting its state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and real data from the Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously reported in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
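As a concrete illustration of the DTW matching step, here is a minimal sketch: a plain dynamic-time-warping distance assigns a detected appliance activation (1/60 Hz power samples) to the closest general appliance template. The templates, the detected window and the absolute-difference local cost are illustrative assumptions, not data or design choices from the paper:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

templates = {                      # per-appliance power profiles in watts (assumed)
    "fridge": np.array([0, 90, 95, 92, 90, 0], float),
    "kettle": np.array([0, 1800, 1850, 1800, 0], float),
    "washer": np.array([0, 300, 2000, 2100, 400, 300, 0], float),
}
event = np.array([0, 85, 93, 90, 0], float)   # detected state-change window

best = min(templates, key=lambda k: dtw_distance(event, templates[k]))
print("assigned appliance:", best)  # -> fridge
```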
Procedia PDF Downloads 78
16825 Application of Soft Systems Methodology in Solving Disaster Emergency Logistics Problems
Authors: Alhasan Hakami, Arun Kumar, Sung J. Shim, Yousef Abu Nahleh
Abstract:
In recent years, many high-intensity earthquakes have occurred around the world, such as the 2011 earthquake in Tohoku, Japan. These large-scale disasters caused huge casualties and losses. In addition, inefficient disaster response operations caused a second wave of casualties and losses and expanded the damage. Effective disaster management can be used to respond to such chaotic situations and reduce the damage; nevertheless, inefficient disaster response operations are still common. Therefore, this case study analyses the disaster emergency logistics problems of the 921 earthquake and proposes the Soft Systems Methodology (SSM) to solve them. Moreover, it analyses the effect of human factors on system operation and suggests a solution to improve the system.
Keywords: soft systems methodology, emergency logistics, earthquakes, Japan, system operation
Procedia PDF Downloads 440
16824 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators
Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy
Abstract:
Once-Through Steam Generators (OTSGs) are commonly used in the oil-sand industry in the heavy oil extraction process. They are composed of three main parts: the burner, the radiant section, and the convective section. Natural gas is burned in staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then injected into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, along with the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emissions. With the increase of computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. To optimize the burner operating conditions with respect to NOₓ emissions, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operating schedule constraints. Therefore, the application of CFD is more suitable for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG: the commercial package ANSYS Fluent and the open-source package OpenFOAM. The RANS (Reynolds-Averaged Navier-Stokes) equations, combined with the Eddy Dissipation Concept to model the combustion and closed by the k-epsilon turbulence model, are solved. A mesh sensitivity analysis is performed to verify the mesh independence of the solution. In the first part, the results given by the two packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation. The results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for field tune-ups.
Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through steam generators
Procedia PDF Downloads 113
16823 Decision Tree Based Scheduling for Flexible Job Shops with Multiple Process Plans
Authors: H.-H. Doh, J.-M. Yu, Y.-J. Kwon, J.-H. Shin, H.-W. Kim, S.-H. Nam, D.-H. Lee
Abstract:
This paper suggests a decision tree based approach for flexible job shop scheduling with multiple process plans, i.e., where each job can be processed through alternative operations, each of which can be processed on alternative machines. The main decision variables are: (a) selecting the operation/machine pair; and (b) sequencing the jobs assigned to each machine. As an extension of the priority scheduling approach, which selects the best priority rule combination after many simulation runs, this study suggests a decision tree based approach in which a decision tree is used to select a priority rule combination adequate for a specific system state, eliminating the burden of developing simulation models and carrying out simulation runs. The decision tree based scheduling approach consists of construction and scheduling modules. In the construction module, a decision tree is constructed using a four-stage algorithm; in the scheduling module, a priority rule combination is selected using the decision tree. To show the performance of the suggested approach, a case study was done on a flexible job shop with reconfigurable manufacturing cells and a conventional job shop, and the results are reported in comparison with individual priority rule combinations for the objectives of minimizing total flow time and total tardiness.
Keywords: flexible job shop scheduling, decision tree, priority rules, case study
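A minimal sketch of the scheduling module's core idea follows: a decision tree maps a system state to a priority rule combination. The state features (utilization, queue length, due-date tightness), the training pairs, and the use of scikit-learn are all assumptions for illustration; in the paper, the tree is built with a four-stage algorithm from offline evaluations:

```python
from sklearn.tree import DecisionTreeClassifier

# state = (machine utilization, mean queue length, due-date tightness factor)
X = [[0.60, 3, 1.2], [0.85, 9, 0.8], [0.90, 12, 1.5], [0.70, 5, 0.9],
     [0.95, 14, 0.7], [0.55, 2, 1.4]]
y = ["SPT+EDD", "SPT", "EDD", "SPT", "SPT", "FIFO"]  # best rule combo per state

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# at run time: observe the current system state, look up the rule to apply
print(tree.predict([[0.88, 10, 0.85]]))
```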
Procedia PDF Downloads 358
16822 Assessing the Benefits of Super Depo Sutorejo as a Model of integration of Waste Pickers in a Sustainable City Waste Management
Authors: Yohanes Kambaru Windi, Loetfia Dwi Rahariyani, Dyah Wijayanti, Eko Rustamaji
Abstract:
Surabaya, the second largest city in Indonesia, has been struggling for years with waste production and its management. Nearly 11,000 tons of waste are generated daily by domestic, commercial and industrial areas. Approximately 1,300 tons of waste flowed into the Benowo Landfill daily in 2013, and it was projected that the landfill's operation would become critical in 2015. The Super Depo Sutorejo (SDS) is a pilot project on waste management launched by the government of Surabaya in March 2013. The project aims to reduce the amount of waste dumped in the landfill by sorting recyclable and organic waste for composting, employing waste pickers to sort the waste before it is transported to the landfill. This study assesses the capacity of SDS to process and reduce waste, along with its complementary benefits. It also overviews the benefits of the project to the waste pickers in terms of job satisfaction. Waste processing data sheets were used to assess the difference between input and output waste, a survey was distributed to 30 waste pickers, and interviews were conducted for further insight on particular issues. The analysis showed that SDS reduces waste by up to 50% before it is dumped in the final disposal area. The cost-benefit analysis, using a cost-differential calculation, revealed that the direct economic benefit is considerably low, but composting may provide tangible benefits for maintaining the city's parks. Waste pickers are mostly satisfied with their job (i.e., salary, health coverage, job security) and with the services and facilities available in SDS, and they enjoy a rewarding social life within the project. It is concluded that SDS is an effective and efficient model for sustainable waste management that can reliably be developed in developing countries. It is a strategic approach to empowering and opening up work opportunities for the urban poor and prolonging the operation of landfills.
Keywords: cost-benefits, integration, satisfaction, waste management
Procedia PDF Downloads 476
16821 Reversible and Irreversible Wrinkling in Tube Hydroforming Process
Authors: Ali Abd El-Aty, Ahmed Tauseef, Ahmad Farooq
Abstract:
This research aims at analyzing and optimizing the hydroforming process parameters to achieve a sound bulged tube without failure. A theoretical constitutive model is formulated to develop a working diagram including the process window, which represents the optimal region in which to carry out the hydroforming process and accurately predict the type of tube failure during the process. The model is applied to different bulging ratios for low carbon steel (C1010). From this study, it is concluded that tubes with bulging ratios up to 50% and 70% are successfully formed without defects. Tubes with a bulging ratio of 90% are successfully formed by hydroforming with an optimized loading path (axial feed versus internal pressure) within the process window. The working diagram is modified to account for the different types of wrinkling that form during the hydroforming process. The wrinkles that form with increasing axial feed can be useful for achieving a higher bulging ratio and/or less thinning, and this type of wrinkle can be removed by the internal pressure in the later stage of the hydroforming process. On the other hand, wrinkles that cannot be reversed are harmful.
Keywords: finite element, hydroforming, process window, wrinkling
Procedia PDF Downloads 280
16820 Development and Validation of Research Process for Enhancing Humanities Competence of Medical Students
Authors: S. J. Yune, K. H. Park
Abstract:
The purpose of this study was to examine the validity of a research process for enhancing the humanities competence of medical students. The research process was developed to run as a core subject course over three semesters. The research process for enhancing humanities competence consisted of humanities and society (6 teams) and education-psychology (2 teams) themes. The subjects of this study were 88 second-year students and 22 professors who participated in the research process, of whom 13 professors and 37 students took part in the humanities component. In the validity test, the professors rated the validity of the research process higher than the students did in all areas: logic (p = .001), influence (p = .037), and process (p = .001). The professors evaluated the students' learning outcomes highly, most frequently for the prize-winning group. Analysis of the agreement between students and professors using the kappa coefficient showed moderate agreement for communication and cooperation competence (.430) and fair agreement for problem-solving ability (.340). However, the other factors showed only slight agreement, below .20.
Keywords: research process, medical school, humanities competence, validity verification
Procedia PDF Downloads 195
16819 Public Procurement Development Stages in Georgia
Authors: Giorgi Gaprindashvili
Abstract:
One of the best examples of the evolution of public procurement among post-Soviet countries is the set of reforms carried out in Georgia, which brought the country close to international procurement standards. In Georgia, public procurement legislation started functioning in 1998, and the reform passed through several stages to arrive at its present form. It should also be noted that countries with economies in transition, including Georgia, implemented their public procurement reforms based on the recommendations and support of the World Bank, the United Nations and other international organizations. The first law on public procurement in Georgia, adopted on December 9, 1998, aimed to regulate the procurement process of budget organizations and to create a transparent and competitive environment in which private companies could access state funds legally. The priorities were identified quite clearly in the wording of the law, but the law could not function at the intended level for both objective and subjective reasons. The high level of corruption at all levels of governance can be considered the main obstacle; naturally, it had a direct impact on the procurement process, as well as on transparency and the rational use of state funds. These circumstances drove continued reforms in this sphere to improve the procurement process; in particular, the first wave of reforms began in 2001. The public procurement agency carried out the reform with the World Bank, with the main purpose of streamlining the procurement legislation and harmonizing it with international treaties and agreements. With the support of the World Bank, various activities were also carried out to raise the awareness of participants in the procurement system. Further major changes to the legislation were adopted in May 2005, also directed towards improving and streamlining the procurement process. The third wave of reform began in 2010; it more or less guaranteed the transparency of the procurement process, which later became the basis for the rational spending of state funds. The reform of the procurement system completely changed the procedures: it introduced a new electronic tendering system, which improved the transparency of the process and became the basis for the further development of a competitive environment, itself a prerequisite for rational state spending. The increased number of supplier organizations participating in the procurement process reduced actual costs to 20% to 40% below the estimated costs, quite a large saving for the procuring organizations, which can use the freed-up funds for their other needs. From an assessment of Georgia's public procurement reforms, it can be concluded that proper regulation of the sector and relevant policy lead to rational and transparent spending of the budget by the country's state institutions. The business sector also has the opportunity to work under competitive market conditions and to carry out preliminary analysis, a prerequisite for future strategy and development.
Keywords: public administration, public procurement, reforms, transparency
Procedia PDF Downloads 368
16818 Determination of Frequency Relay Setting during Distributed Generators Islanding
Authors: Tarek Kandil, Ameen Ali
Abstract:
Distributed generation (DG) has recently gained a lot of momentum in the power industry due to market deregulation and environmental concerns. One of the major technical challenges facing DGs is islanding of distributed generators. The current industry practice is to disconnect all distributed generators immediately after the occurrence of an island, within 200 to 350 ms of the loss of the main supply. To achieve this goal, each DG must be equipped with an islanding detection device. Frequency relays are among the most commonly used loss-of-mains detection methods. However, distribution utilities may be faced with false operation of these frequency relays due to improper settings, as commercially available frequency relays ship with standard tight settings. This paper investigates factors in the relays' internal algorithms that contribute to their different operating responses. Further, relay operation in the presence of multiple distributed generators in the same network is analyzed. Finally, the relay settings can be accurately determined based on this investigation and analysis.
Keywords: frequency relay, distributed generation, islanding detection, relay setting
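The generic logic below illustrates how a frequency relay's pickup threshold and intentional time delay interact during islanding. It is a toy sketch with assumed settings and frequency traces, not any vendor's internal algorithm: tighter settings trip faster on islanding but risk nuisance trips on normal frequency excursions.

```python
F_NOM, PICKUP_DEV, DELAY_MS = 50.0, 0.5, 200   # Hz, Hz, ms -- assumed settings

def relay_trips(freq_trace_hz, dt_ms=20):
    """Trip when |f - f_nom| exceeds the pickup continuously for DELAY_MS."""
    held = 0
    for f in freq_trace_hz:
        held = held + dt_ms if abs(f - F_NOM) > PICKUP_DEV else 0
        if held >= DELAY_MS:
            return True
    return False

island = [50.0, 49.8, 49.3, 48.9, 48.6, 48.4, 48.3, 48.2, 48.2, 48.2, 48.1, 48.1]
normal_swing = [50.0, 49.9, 49.6, 49.4, 49.6, 49.8, 50.0, 50.1, 50.0, 50.0]
print("islanding event trips:", relay_trips(island))        # -> True
print("normal swing trips:  ", relay_trips(normal_swing))   # -> False
```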
Procedia PDF Downloads 534
16817 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate
Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori
Abstract:
Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, besides improving the quality of their products, also focus on producing them with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert the H₂S gas coming from the upstream units to elemental sulfur and minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, comprising a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually include a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to optimum recovery of sulfur during flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feedback control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for various reasons, such as the low priority given to environmental issues and a lack of access to after-sales service due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of a malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This temperature increase caused the failure of the TGT section and raised the SO₂ concentration from 750 ppm to 35,000 ppm. In addition, the lack of a control system for adjusting the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly because of measurement difficulties or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is controlled along with the secondary variable. This strategy for controlling a process system is referred to as "inferential control" and is considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation maximized the sulfur recovery in the Claus section.
Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission
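The following toy loop sketches the inferential-control idea in its simplest form: with the analyzer out of service, the combustion-air demand is trimmed from the first Claus reactor outlet temperature. The linear plant response, the setpoint and the PI gains are all assumed for illustration and bear no relation to the actual unit:

```python
T_SETPOINT = 315.0        # degC, assumed target for the first reactor outlet
KP, KI, DT = 0.8, 0.05, 1.0
air, integral = 100.0, 0.0                # air flow in arbitrary units

def reactor_outlet_temp(air_flow):
    """Toy steady-state plant: outlet temperature rises with excess air."""
    return 300.0 + 0.2 * air_flow         # purely illustrative relationship

for step in range(50):
    error = T_SETPOINT - reactor_outlet_temp(air)
    integral += error * DT
    air += KP * error + KI * integral     # PI trim on the air demand
print(f"air demand settles near {air:.1f}, outlet {reactor_outlet_temp(air):.1f} degC")
```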
Procedia PDF Downloads 76
16816 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition
Authors: Ali Nadi, Ali Edrissi
Abstract:
Relief demand and transportation link availability are the essential information needed for every natural disaster operation, yet this information is not at hand once a disaster strikes. In related works, relief demand and network condition have been evaluated using prediction methods. Nevertheless, predictions tend to over- or underestimate because of uncertainties and may lead to a failed operation. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of the emergency response, the proposed model uses reinforcement learning to optimize the total relief assessment time. The proposed model is tested on a real-size network problem. The simulation results indicate that the proposed model performs well in collecting real-time information.
Keywords: disaster management, real-time demand, reinforcement learning, relief demand
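As a hedged illustration of the reinforcement-learning component, the tabular Q-learning sketch below learns an order for visiting assessment zones that minimizes total travel (assessment) time on a tiny assumed network; the paper's actual model, state space and rewards are certainly richer:

```python
import random
random.seed(0)

travel = {(0, 1): 4, (0, 2): 7, (1, 2): 2, (1, 3): 6, (2, 3): 3}
travel.update({(b, a): t for (a, b), t in travel.items()})  # symmetric times

n_zones, alpha, gamma, eps = 4, 0.1, 0.95, 0.2
Q = {}  # state-action values: (zone, visited zones, next zone) -> value

def actions(z, visited):
    return [n for n in range(n_zones) if n not in visited and (z, n) in travel]

for _ in range(5000):
    z, visited = 0, frozenset([0])
    while actions(z, visited):
        acts = actions(z, visited)
        a = random.choice(acts) if random.random() < eps else \
            max(acts, key=lambda n: Q.get((z, visited, n), 0.0))
        r = -travel[(z, a)]               # shorter travel time = higher reward
        nxt = visited | {a}
        future = max((Q.get((a, nxt, n), 0.0) for n in actions(a, nxt)), default=0.0)
        Q.setdefault((z, visited, a), 0.0)
        Q[(z, visited, a)] += alpha * (r + gamma * future - Q[(z, visited, a)])
        z, visited = a, nxt

# greedy assessment tour after training
z, visited, tour = 0, frozenset([0]), [0]
while actions(z, visited):
    z = max(actions(z, visited), key=lambda n: Q.get((z, visited, n), 0.0))
    visited, tour = visited | {z}, tour + [z]
print("assessment order:", tour)
```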
Procedia PDF Downloads 318
16815 Investigation on Machine Tools Energy Consumptions
Authors: Shiva Abdoli, Daniel T. Semere
Abstract:
Several studies have been conducted on energy consumption in the cutting process. Most of them focus on measuring the consumption and proposing methods to reduce it. In this work, the relation between the cutting parameters and the consumption is investigated in order to establish a generalized energy consumption model that can be used for process and production planning in real production lines. Using the generalized model, process planning can be carried out taking energy into account as a function of the selected process parameters. Similarly, the generalized model can be used in production planning to select the right operational parameters, such as batch sizes, routing and buffer size, in a production line. The description and derivation of the model, as well as a case study, are given in this paper to illustrate its applicability and validity.
Keywords: process parameters, cutting process, energy efficiency, Material Removal Rate (MRR)
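One common empirical form for such a generalized model in the machining literature is SEC = C0 + C1/MRR (specific energy consumption versus material removal rate). The sketch below fits this form to illustrative, invented data and uses it to estimate the energy of a planned cut; it is offered as a plausible shape for the model, not the one derived in the paper:

```python
import numpy as np

mrr = np.array([20, 40, 80, 160, 320], float)   # mm^3/s, assumed test points
sec = np.array([9.5, 5.8, 3.9, 3.1, 2.6])       # J/mm^3, assumed measurements

C1, C0 = np.polyfit(1.0 / mrr, sec, 1)          # SEC is linear in 1/MRR
print(f"SEC ~= {C0:.2f} + {C1:.1f}/MRR  [J/mm^3]")

volume, planned_mrr = 5_000.0, 120.0            # mm^3 to remove, chosen MRR
energy = (C0 + C1 / planned_mrr) * volume       # J for the planned operation
print(f"estimated cutting energy: {energy/1000:.1f} kJ")
```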
Procedia PDF Downloads 499
16814 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption
Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský
Abstract:
Oil sludge, whose main characteristic is high acidity, is a waste product generated by the operation of refinery and petrochemical plants. A former refinery and petrochemical plant, Petrochema Dubová, is present in Slovakia as well. Its activity was to process crude oil through sulfonation and adsorption technology to produce lubricating and special oils, synthetic detergents, and special white oils for cosmetic and medical purposes. Seventy years ago, when this historical acid sludge burden was created, production took precedence over environmental awareness. That is why, as in many countries, a historical environmental burden persists in Slovakia to this day: 229,211 m³ of oil sludge in the middle of the Nízke Tatry National Park mountain chain. None of the treatment methods tried, biological or non-biological, proved suitable for processing or recovery, owing to various factors: e.g., strong aggressiveness and difficulty of handling because of its sludgy, liquid state. Incineration was also tested as a potential solution, but it did not prove suitable, as the concentration of SO₂ in the combustion gases was too high and could not be decreased below the acceptable value of 2000 mg·mn⁻³. For that reason, operation of the incineration plant was terminated, and the acid sludge landfills remain to this day. The objective of this paper is to present a new possibility for processing and valorizing this acid sludge waste. The oil sludge was processed by an effective separation technique, thermal desorption, which splits the sludgy material into the matrix (soil, sediments) and organic contaminants. To boost the efficiency of acid sludge processing through thermal desorption, the work presents the possible application of an original technology, the Method of Blowing Decomposition, for recovering the organic matter as technological lubricating oil.
Keywords: hazardous waste, oil sludge, remediation, thermal desorption
Procedia PDF Downloads 200
16813 Information Technology Service Management System Measurement Using ISO20000-1 and ISO15504-8
Authors: Imam Asrowardi, Septafiansyah Dwi Putra, Eko Subyantoro
Abstract:
Process assessments can improve IT service management system (IT SMS) processes, but the assessment method is not always transparent. This paper outlines a project to develop a solution-mediated process assessment tool that enables transparent and objective SMS process assessment. Using the international standards for SMS and process assessment, the tool is being developed following the international standard approach and is evaluated by expert judgment from committee members and ITSM practitioners.
Keywords: SMS, tools evaluation, ITIL, ISO service
Procedia PDF Downloads 482
16812 Applying Simulation-Based Digital Teaching Plans and Designs in Operating Medical Equipment
Authors: Kuo-Kai Lin, Po-Lun Chang
Abstract:
Background: The Emergency Care Research Institute released a list of the top 10 medical technology hazards for 2017, with the following hazard topping the list: 'infusion errors can be deadly if simple safety steps are overlooked.' In addition, hospitals use various assessment items to evaluate the safety of their medical equipment, confirming the importance of medical equipment safety. In recent years, the topic of patient safety has garnered increasing attention. Accordingly, various agencies have established patient safety-related committees to coordinate, collect, and analyze information regarding abnormal events associated with medical practice, and activities to promote and improve employee training have been introduced to diminish the recurrence of medical malpractice. Objective: To allow nursing personnel to acquire the skills needed to operate common medical equipment, and to update and review such skills whenever necessary, in order to elevate medical care quality and reduce patient injuries caused by medical equipment operation errors. Method: A quasi-experimental design was adopted, and nurses from a regional teaching hospital were selected as the study sample. Online videos demonstrating the operation of common medical equipment were made, and quick response codes were designed so that the nursing personnel could quickly access the videos when necessary. Senior nursing supervisors and equipment experts were invited to formulate a 'Scale-based Questionnaire for Assessing Nursing Personnel's Operational Knowledge of Common Medical Equipment' to evaluate the nursing personnel's literacy regarding the operation of the medical equipment. From March to October 2017, employee training on medical equipment operation and a practice course (simulation course) were implemented, after which the effectiveness of the training and practice course was assessed. Results: Before and after the training and practice course, the 66 participating nurses scored 58 and 87, respectively, on 'operational knowledge of common medical equipment' (a statistically significant difference; t = -9.407, p < .001); 53.5 and 86.3 on 'operational knowledge of 12-lead electrocardiography' (z = -2.087, p < .01); 40 and 79.5 on 'operational knowledge of cardiac defibrillators' (z = -3.849, p < .001); 90 and 98 on 'operational knowledge of Abbott pumps' (z = -1.841, p = 0.066); and 8.7 and 13.7 on 'perceived competence' (a statistically significant difference; t = -2.77, p < .05). In the participating hospital, medical equipment operation errors were observed in both 2016 and 2017; however, since the implementation of the intervention, no medical equipment operation errors had been observed up to October 2017, which can be regarded as the secondary outcome of this study. Conclusion: In this study, innovative teaching strategies effectively enhanced the professional literacy and skills of nursing personnel in operating medical equipment, and the training and practice course elevated their related literacy and perceived competence. The nursing personnel were thus able to operate the medical equipment accurately and avoid operational errors that might jeopardize patient safety.
Keywords: medical equipment, digital teaching plan, simulation-based teaching plan, operational knowledge, patient safety
Procedia PDF Downloads 138
16811 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usage and user behavior, plays an important role in the reliability of simulations, but it is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electrical), indoor environment, inhabitants' comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results provide an analysis of the energy performance gap for an existing residential case study undergoing deep retrofit. They highlight the impact of the different building features on energy behavior and on the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network, with a view to its replication on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
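The step-by-step replacement logic can be pictured with the short sketch below, where standardized inputs are swapped for measured values one at a time and a usual calibration metric, CV(RMSE), is tracked. Note that run_model, the input names, and all values are placeholders, not the Pleiades API or the project's data:

```python
import numpy as np

measured = np.array([96, 112, 150, 144, 115, 93], float)  # kWh, monthly meter data (assumed)

def run_model(inputs):
    """Stand-in for a building-energy simulation returning monthly kWh."""
    base = np.array([100, 120, 160, 150, 120, 100], float)
    return base * inputs["setpoint_factor"] * inputs["occupancy_factor"]

def cv_rmse(sim, meas):
    return np.sqrt(np.mean((sim - meas) ** 2)) / np.mean(meas)

inputs = {"setpoint_factor": 1.0, "occupancy_factor": 1.0}        # standardized scenario
field_data = {"setpoint_factor": 0.93, "occupancy_factor": 1.02}  # from the sensor network

print(f"standardized: CV(RMSE) = {cv_rmse(run_model(inputs), measured):.2%}")
for name, value in field_data.items():      # replace one energy driver at a time
    inputs[name] = value
    print(f"+ measured {name}: CV(RMSE) = {cv_rmse(run_model(inputs), measured):.2%}")
```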
Procedia PDF Downloads 161
16810 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a process for managing energy consumption with a view to energy savings and efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation: it identifies individual appliances by analyzing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. To fit the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. The identification process likewise uses unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting its state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and real data from the Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously reported in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
Procedia PDF Downloads 82
16809 Optimising the Reservoir Operation Using Water Resources Yield and Planning Model at Inanda Dam, uMngeni Basin
Authors: O. Nkwonta, B. Dzwairo, F. Otieno, J. Adeyemo
Abstract:
The effective management of water resources is of great importance for ensuring a supply that supports changing water requirements over a selected planning horizon in a sustainable and cost-effective way. Essentially, the purpose of the water resources planning process is to balance the available water resources in a system against the water requirements and losses to which the system is subjected. In such situations, a water resources yield and planning model can be used to resolve these difficulties. It has an advantage over other models in managing model runs, developing a representative system network, modelling incremental sub-catchments, and offering a variety of standard system features, special modelling features, and run-result output options.
Keywords: complex, water resources, planning, cost effective, management
Procedia PDF Downloads 451
16808 A Performance Comparison between Conventional and Flexible Box Erecting Machines Using Dispatching Rules
Authors: Min Kyu Kim, Eun Young Lee, Dong Woo Son, Yoon Seok Chang
Abstract:
In this paper, we introduce a flexible box erecting machine (BEM) that swiftly and automatically transforms cardboard into a three-dimensional box. Recently, the parcel service and home-shopping industries have grown rapidly, and there is an increasing need for various box types to ship various products; however, workers cannot fold thousands of boxes manually in a day, so automatic BEMs are garnering greater attention. This study takes equipment operation into consideration as well as mechanical improvements in order to design a BEM able to outperform its conventional counterparts. We analyzed six dispatching rules – First In First Out (FIFO), Shortest Processing Time (SPT), Earliest Due Date (EDD), Setup Avoidance, EDD + SPT, and EDD + Setup Avoidance – to determine which is most suitable for BEM operation. Consequently, SPT and Setup Avoidance were found to be the most critical rules, followed by EDD + Setup Avoidance, EDD + SPT, EDD, and FIFO. This hierarchy was valid for both our conventional BEM and our new flexible BEM from the viewpoint of processing time. We believe that this research can contribute to flexible BEM management, which has the potential to increase productivity and convenience.
Keywords: automation, box erecting machine, dispatching rule, setup time
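A compact way to see how such rules are compared is a single-machine simulation with sequence-dependent setups (box-type changes), as sketched below with invented job data; the study's actual evaluation of the six rules is of course more detailed:

```python
jobs = [  # (job id, box type, processing time, due date) -- assumed values
    (1, "A", 5, 20), (2, "B", 3, 9), (3, "A", 8, 30), (4, "C", 2, 12), (5, "B", 6, 25),
]
SETUP = 4  # time lost whenever the box type changes

def simulate(order):
    t, last_type, flow, tardy = 0, None, 0, 0
    for _, box, p, due in order:
        t += (SETUP if box != last_type else 0) + p
        flow += t                       # accumulate flow time
        tardy += max(0, t - due)        # accumulate tardiness
        last_type = box
    return flow, tardy

rules = {
    "FIFO": jobs,
    "SPT": sorted(jobs, key=lambda j: j[2]),
    "EDD": sorted(jobs, key=lambda j: j[3]),
    "Setup avoidance": sorted(jobs, key=lambda j: j[1]),  # group equal box types
}
for name, order in rules.items():
    print(f"{name:16s} total flow time / tardiness: {simulate(order)}")
```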
Procedia PDF Downloads 364
16807 Optimization and Design of Current-Mode Multiplier Circuits with Applications in Analog Signal Processing for Gas Industrial Package Systems
Authors: Mohamad Baqer Heidari, Hefzollah.Mohammadian
Abstract:
This brief presents two original implementations of improved-accuracy current-mode multiplier/divider circuits. Besides their simplicity, these structures offer very small linearity errors obtained as a result of the proposed design techniques (0.75% and 0.9%, respectively, over an extended range of input currents). The circuits permit facile reconfiguration, and the presented structures represent the functional basis for implementing complex function-synthesizer circuits. The proposed computational structures are designed for implementation in 0.18-µm CMOS technology with low-voltage operation (a supply voltage of 1.2 V). The circuits' power consumptions are 60 and 75 µW, respectively, while their frequency bandwidths are 79.6 and 59.7 MHz, respectively.
Keywords: analog signal processing, current-mode operation, functional core, multiplier, reconfigurable circuits, industrial package systems
Procedia PDF Downloads 375
16806 Improving the LDMOS Temperature Compensation Bias Circuit to Optimize Back-Off
Authors: Antonis Constantinides, Christos Yiallouras, Christakis Damianou
Abstract:
The application of today's semiconductor transistors in high-power UHF DVB-T linear amplifiers has evolved significantly through LDMOS technology. This gives engineers the option of designing a single-transistor signal amplifier with output power and linearity that were previously unobtainable using bipolar junction transistors or the later first-generation MOSFETs. The stability of the LDMOS quiescent current under thermal variations guarantees robust operation in any DVB-T signal amplifier topology. Otherwise, progressively uncontrolled heat build-up in the LDMOS case can degrade the amplifier's crucial parameters of gain, linearity, and RF stability, resulting in dysfunctional operation or total destruction of the unit. This paper presents an approach more sophisticated than the traditional biasing circuits used so far in LDMOS DVB-T amplifiers. It utilizes microprocessor control technology, providing stability in topologies where IDQ must be perfectly accurate.
Keywords: LDMOS, amplifier, back-off, bias circuit
Procedia PDF Downloads 340
16805 Combustion Improvements by C4/C5 Bio-Alcohol Isomer Blended Fuels Combined with Supercharging and EGR in a Diesel Engine
Authors: Yasufumi Yoshimoto, Enkhjargal Tserenochir, Eiji Kinoshita, Takeshi Otaka
Abstract:
Next-generation bio-alcohols produced from non-food sources such as cellulosic biomass are promising renewable energy sources. The present study investigates the engine performance, combustion characteristics, and emissions of a small single-cylinder direct-injection diesel engine fueled by four kinds of next-generation bio-alcohol isomer and diesel fuel blends at a constant blending ratio of 3:7 (by mass). The tested bio-alcohol isomers are n-butanol and iso-butanol (C4 alcohols), and n-pentanol and iso-pentanol (C5 alcohols). To obtain simultaneous reductions in NOx and smoke emissions, the experiments employed supercharging combined with EGR (Exhaust Gas Recirculation). The boost pressures were fixed at two conditions, 100 kPa (naturally aspirated operation) and 120 kPa (supercharged operation), provided by a Roots blower type supercharger. The EGR rates were varied from 0 to 25% using a cooled EGR technique. The results showed that, both with and without supercharging, all the bio-alcohol blended diesel fuels improved the trade-off relation between NOx and smoke emissions at all EGR rates while maintaining good engine performance, when compared with diesel fuel operation. It was also found that, regardless of boost pressure and EGR rate, the ignition delays of the tested bio-alcohol isomer blends are in the order of iso-butanol > n-butanol > iso-pentanol > n-pentanol. Overall, it was concluded that, except for the changes in ignition delay, the influence of the bio-alcohol isomer blends on engine performance, combustion characteristics, and emissions is relatively small.
Keywords: alternative fuel, butanol, diesel engine, EGR (Exhaust Gas Recirculation), next generation bio-alcohol isomer blended fuel, pentanol, supercharging
Procedia PDF Downloads 169
16804 Automated Driving Deep Neural Networks Model Accuracy and Performance Assessment in a Simulated Environment
Authors: David Tena-Gago, Jose M. Alcaraz Calero, Qi Wang
Abstract:
The evolution and integration of automated vehicles have become more and more tangible in recent years. State-of-the-art technological advances in the field of camera-based Artificial Intelligence (AI) and computer vision greatly favor the performance and reliability of Advanced Driver Assistance Systems (ADAS), leading to greater knowledge of vehicular operation and closer resemblance to human behavior. However, the exclusive use of this technology still seems insufficient to fully control vehicular operation. To reveal the degree of accuracy of current camera-based automated driving AI modules, this paper studies the structure and behavior of one of the main solutions in a controlled testing environment. The results clearly show the lack of reliability when the AI model is used exclusively in the perception stage, entailing the use of additional complementary sensors to improve safety and performance.
Keywords: accuracy assessment, AI-driven mobility, artificial intelligence, automated vehicles
Procedia PDF Downloads 115
16803 Optimizing Performance of Tablet's Direct Compression Process Using Fuzzy Goal Programming
Authors: Abbas Al-Refaie
Abstract:
This paper aims to improve the performance of the tableting process using statistical quality control and fuzzy goal programming. Statistical control tools were used to characterize the existing process for three critical responses: the averages of tablet weight, hardness, and thickness. At the initial process factor settings, the estimated process capability index values for the averages of tablet weight, hardness, and thickness were 0.58, 3.36, and 0.88, respectively. An L9 orthogonal array was used for the experimental design. Fuzzy goal programming was then employed to find the optimal combination of factor settings. The optimization results showed that the process capability index values for the averages of tablet weight, hardness, and thickness improved to 1.03, 4.42, and 1.42, respectively. These improvements resulted in significant savings in quality and production costs.
Keywords: fuzzy goal programming, control charts, process capability, tablet optimization
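For reference, capability values like those quoted above follow the usual calculation Cpk = min(USL - mean, mean - LSL) / (3 * sigma). The sketch below applies it to illustrative tablet-weight data with assumed specification limits, not the paper's measurements:

```python
import statistics

weights = [248, 251, 250, 249, 252, 250, 247, 251, 249, 250]  # mg, assumed sample
LSL, USL = 245.0, 255.0                                       # assumed spec limits

mean = statistics.mean(weights)
sigma = statistics.stdev(weights)           # sample standard deviation
cpk = min(USL - mean, mean - LSL) / (3 * sigma)
print(f"mean={mean:.1f} mg, sigma={sigma:.2f} mg, Cpk={cpk:.2f}")
```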
Procedia PDF Downloads 271
16802 Linearization and Process Standardization of Construction Design Engineering Workflows
Authors: T. R. Sreeram, S. Natarajan, C. Jena
Abstract:
Civil engineering construction is a network of tasks of varying complexity, and streamlining and standardization are the only way to establish a systematic approach to design. While off-the-shelf tools such as AutoCAD play a role in the realization of a design, the repeatable process in which these tools are deployed is often ignored. The present paper addresses this challenge through a sustainable design process and effective standardization at all stages of the design workflow. This is demonstrated through a construction case study, and further improvement points are highlighted.
Keywords: system, lean, value stream, process improvement
Procedia PDF Downloads 123
16801 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution
Authors: Peter G. Hollis, Kim G. Clarke
Abstract:
The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously quantified in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure of the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa, failing which a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems, where significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first-order model (without incorporation of τ) and a second-order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor, and is the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, and ANOVA was used to determine the significance of the effects at a 95% confidence interval. KP varied with the system parameters, with the impact of solids loading statistically significant at the 95% confidence level. Increased solids loading consistently reduced KP, an effect magnified at high alkane concentrations, with a minimum KP of 0.024 s⁻¹ observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s⁻¹ recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, excluding KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with errors of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has a far-reaching impact on industrial bioprocesses, ensuring these systems are not transport-limited during scale-up and operation. This study has shown the incorporation of τ to be essential for KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it must be determined individually for each set of process conditions.
Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag
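The core point, that ignoring KP biases KLa downward, can be reproduced with the short fit below. The measured signal is modelled as the true first-order DO response filtered by a first-order probe, giving the second-order expression in probe_response; the data are synthetic (KLa = 0.02 s⁻¹, KP = 0.024 s⁻¹ as in the worst case above, plus noise), not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

KP = 0.024  # probe constant in s^-1 (tau = 41.6 s), measured in a separate step test

def probe_response(t, kla, c_star, kp=KP):
    """DO signal seen by a first-order probe during a sparge-gas step change."""
    return c_star * (1 - (kp * np.exp(-kla * t) - kla * np.exp(-kp * t)) / (kp - kla))

rng = np.random.default_rng(1)
t = np.linspace(0, 400, 200)
data = probe_response(t, 0.02, 100.0) + rng.normal(0, 0.5, t.size)  # synthetic trace

# first-order model (probe lag ignored) under-predicts KLa ...
(kla_1st, _), _ = curve_fit(lambda t, k, c: c * (1 - np.exp(-k * t)), t, data, p0=[0.01, 90.0])
# ... while the second-order model (lag included) recovers it
(kla_2nd, _), _ = curve_fit(probe_response, t, data, p0=[0.01, 90.0])
print(f"KLa ignoring probe lag: {kla_1st:.4f} s^-1; with lag: {kla_2nd:.4f} s^-1")
```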
Procedia PDF Downloads 266
16800 Dual Set Point Governor Control Structure with Common Optimum Temporary Droop Settings for both Islanded and Grid Connected Modes
Authors: Deepen Sharma, Eugene F. Hill
Abstract:
For nearly 100 years, hydro-turbine governors have operated with only a frequency set point. This natural governor action means that the governor responds to disturbances in system frequency with changing megawatt output. More and more, power system managers are demanding that governors operate with constant megawatt output. One way of doing this is to introduce a second set point into the control structure, called a power set point. The control structure investigated and analyzed in this paper is unique in that it utilizes a power reference set point in addition to the conventional frequency reference set point. An optimum set of temporary droop parameters, derived from the turbine-generator inertia constant and the penstock water start time for stable islanded operation, is shown to be equally applicable for a satisfactory rate of generator loading in grid-connected mode. A theoretical development shows why this is the case. The performance of the control structure has been investigated and established based on a simulation study in MATLAB/Simulink as well as by testing the real-time controller performance on a 15 MW Kaplan turbine and generator. Recordings were made using the LabVIEW data acquisition platform. The hydro-turbine governor control structure investigated in this paper thus eliminates the need for separate sets of temporary droop parameters, one valid for islanded mode and the other for interconnected operation mode.
Keywords: frequency set point, hydro governor, interconnected operation, isolated operation, power set point
Procedia PDF Downloads 367
16799 Modeling and Power Control of DFIG Used in Wind Energy System
Authors: Nadia Ben Si Ali, Nadia Benalia, Nora Zerzouri
Abstract:
Wind energy generation has attracted great interest in recent years. Doubly Fed Induction Generators (DFIGs) are widely deployed in wind turbines because variable-speed wind turbines have many advantages over fixed-speed generation, such as increased energy capture, operation at the maximum power point, improved efficiency, and better power quality. This paper presents the operation and vector control of a DFIG system where the stator is connected directly to a stiff grid and the rotor is connected to the grid through a bidirectional back-to-back AC-DC-AC converter. The basic operational characteristics, the mathematical model of the aerodynamic system, and the vector control technique used to obtain decoupled control of the powers are investigated using MATLAB/Simulink.
Keywords: wind turbine, Doubly Fed Induction Generator, wind speed controller, power system stability
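A short numeric sketch of the operating principle behind this converter arrangement: in an ideal DFIG the rotor circuit handles roughly the slip fraction of the stator power (P_r ≈ -s * P_s), which is why the back-to-back converter is rated at only a fraction of the machine power. The figures below are illustrative, not from the paper:

```python
f_grid, poles = 50.0, 4
sync_speed = 120 * f_grid / poles            # synchronous speed in rpm (1500)

for rotor_rpm in (1200, 1500, 1800):         # sub-, at-, and super-synchronous
    s = (sync_speed - rotor_rpm) / sync_speed
    p_stator = 2.0e6                         # W, assumed stator power to the grid
    p_rotor = -s * p_stator                  # W, ideal rotor slip power
    print(f"{rotor_rpm} rpm: slip={s:+.2f}, rotor power={p_rotor/1e3:+.0f} kW")
```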
Procedia PDF Downloads 379
16798 Effect of Nitrogen Gaseous Plasma on Cotton Fabric Dyed with Reactive Yellow105
Authors: Mohammad Mirjalili, Hamid Akbarpour
Abstract:
In this work, a well-bleached cotton sample was dyed with Reactive Yellow 105 dye; subsequently, the dyed sample was exposed to a nitrogen-gas plasma for exposure times of 1 and 5 minutes. The effect of the plasma on the fabric's surface morphology was studied by scanning electron microscopy (SEM). The CIELab values, K/S, and %R of the treated and untreated samples were measured with a reflectance spectrophotometer. The experiments show that, for the washed sample dyed with Reactive Yellow 105, dye fastness decreases as the plasma exposure time increases. In addition, increasing the plasma exposure time at constant pressure increases the destructive effect on the surface morphology of the dyed samples.
Keywords: cotton fabric, nitrogen cold plasma, reflectance spectrophotometer, scanning electron microscope (SEM), Reactive Yellow 105 dye
Procedia PDF Downloads 257