Search results for: graph operation

958 Experimental Study of Al₂O₃ and SiC Nano Particles on Tensile Strength of Al 1100 Sheet Produced by Accumulative Press Bonding Process

Authors: M. Zadshakoyan, H. Marassem Bonab, P. M. Keshtiban

Abstract:

Severe plastic deformation (SPD) processes are widely used to optimize the microstructure, strength and mechanical properties of metals. Processes such as ARB and APB can have a considerable impact on improving the properties of metals. Aluminum is, after steel, the most widely used metal; because of its low strength, however, there are restrictions on its use, and further studies are required to increase the strength and improve the mechanical properties of this lightweight metal. In this study, annealed aluminum with a yield strength of 85 MPa and a tensile strength of 124 MPa was sliced into sheets with dimensions of 30 mm by 25 mm and a thickness of 1.5 mm. The sheets were then press bonded for 6 cycles, which increased the ultimate strength to 281 MPa. In addition, by adding 0.1 wt% of SiC particles to the interface of the sheets, the sheets were press bonded for 6 cycles to achieve a homogeneous composite. The same operation was repeated using Al2O3 particles and a mixture of SiC+Al2O3 particles, and the strength and elongation of the produced composites were compared with each other and with pure aluminum press bonded for 6 cycles. The results indicated that the ultimate strength of the Al/SiC composite was 2.6 times greater than that of the annealed aluminum, while the Al/Al2O3 and Al/Al2O3+SiC samples had lower strength than the Al/SiC sample. The pure aluminum press bonded 6 times had the lowest strength, 2.2 times greater than the annealed aluminum. The strength of aluminum was thus increased by producing the metal matrix composite. It was also found that the hardness of pure aluminum increased 1.7 times after 6 cycles of the APB process, and the hardness of the composite samples improved further, so that the hardness of Al/SiC increased up to 2.51 times that of the annealed aluminum.

Keywords: APB, nano composite, nano particles, severe plastic deformation

Procedia PDF Downloads 277
957 Investigating the Effect of the Shape of the Side Supports of the Gates of the Gotvand Reservoir Dam (from the Peak Overflows) on the Narrowing Coefficients

Authors: M. Abbasi

Abstract:

A spillway structure is used to pass excess water and floods from upstream to downstream or to a tributary. The spillway is considered one of the key components of a dam, and the failure of many dams is attributed to the inefficiency of their spillways. Weirs should therefore be designed as strong, reliable and high-performance structures that are ready for use in all conditions and able to drain the flood, so that heavy casualties and financial losses are avoided when a flood occurs. The purpose of this study is to simulate the flow pattern passing over the peak spillway in order to optimize and adjust the height of the spillway walls. In this research, the effect of the shape of the side wings on the flow pattern over the peak spillways of the Gotvand reservoir dam was simulated and modelled using Flow3D software. Side wings with rounded walls and six different approach angles were used, and different values of H/Hd were used to examine the effect of the reservoir head. The results showed that, at a constant H/Hd ratio, increasing the approach angle of the side wing causes the flow depth first to decrease and then to increase. These changes were the opposite for the depth-averaged velocity of the flow and the depth-averaged concentration of the air entrained in the flow. At the same time, at a constant approach angle of the side wing, the flow depth increases with increasing H/Hd ratio. In general, a correct understanding of the operation of spillways and a correct design can significantly reduce construction costs and solve flooding problems.

Keywords: effect of the shape, gotvand reservoir dam, narrowing coefficients, supports of the gates

Procedia PDF Downloads 46
956 A Molecular Dynamics Study on Intermittent Plasticity and Dislocation Avalanche Emissions in FCC and BCC Crystals

Authors: Javier Varillas, Jorge Alcalá

Abstract:

We investigate dislocation avalanche phenomena in face-centered cubic (FCC) and body-centered cubic (BCC) crystals using massive, large-scale molecular dynamics (MD) simulations. The analysis is focused on the intermittent development of dense dislocation arrangements subjected to uniaxial tensile straining under displacement control. We employ a novel computational scheme that allows us to inject an entangled dislocation structure in periodic MD domains. We assess the emission of plastic bursts (or dislocation avalanches) in terms of the sharp stress drops detected in the stress-strain curve. The plastic activity corresponds to the sporadic operation of specific dislocation glide processes exhibiting quiescent periods between successive avalanche events. We find that the plastic intermittences in our simulations do not overlap in time under sufficiently low strain rates, as dissipation operates faster than driving; the dense dislocation networks then evolve through the emission of dislocation avalanche events whose carried slip adheres to self-organized power-law distributions. These findings enable the extension of the slip distributions obtained from strict displacement-controlled micropillar compression experiments towards smaller values of slip size. Our results furnish further understanding of the development of entangled dislocation networks in metal plasticity, including specific mechanisms of dislocation propagation and annihilation, along with the evolution of specific dislocation populations through dislocation density analyses.
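
As an illustration of the kind of statistics involved (not part of the study itself), the sketch below shows how a power-law exponent could be estimated from a set of avalanche slip sizes using the standard continuous maximum-likelihood estimator; the synthetic slip data, the threshold s_min and all numerical values are hypothetical.

```python
import numpy as np

def powerlaw_exponent_mle(sizes, s_min):
    """Continuous maximum-likelihood estimate of a power-law exponent
    P(s) ~ s^(-alpha) for avalanche sizes s >= s_min
    (Clauset-Shalizi-Newman estimator)."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    n = s.size
    alpha = 1.0 + n / np.sum(np.log(s / s_min))
    sigma = (alpha - 1.0) / np.sqrt(n)   # standard error of the estimate
    return alpha, sigma

# Hypothetical example: slip sizes extracted from stress drops in an MD run
rng = np.random.default_rng(0)
slips = 1e-3 * rng.pareto(0.6, size=5000) + 1e-3   # synthetic heavy-tailed data
alpha, err = powerlaw_exponent_mle(slips, s_min=2e-3)
print(f"avalanche size exponent ~ {alpha:.2f} +/- {err:.2f}")
```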

Keywords: dislocations, intermittent plasticity, molecular dynamics, slip distributions

Procedia PDF Downloads 119
955 Bluetooth Communication Protocol Study for Multi-Sensor Applications

Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li

Abstract:

Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE’s energy efficiency, interoperability with smartphones, and Over the Air (OTA) capabilities are essential features for ultralow-power devices, which are usually designed with size and cost constraints. Most current research regarding the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with most results being presented in the form of mathematical models and computer software simulations. Such modeling and simulation are important for comprehension of the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin cell battery-powered BLE data acquisition device, with a 4-in-1 sensor and one accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely, followed by a power analysis with each of the sensors individually turned on and transmitting data, and concluding with the power consumption evaluation when both sensors are on and broadcasting data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, where the measured energy levels are matched to the BLE behavior and sensor activity.
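
For readers less familiar with this kind of power analysis, the following sketch illustrates, under stated assumptions, how an average current draw and a rough coin-cell lifetime could be derived from a sampled current trace. The trace values, sample period and 230 mAh cell capacity are hypothetical and are not measurements from the paper; the estimate ignores self-discharge and voltage-dependent cutoff.

```python
import numpy as np

def battery_life_hours(current_samples_mA, sample_period_s, battery_capacity_mAh):
    """Estimate average current draw and coin-cell lifetime from sampled
    current-consumption measurements (a simplified first-order estimate)."""
    i = np.asarray(current_samples_mA, dtype=float)
    avg_mA = i.mean()
    # charge consumed over the measurement window, in mAh
    window_mAh = avg_mA * (i.size * sample_period_s) / 3600.0
    return avg_mA, battery_capacity_mAh / avg_mA, window_mAh

# Hypothetical trace: advertising baseline with a periodic sensor/transmit spike
trace = np.concatenate([np.full(900, 0.015), np.full(100, 8.0)])   # mA
avg, hours, used = battery_life_hours(trace, sample_period_s=0.001,
                                      battery_capacity_mAh=230)    # CR2032-class cell
print(f"average current {avg:.3f} mA -> ~{hours:.0f} h on a 230 mAh cell")
```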

Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node

Procedia PDF Downloads 73
954 Embolization of Spinal Dural Arteriovenous Fistulae: Clinical Outcomes and Long-Term Follow-Up: A Multicenter Study

Authors: Walid Abouzeid, Mohamed Shadad, Mostafa Farid, Magdy El Hawary

Abstract:

The most frequent treatable vascular abnormality of the spinal canal is the spinal dural arteriovenous fistula (SDAVF), which causes progressive para- or quadriplegia, mostly affecting elderly males. SDAVFs are typically located in the thoracolumbar region. The main goal of treatment must be to obliterate the shunting zone via superselective embolization using a liquid embolic agent. This study aims to evaluate the endovascular technique as a safe and efficient approach for the treatment of SDAVFs, with emphasis on long-term clinical outcomes. Study design: a retrospective clinical case study. From May 2010 to May 2017, 15 patients who had symptoms attributed to SDAVFs underwent the operation in the Departments of Neurosurgery of Suhag, Tanta, and Al-Azhar Universities and the Department of Interventional Radiology, Ain Shams University. All the patients had varying degrees of progressive spastic paraparesis with or without sphincteric disturbances. Endovascular embolization was used in all cases. Fourteen were males, with ages ranging from 45 to 74 years. After the treatment, a good outcome was found in five patients (33.3%), a moderate outcome in six patients (40%), and a poor outcome in four patients (26.7%). Spinal AVFs can be treated safely and effectively by the endovascular approach. Generally, there is no correlation between the disappearance of MRI abnormalities and significant clinical improvement. The pretreatment clinical state of the patient is directly proportional to the clinical outcome. Because responses can be unexpected, embolization should be attempted even if the patient is in a poor clinical condition.

Keywords: spine, arteriovenous, fistula, endovascular, embolization

Procedia PDF Downloads 90
953 Residual Lifetime Estimation for Weibull Distribution by Fusing Expert Judgements and Censored Data

Authors: Xiang Jia, Zhijun Cheng

Abstract:

The residual lifetime of a product is the operating time between the current time and the time point at which failure happens. Residual lifetime estimation is therefore important in reliability analysis. To predict the residual lifetime, it is necessary to assume or verify a particular distribution that the lifetime of the product follows, and the two-parameter Weibull distribution is frequently adopted to describe lifetimes in reliability engineering. Due to time constraints and cost reduction, a life testing experiment is usually terminated before all the units have failed, so censored data is usually collected. In addition, other information can also be obtained for reliability analysis. Expert judgements are considered, as experts can commonly provide useful information concerning reliability. Therefore, the residual lifetime is estimated for the Weibull distribution by fusing censored data and expert judgements in this paper. First, closed forms for the point estimate and confidence interval of the residual lifetime under the Weibull distribution are presented. Next, the expert judgements are regarded as prior information, and a method to determine the prior distribution of the Weibull parameters is developed. For completeness, both the case of a single expert judgement and the case of multiple expert judgements are considered. Further, the posterior distribution of the Weibull parameters is derived. Considering that it is difficult to derive the posterior distribution of the residual lifetime directly, a sample-based method is proposed to generate posterior samples of the Weibull parameters based on the Markov Chain Monte Carlo (MCMC) method. These samples are used to obtain the Bayes estimate and credible interval for the residual lifetime. Finally, an illustrative example is discussed to show the application. It demonstrates that the proposed method is simple, satisfactory, and robust.
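
As a simple illustration of the quantity being estimated (not the authors' fusion procedure), the sketch below computes the residual lifetime of a Weibull-distributed product given assumed shape and scale parameters: the median residual life has a closed form obtained from the conditional survival function, and the mean residual life is obtained by numerical integration of the survival function. The parameter values are hypothetical.

```python
import numpy as np
from scipy.integrate import quad

def weibull_survival(t, beta, eta):
    """Survival function of a two-parameter Weibull distribution."""
    return np.exp(-(t / eta) ** beta)

def median_residual_life(t, beta, eta):
    """Closed-form median residual lifetime given survival up to time t,
    from S(t + r) / S(t) = 0.5."""
    return eta * ((t / eta) ** beta + np.log(2.0)) ** (1.0 / beta) - t

def mean_residual_life(t, beta, eta):
    """Mean residual lifetime m(t) = int_t^inf S(u) du / S(t), evaluated numerically."""
    integral, _ = quad(weibull_survival, t, np.inf, args=(beta, eta))
    return integral / weibull_survival(t, beta, eta)

# Hypothetical parameters: shape beta and scale eta as if fitted from censored data
beta, eta, t_now = 1.8, 1200.0, 500.0   # hours
print("median residual life:", round(median_residual_life(t_now, beta, eta), 1))
print("mean residual life:  ", round(mean_residual_life(t_now, beta, eta), 1))
```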

Keywords: expert judgements, information fusion, residual lifetime, Weibull distribution

Procedia PDF Downloads 123
952 Quality Approaches for Mass-Produced Fashion: A Study in Malaysian Garment Manufacturing

Authors: N. J. M. Yusof, T. Sabir, J. McLoughlin

Abstract:

The garment manufacturing industry involves sequential processes that are subject to uncontrollable variations. The industry depends on the skill of labour in handling a variety of fabrics, accessories and machines, as well as complicated sewing operations. For these reasons, garment manufacturers have created systems to monitor and control product quality regularly by applying quality approaches to minimize variation. The aims of this research were to ascertain the quality approaches deployed by Malaysian garment manufacturers in three key areas: quality systems and tools; quality control and types of inspection; and sampling procedures chosen for garment inspection. The research also aimed to distinguish the quality approaches used by companies that supply finished garments to the domestic and international markets. Feedback from each company's representative was obtained using an online survey, which comprised five sections and 44 questions on the organizational profile and the quality approaches used in the garment industry. The results revealed that almost all companies had established their own mechanism of process control by conducting a series of quality inspections on daily production, whether or not these had been formally set up. Quality inspection was the predominant quality control activity in garment manufacturing, and the level of complexity of these activities was substantially dictated by the customers. AQL-based sampling was utilized by companies dealing with the export market, whilst almost all the companies that concentrated only on the domestic market were comfortable using their own sampling procedures for garment inspection. This research provides an insight into the implementation of quality approaches that were perceived as important and useful in the garment manufacturing sector, which is truly labour-intensive.

Keywords: garment manufacturing, quality approaches, quality control, inspection, Acceptance Quality Limit (AQL), sampling

Procedia PDF Downloads 421
951 Experimental Study of Particle Deposition on Leading Edge of Turbine Blade

Authors: Yang Xiao-Jun, Yu Tian-Hao, Hu Ying-Qi

Abstract:

Foreign objects ingested during the operation of an aircraft engine, impurities in the aircraft fuel, and products of incomplete combustion can produce deposits on the surface of the turbine blades. These deposits reduce not only the turbine's operating efficiency but also the life of the turbine blades. Based on a small open wind tunnel, a simulation of deposits on the leading edge of the turbine has been carried out in this work, and the effect of film cooling on particulate deposition was investigated. Based on the analysis, the adhesion mechanism of molten pollutants reaching the turbine surface was simulated by matching the Stokes number, TSP (a dimensionless number characterizing particle phase transition) and Biot number of the test facility to those of the real engine. The thickness distribution and growth trend of the deposits have been observed with a high-power microscope and an infrared camera under different main-flow temperatures, particle solidification temperatures, and blowing ratios. The experimental results for leading-edge particulate deposition demonstrate that the thickness of the deposit increases with time until a quasi-stable thickness is reached, showing a striking effect of the blowing ratio on the deposition. Under different blowing ratios, there is a large difference in the thickness distribution of the deposit, and the deposition is minimal at a specific blowing ratio. In addition, the main-flow temperature and the solidification temperature of the particles have a great influence on the deposition.
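
For orientation, the Stokes number mentioned above compares the particle relaxation time with a characteristic flow time; the short sketch below computes it for an assumed paraffin particle approaching a leading edge. All property values are illustrative, not measurements from the study.

```python
def stokes_number(rho_p, d_p, velocity, mu_gas, length_scale):
    """Particle Stokes number Stk = rho_p * d_p^2 * U / (18 * mu * L),
    the ratio of particle relaxation time to the flow characteristic time."""
    tau_p = rho_p * d_p ** 2 / (18.0 * mu_gas)   # particle relaxation time, s
    return tau_p * velocity / length_scale

# Hypothetical values for a paraffin particle approaching a leading edge
stk = stokes_number(rho_p=900.0,        # paraffin density, kg/m^3
                    d_p=20e-6,          # particle diameter, m
                    velocity=30.0,      # mainstream velocity, m/s
                    mu_gas=1.8e-5,      # air dynamic viscosity, Pa*s
                    length_scale=0.02)  # leading-edge characteristic length, m
print(f"Stk = {stk:.2f}")   # Stk of order 1 or larger: particles deviate from
                            # streamlines and can impact the surface
```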

Keywords: deposition, experiment, film cooling, leading edge, paraffin particles

Procedia PDF Downloads 129
950 Retrofitting Residential Buildings for Energy Efficiency: An Experimental Investigation

Authors: Naseer M. A.

Abstract:

Buildings are major consumers of energy in both their construction and operation; they account for 40% of the world's energy use. It is estimated that 40-60% of this goes to conditioning the indoor environment. In India, as in many other countries, residential buildings have a major share (more than 50%) of the building sector, and of these, single-family units take the largest share. Single-family dwelling units in urban and fringe areas are built in two storeys to minimize the building footprint on small land parcels, and quite often the bedrooms are located on the first floor. Modern buildings are provided with reinforced concrete (RC) roofs that absorb heat throughout the day and radiate it into the interior during the night. Rooms that are occupied at night, such as bedrooms, therefore have uncomfortable indoor conditions. This has resulted in the use of active systems like air conditioners and air coolers, thereby increasing energy use. An investigation conducted by monitoring the thermal comfort conditions in residential buildings with RC roofs has shown that the interiors are indeed uncomfortable during the night hours. A sustainable solution to improve the thermal performance of RC roofs was developed through an experimental study in which the thermal comfort parameters were continuously monitored during summer, the most uncomfortable period in this climate. The study, conducted in southern peninsular India, shows that retrofitting existing residential buildings can provide a sustainable solution for abating the ever-increasing energy demand, especially given that these residential buildings, built for a normal life span of 40 years, would otherwise continue to consume energy for the rest of their useful life.

Keywords: energy efficiency, thermal comfort, retrofitting, residential buildings

Procedia PDF Downloads 232
949 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications

Authors: Farhad Salek, Shahaboddin Resalati

Abstract:

The production of electric vehicles (EVs) featuring lithium-ion battery technology has substantially escalated over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of electric vehicle (EV) batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution for such concerns involves extending the operational lifespan of electric vehicle (EV) batteries through their utilization in stationary energy storage systems during secondary applications. Such adoptions, however, require addressing the safety concerns associated with batteries’ knee points and thermal runaways. This paper develops an accurate mathematical model representative of the second-life battery packs from a cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the life span of the batteries in various operating conditions. The methodology and the algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
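
To make the cell-to-pack modelling step more concrete, the following sketch implements a basic first-order Thevenin equivalent circuit model of the kind the abstract refers to. The parameter values, OCV table and load profile are hypothetical placeholders, and the degradation/neural-network update of the parameters described in the paper is not included.

```python
import numpy as np

def simulate_ecm(current_A, dt_s, capacity_Ah, soc0, r0, r1, c1, ocv_points):
    """Discrete-time first-order Thevenin equivalent circuit model (ECM):
    terminal voltage = OCV(SoC) - I*R0 - V1, with V1 the RC polarization
    voltage. Positive current = discharge."""
    soc, v1 = soc0, 0.0
    a = np.exp(-dt_s / (r1 * c1))
    voltages = []
    for i in current_A:
        soc -= i * dt_s / (capacity_Ah * 3600.0)            # coulomb counting
        v1 = a * v1 + r1 * (1.0 - a) * i                    # RC branch update
        ocv = np.interp(soc, ocv_points[0], ocv_points[1])  # OCV lookup vs. SoC
        voltages.append(ocv - i * r0 - v1)
    return np.array(voltages)

# Hypothetical second-life cell parameters (would be identified from test data
# and updated over cycles by the degradation model)
ocv_table = ([0.0, 0.2, 0.5, 0.8, 1.0], [3.0, 3.3, 3.6, 3.9, 4.1])
profile = np.full(3600, 5.0)   # 1 h discharge at 5 A, 1 s steps
v = simulate_ecm(profile, dt_s=1.0, capacity_Ah=40.0, soc0=0.9,
                 r0=2e-3, r1=1.5e-3, c1=2e4, ocv_points=ocv_table)
print("terminal voltage after 1 h:", round(float(v[-1]), 3), "V")
```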

Keywords: second life battery, electric vehicles, degradation, neural network

Procedia PDF Downloads 36
948 Efficiency and Scale Elasticity in Network Data Envelopment Analysis: An Application to International Tourist Hotels in Taiwan

Authors: Li-Hsueh Chen

Abstract:

Efficient operation is increasingly important for hotel managers. Unlike the manufacturing industry, hotels cannot store their products. In addition, many hotels provide room service and food and beverage service simultaneously. When the efficiencies of hotels are evaluated, the internal structure should therefore be considered. Hence, based on the operational characteristics of hotels, this study proposes a DEA model to simultaneously assess the efficiencies of the room production division, food and beverage production division, room service division and food and beverage service division. However, not only the enhancement of efficiency but also the adjustment of scale can improve performance. In terms of the adjustment of scale, scale elasticity or returns to scale can help managers make decisions concerning expansion or contraction. In order to construct a reasonable approach to measure the efficiencies and scale elasticities of hotels, this study builds an alternative variable-returns-to-scale-based two-stage network DEA model with a combination of parallel and series structures to explore the scale elasticities of the whole system, the room production division, the food and beverage production division, the room service division and the food and beverage service division, based on data from the international tourist hotel industry in Taiwan. The results may provide valuable information on operational performance and scale for managers and decision makers.
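
As background on the underlying optimization, the sketch below solves a plain input-oriented variable-returns-to-scale (BCC) DEA program with SciPy. It is a single-stage simplification for illustration only, not the paper's two-stage parallel-series network model, and the hotel input/output data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(inputs, outputs, k):
    """Input-oriented variable-returns-to-scale (BCC) DEA efficiency of DMU k.
    inputs: (n_dmu, n_in) array, outputs: (n_dmu, n_out) array.
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.c_[-X[k].reshape(m, 1), X.T]          # sum lam*x_ij - theta*x_ik <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]           # -sum lam*y_rj <= -y_rk
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)    # VRS: sum lambda = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Hypothetical hotel data: inputs = [labour, cost], outputs = [room rev., F&B rev.]
X = np.array([[20, 300], [30, 420], [25, 380], [40, 500]], float)
Y = np.array([[150, 80], [180, 120], [160, 90], [200, 110]], float)
print([round(bcc_efficiency(X, Y, k), 3) for k in range(len(X))])
```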

Keywords: efficiency, scale elasticity, network data envelopment analysis, international tourist hotel

Procedia PDF Downloads 210
947 Assessing the Actions of the Farm Managers to Execute Field Operations at Opportune Times

Authors: G. Edwards, N. Dybro, L. J. Munkholm, C. G. Sørensen

Abstract:

Planning agricultural operations requires an understanding of when fields are ready for operations. However, determining a field's readiness is a difficult process that can involve large amounts of data and an experienced farm manager. A consequence of this is that operations are often executed when fields are unready, or partially unready, which can compromise results, incurring environmental impacts, decreased yield and increased operational costs. In order to assess the timeliness of operations' execution, a new scheme is introduced to quantify the aptitude of farm managers to plan operations. Two criteria are presented by which the execution of operations can be evaluated with respect to their exploitation of a field's readiness window. A dataset containing the execution dates of spring and autumn operations on 93 fields in Iowa, USA, over two years was considered as an example and used to demonstrate how operations' executions can be evaluated. The execution dates were compared with simulated data to gain a measure of how disparate the actual execution was from the ideal execution. The presented tool is able to evaluate the spring operations better than the autumn operations, as the data required to correctly parameterise the crop model was lacking. Further work is needed on the underlying models of the decision support tool in order for its situational knowledge to emulate reality more consistently. However, the assessment methods and evaluation criteria presented offer a standard by which operations' execution proficiency can be quantified, and they could be used to identify farm managers who require decisional support when planning operations, or as a means of incentivising and promoting the use of sustainable farming practices.

Keywords: operation management, field readiness, sustainable farming, workability

Procedia PDF Downloads 370
946 The Femoral Eversion Endarterectomy Technique with Transection: Safety and Efficacy

Authors: Hansraj Riteesh Bookun, Emily Maree Stevens, Jarryd Leigh Solomon, Anthony Chan

Abstract:

Objective: This was a retrospective cross-sectional study evaluating the safety and efficacy of femoral endarterectomy using the eversion technique with transection, as opposed to the conventional endarterectomy technique with either vein or synthetic patch arterioplasty. Methods: Between 2010 and mid 2017, 19 patients with a mean age of 75.4 years underwent eversion femoral endarterectomy with transection by a single surgeon. There were 13 males (68.4%), and the comorbid burden was as follows: ischaemic heart disease (53.3%), diabetes (43.8%), stage 4 kidney impairment (13.3%) and current or ex-smoking (73.3%). The indications were claudication (45.5%), rest pain (18.2%) and tissue loss (36.3%). Results: The technical success rate was 100%. One patient required a blood transfusion following bleeding from intraoperative losses. Two patients required blood transfusions for low postoperative haemoglobin concentrations, one of them in the context of myelodysplastic syndrome. There were no unexpected returns to theatre. The mean length of stay was 11.5 days, with two patients having inpatient stays of 36 and 50 days respectively due to the need for rehabilitation. There was one death unrelated to the operation. Conclusion: The eversion technique with transection is safe and effective, with low complication rates and a normally expected length of stay. It offers the advantage of not requiring a synthetic patch. The technique involves minimal extraneous dissection, as there is no need to harvest vein for a patch. Additionally, future endovascular interventions can be performed by puncturing the native vessel, and there is no change to the femoral bifurcation anatomy after this technique. We posit that this is a useful adjunct to the surgeon's panoply of vascular surgical techniques.

Keywords: endarterectomy, eversion, femoral, vascular

Procedia PDF Downloads 177
945 Exploring Polar Syntactic Effects of Verbal Extensions in Basà Language

Authors: Imoh Philip

Abstract:

This work investigates four verbal extensions, two in each set, resulting in two opposite effects on the valency of verbs in the Basà language. Basà is an indigenous language spoken in Kogi, Nasarawa, Benue and Niger states and in all the Federal Capital Territory (FCT) councils. Crozier & Blench (1992) and Blench & Williamson (1988) classify Basà as belonging to Proto-Kru, under the sub-phylum Western Kru. The work studies the effects of such morphosyntactic operations in Basà with special focus on 'reflexives' and 'reciprocals' versus 'causativization' and 'applicativization'; both sets are characterized by polar syntactic processes of either decreasing or increasing the verb's valency by one argument relative to the basic number of arguments, but by similar morphological processes. In addition to my intuitions as a native speaker of Basà, the data elicited for this work include discourse observation and staged and elicited spoken data from fluent native speakers. The paper argues that affixes attached to the verb root result either in deriving an intransitive verb from a transitive one or a transitive verb from a bi/ditransitive verb, or equally in increasing the verb's valency, deriving either a bitransitive verb from a transitive verb or a transitive verb from an intransitive one. Where the operation increases the verb's valency, it triggers a transformation of arguments in the derived structure; in this case, the applied arguments displace the inherent ones. This investigation can stimulate further study of other syntactic or morphosyntactic transformations in Basà and can also be replicated in other African and non-African languages.

Keywords: verbal extension, valency, reflexive, reciprocal, causativization, applicativization, Basà

Procedia PDF Downloads 187
944 Scheduling Method for Electric Heater in HEMS considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

The Home Energy Management System (HEMS), which enables residential consumers to contribute to demand response, has been attracting attention in recent years. An aim of HEMS is to minimize the consumers' electricity cost by controlling the use of their appliances according to the electricity price. The use of appliances in HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user's usage pattern of appliances should be modeled according to the external conditions, and the resultant usage pattern is related to the user's comfort in using each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the appropriate user comfort according to the external conditions for the next day. Within this usage range, an optimal scheduling of appliances is conducted so as to minimize the electricity cost while considering user comfort. Among home appliances, the electric heater (EH) is a representative appliance that is affected by the external temperature. In this paper, an optimal scheduling algorithm for the electric heater (EH) is addressed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user's comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH, and it also represents the impact of the comfort level on the scheduling result.
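
To illustrate the copula step in isolation, the sketch below fits a bivariate Gaussian copula between external temperature and heater usage via normal scores and derives a conditional usage-quantile range for the next day. The data are synthetic, the Gaussian-copula choice and the 90% level are assumptions made here for illustration, and the branch-and-bound scheduling itself is not shown.

```python
import numpy as np
from scipy.stats import norm, rankdata

def fit_gaussian_copula(x, y):
    """Fit a bivariate Gaussian copula: map each margin to uniform scores via
    ranks, transform to normal scores, and estimate their correlation."""
    u = rankdata(x) / (len(x) + 1.0)
    v = rankdata(y) / (len(y) + 1.0)
    z = np.c_[norm.ppf(u), norm.ppf(v)]
    return np.corrcoef(z.T)[0, 1]

def conditional_usage_interval(rho, temp_quantile, level=0.9):
    """Given tomorrow's forecast temperature quantile, return the copula-implied
    central interval of the usage quantile (the 'usage range' for scheduling)."""
    z_t = norm.ppf(temp_quantile)
    half = norm.ppf(0.5 + level / 2.0) * np.sqrt(1.0 - rho ** 2)
    return norm.cdf(rho * z_t - half), norm.cdf(rho * z_t + half)

# Hypothetical historical data: colder days -> longer heater operation
rng = np.random.default_rng(1)
temp = rng.normal(0, 5, 365)
usage = 6 - 0.4 * temp + rng.normal(0, 1, 365)   # hours of EH use per day
rho = fit_gaussian_copula(temp, usage)
print("copula correlation:", round(rho, 2))
print("usage-quantile range for a cold day (temp at 10th percentile):",
      conditional_usage_interval(rho, 0.10))
```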

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 561
943 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the mixture of power unit outputs of all members of the power system network such that the total fuel cost is minimized while the operating limits remain satisfied across all dispatch phases. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method but, like other gradient methods, it may become trapped in local minimum solutions and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator system test cases. The obtained results are compared to existing results based on QP, GAs and PSO. The results show that differential evolution is superior in obtaining a combination of power loads that fulfils the problem constraints and minimizes the total fuel cost. DE is found to be fast in converging to the optimal power generation loads and capable of handling the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission and maximize the reliability of the power provided to the customers.
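
A minimal sketch of the approach is given below, assuming a hypothetical three-generator test case with quadratic fuel costs, the power balance enforced by a penalty term and transmission losses neglected; it uses SciPy's built-in differential evolution rather than the authors' own DE implementation, and all coefficients are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical three-generator test case: cost_i(P) = a_i + b_i*P + c_i*P^2 ($/h)
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
p_min = np.array([100.0, 50.0, 50.0])     # MW
p_max = np.array([450.0, 350.0, 225.0])   # MW
demand = 800.0                            # MW (transmission losses neglected here)

def total_cost(p):
    fuel = np.sum(a + b * p + c * p ** 2)
    balance_violation = abs(np.sum(p) - demand)
    return fuel + 1e4 * balance_violation   # penalty enforces the power balance

result = differential_evolution(total_cost, bounds=list(zip(p_min, p_max)),
                                seed=1, tol=1e-8, maxiter=2000)
print("dispatch [MW]:", np.round(result.x, 1), " cost [$/h]:", round(result.fun, 1))
```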

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 265
942 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science

Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier

Abstract:

Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series exhibit fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data driven, enabling well-behaved signal components to be obtained without making any prior assumptions about the input data. Among the most popular time series decomposition techniques, and the most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above as well as their ability to denoise signals, to capture trends, to identify the components corresponding to the physical processes involved in the evolution of the observed system, and to deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series are discussed and compared.
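
As an illustration of one of the techniques compared, the sketch below implements a basic singular spectrum analysis (embedding into a trajectory matrix, SVD, and diagonal averaging) on a synthetic trend-plus-cycle series standing in for an ozone or rainfall record; the window length and number of components are arbitrary choices made here, not those of the study.

```python
import numpy as np

def ssa_decompose(x, window, n_components):
    """Basic singular spectrum analysis: embed the series into a trajectory
    (Hankel) matrix, take its SVD, and reconstruct each elementary component
    by diagonal averaging (Hankelization)."""
    x = np.asarray(x, float)
    n = x.size
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    components = []
    for j in range(min(n_components, s.size)):
        elem = s[j] * np.outer(u[:, j], vt[j])                    # rank-1 matrix
        comp = np.array([np.mean(np.diag(elem[:, ::-1], k - 1 - d))  # anti-diagonals
                         for d in range(n)])
        components.append(comp)
    return np.array(components)

# Hypothetical series: trend + annual cycle + noise (stand-in for an ozone record)
t = np.arange(600)
series = (0.01 * t + np.sin(2 * np.pi * t / 120)
          + np.random.default_rng(2).normal(0, 0.3, 600))
comps = ssa_decompose(series, window=120, n_components=4)
print("variance captured by first 4 components:",
      round(comps.sum(axis=0).var() / series.var(), 2))
```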

Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis

Procedia PDF Downloads 87
941 Study of Pipes Scaling of Purified Wastewater Intended for the Irrigation of Agadir Golf Grass

Authors: A. Driouiche, S. Mohareb, A. Hadfi

Abstract:

In Morocco's Agadir region, the reuse of treated wastewater for the irrigation of green spaces has faced the problem of scaling in the pipes carrying these waters. This research paper studies the scaling phenomenon caused by the treated wastewater from the Mzar sewage treatment plant. These waters are used for the irrigation of the golf turf of the Ocean Golf Resort. Ocean Golf, located about 10 km from the center of the city of Agadir, is one of the most important recreation centers in Morocco; the course is a Belt Collins design with 27 holes and is quite open, with deep, challenging bunkers. The formation of solid deposits in the irrigation systems has led to a decrease in their lifetime and, consequently, to head losses and reduced performance. Thus, the sprinklers used in golf turf irrigation become clogged within the first weeks of operation. To study this phenomenon, the wastewater used for irrigation of the golf turf was sampled and analyzed at various points, and samples of the scale formed in the water circuits were characterized. This characterization of the scale was performed by X-ray fluorescence spectrometry, X-ray diffraction (XRD), thermogravimetric analysis (TGA), differential thermal analysis (DTA), and scanning electron microscopy (SEM). The results of the physicochemical analysis of the waters show that they are rich in bicarbonates (653 mg/L), chloride (478 mg/L), nitrate (412 mg/L), sodium (425 mg/L) and calcium (199 mg/L), and their pH is slightly alkaline. The analysis of the scale reveals that it is rich in calcium and phosphorus. It is formed of calcium carbonate (CaCO₃), silica (SiO₂), calcium silicate (Ca₂SiO₄), hydroxylapatite (Ca₁₀P₆O₂₆), calcium carbonate phosphate (Ca₁₀(PO₄)₆CO₃) and calcium magnesium silicate (Ca₅MgSi₃O₁₂).

Keywords: Agadir, irrigation, scaling water, wastewater

Procedia PDF Downloads 103
940 Design and Development of Power Sources for Plasma Actuators to Control Flow Separation

Authors: Himanshu J. Bahirat, Apoorva S. Janawlekar

Abstract:

Plasma actuators are essential for aerodynamic flow separation control due to their lack of mechanical parts, light weight, and high response frequency, and they have numerous applications in hypersonic or supersonic aircraft. The working of these actuators is based on the formation of a low-temperature plasma between a pair of parallel electrodes by the application of a high-voltage AC signal across the electrodes, after which the air molecules surrounding the electrodes are ionized and accelerated through the electric field. High-frequency operation is required in dielectric barrier discharges to ensure plasma stability. To carry out flow separation control in a hypersonic flow, the optimal design and construction of a power supply to generate dielectric barrier discharges is carried out in this paper. The paper aims to construct a simplified circuit topology to emulate the dielectric barrier discharge and study its frequency response. The power supply can generate high-voltage pulses up to 20 kV at repetition frequencies of 20-50 kHz with an input power of 500 W. It has been designed to be short-circuit proof and can endure variable plasma load conditions. Its general outline is to charge a capacitor through a half-bridge converter and then discharge it through a step-up transformer at high frequency in order to generate the high-voltage pulses. After simulating the circuit, the PCB design and, eventually, lab tests are carried out to study its effectiveness in controlling flow separation.
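
A back-of-the-envelope energy balance helps relate the quoted figures: if each pulse delivers E = ½CV² and pulses repeat at frequency f, the average power is P = E·f. The sketch below applies this to the 500 W / 20 kV / 20-50 kHz figures above, referred to the secondary side and ignoring converter and transformer losses; in the actual design the capacitor is charged on the primary side through the half-bridge, so this is only an order-of-magnitude check, not the paper's sizing procedure.

```python
def storage_capacitance(avg_power_W, peak_voltage_V, rep_rate_Hz):
    """Capacitance such that the energy delivered per pulse, E = 0.5*C*V^2,
    matches the average input power at the chosen repetition rate:
    P = E * f  ->  C = 2 * P / (V^2 * f)."""
    return 2.0 * avg_power_W / (peak_voltage_V ** 2 * rep_rate_Hz)

# Figures quoted in the abstract: 500 W input, 20 kV pulses, 20-50 kHz repetition
for f in (20e3, 50e3):
    c = storage_capacitance(500.0, 20e3, f)
    print(f"f = {f/1e3:.0f} kHz -> C ~ {c*1e12:.0f} pF "
          f"(referred to the 20 kV secondary side, losses neglected)")
```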

Keywords: aircraft propulsion, dielectric barrier discharge, flow separation control, power source

Procedia PDF Downloads 104
939 Cross Professional Team-Assisted Teaching Effectiveness

Authors: Shan-Yu Hsu, Hsin-Shu Huang

Abstract:

The main purpose of this teaching research is to design an interdisciplinary team-assisted teaching method for trainees and interns and to review the effectiveness of this method on trainees' understanding of peritoneal dialysis. The subjects are fifth- and sixth-year trainees at a medical center's medical school. The teaching methods include media teaching, demonstration of technical operations, face-to-face communication with patients, special case discussions, and field visits to the peritoneal dialysis room. Learning effectiveness was evaluated before and after the intervention, together with oral assessment. Statistical analysis was performed using the SPSS paired-sample t-test to determine whether there is a difference in peritoneal dialysis professional cognition before and after the teaching intervention. Descriptive statistics show that the average score of the pre-test is 74.44 with a standard deviation of 9.34, and the average score of the post-test is 95.56 with a standard deviation of 5.06. The paired-sample t-test gives a p-value of 0.006, showing a significant difference in the peritoneal dialysis professional cognition test before and after the intervention. The interdisciplinary team-assisted teaching method helps trainees and interns improve their professional awareness of peritoneal dialysis, and trainee physicians gave positive feedback on the inter-professional team-assisted teaching method. This teaching research finds that cross-professional team-assisted teaching methods can support the clinical ability development of trainees and interns and assist clinical teaching guidance.
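
For readers unfamiliar with the test, the sketch below reproduces a paired-sample t-test in Python. The nine pre/post scores are hypothetical values chosen only so that their means match the reported 74.44 and 95.56; the true sample size, individual scores and resulting p-value of the study are not given in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for 9 trainees (only the summary statistics
# -- means 74.44 / 95.56, SDs 9.34 / 5.06 -- are reported in the abstract)
pre  = np.array([60, 70, 70, 75, 75, 75, 80, 80, 85], float)
post = np.array([90, 90, 95, 95, 95, 100, 100, 95, 100], float)

t_stat, p_value = stats.ttest_rel(pre, post)   # paired-sample t-test
print(f"pre mean = {pre.mean():.2f}, post mean = {post.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant gain
```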

Keywords: monitor quality, patient safety, health promotion objective, cross-professional team-assisted teaching methods

Procedia PDF Downloads 123
938 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4x4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4x4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR) in order to minimize the error function, defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have been subsequently encoded and handled to generate chirps at a target rate of about two chirps per 4x4 pixel block and then submitted to a transmission multiplexing operation in the time-frequency domain.
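
To illustrate the block-approximation idea with one of the evaluated methods (SVD), the sketch below truncates a hypothetical 4x4 block of gray levels to rank 1 or 2 and reports the resulting PSNR; the block values and ranks are illustrative only, and the chirp generation and multiplexing stages are not shown.

```python
import numpy as np

def approximate_block(block, rank):
    """Approximate a 4x4 pixel block by a truncated SVD, keeping `rank`
    singular triplets (4*rank + 4*rank + rank numbers instead of 16 pixels)."""
    u, s, vt = np.linalg.svd(block.astype(float), full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

def psnr(original, approx, peak=255.0):
    """Peak signal-to-noise ratio between the original and approximated block."""
    mse = np.mean((original.astype(float) - approx) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical 4x4 block of 8-bit gray levels
block = np.array([[ 52,  55,  61,  66],
                  [ 63,  59,  55,  90],
                  [ 62,  59,  68, 113],
                  [ 63,  58,  71, 122]], dtype=np.uint8)
for r in (1, 2):
    approx = approximate_block(block, r)
    print(f"rank {r}: PSNR = {psnr(block, approx):.1f} dB")
```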

Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation

Procedia PDF Downloads 398
937 Toxicological Validation during the Development of New Catalytic Systems Using Air/Liquid Interface Cell Exposure

Authors: M. Al Zallouha, Y. Landkocz, J. Brunet, R. Cousin, J. M. Halket, E. Genty, P. J. Martin, A. Verdin, D. Courcot, S. Siffert, P. Shirali, S. Billet

Abstract:

Toluene is one of the Volatile Organic Compounds (VOCs) most used in industry. Amongst VOCs, Benzene, Toluene, Ethylbenzene and Xylenes (BTEX) emitted into the atmosphere have a major and direct impact on human health. It is, therefore, necessary to minimize emissions directly at the source. Catalytic oxidation is an industrial technique which provides remediation efficiency in the treatment of these organic compounds. However, during operation, the catalysts can release some compounds, called byproducts, that are more toxic than the original VOCs. The catalytic oxidation of a gas stream containing 1000 ppm of toluene over Pd/α-Al2O3 can release a few ppm of benzene, according to the operating temperature of the catalyst. The development of new catalysts must, therefore, include chemical and toxicological validation phases. In this project, A549 human lung cells were exposed at the air/liquid interface (Vitrocell®) to gas mixtures derived from the oxidation of toluene over a Pd/α-Al2O3 catalyst. Both exposure concentrations (i.e., 10 and 100% of the catalytic emission) resulted in increased gene expression of xenobiotic-metabolising enzymes (XMEs) (CYP2E1, CYP2S1, CYP1A1, CYP1B1, EPHX1, and NQO1). Some of these XMEs are known to be induced by polycyclic organic compounds that are conventionally not screened for during the development of catalysts for VOC degradation. The increase in gene expression suggests the presence of undetected compounds whose toxicity must be assessed before the adoption of a new catalyst. This enhances the relevance of toxicological validation of such systems before scaling up and marketing.

Keywords: BTEX toxicity, air/liquid interface cell exposure, Vitrocell®, catalytic oxidation

Procedia PDF Downloads 392
936 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Earlier regarded as a support function in an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data that would have been unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value from its day-to-day operations. Proper synchronization between the operational data system and the information technology system is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have the functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and an evaluation of the data for a given oil production asset using an application tool, SAS. The reason for using SAS for our analysis is that SAS provides an analytics-based framework to improve uptimes, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen, and root causes determined in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 329
935 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the volatility of markets and the reduction of product lifecycles, manufacturing companies from high-wage countries are nowadays faced with the challenge of placing more innovative products on the market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to differentiate successfully from market competitors. One potential approach to addressing these challenges is provided by agile values and principles. These agile values and principles have already proved successful within software development projects in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs and development time and are therefore used within most software development projects. Motivated by this success in the software industry, manufacturing companies have since tried to transfer agile mechanisms of action to the development of hardware products. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has yet been developed for hardware development, due to the differing constraints of the two domains. For this reason, this paper focusses on the design of agile product development processes by transferring mechanisms of action used in agile software development towards product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and then symbiotically composing the elements of both systems with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of system theory. By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities and artefacts are identified within a target, action and object system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified into a superior strategy level, a system level comprising characteristic, domain-independent activities and their cause-effect relationships, and an activity-based element level. Within partial model three, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this purpose, the target, action and object system of product development is compared with the strategy, system and element levels of the agile mechanisms of action using graph theory. Furthermore, the necessity of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a final step. By defining iteration-differentiating characteristics and their interdependencies, a logic is developed for configuring the activities, their form of execution and the relevant artefacts for each specific iteration. Furthermore, characteristic types of iteration for agile product development are identified.

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 183
934 Triple Intercell Bar for Electrometallurgical Processes: A Design to Increase PV Energy Utilization

Authors: Eduardo P. Wiechmann, Jorge A. Henríquez, Pablo E. Aqueveque, Luis G. Muñoz

Abstract:

PV energy prices are declining rapidly. To take advantage of those prices and to lower the carbon footprint, operational practices must be modified; this undoubtedly challenges the electrowinning practice of operating at constant current throughout the day. This work presents a technology that contributes modulation capacity to the electrode current distribution system, so that the daytime DC current can be raised and lowered at night. The system is a triple intercell bar that operates in current-source mode. The design is a capping-board-free dogbone-type bar that ensures operation free of short circuits, hot-swappable repairs and improved current balance. This current-source system eliminates the resetting currents circulating in equipotential bars. Twin auxiliary connectors are added to the main connectors, providing secure current paths that bypass faulty or impaired contacts. All conductive elements of the system are positioned over a baseboard, offering a large heat-sink area to the facility ventilation, and the system operates at a lower temperature than a conventional busbar. Of these attributes, the cathode current balance property stands out and is paramount for day/night modulation and the use of photovoltaic energy. A design based on a 3D finite element method model predicting electrical and thermal performance under various industrial scenarios is presented. Preliminary results obtained in an electrowinning facility with industrial prototypes are included.

Keywords: electrowinning, intercell bars, PV energy, current modulation

Procedia PDF Downloads 137
933 DWDM Network Implementation in the Honduran Telecommunications Company "Hondutel"

Authors: Tannia Vindel, Carlos Mejia, Damaris Araujo, Carlos Velasquez, Darlin Trejo

Abstract:

DWDM (Dense Wavelength Division Multiplexing) is in constant growth around the world, driven by consumer demand. From its inception arises the need for a system that enables the communications of an entire nation to be expanded, improving the computing trends of its society according to its customs and geographical location. The Honduran Company of Telecommunications (HONDUTEL) provides internet services and data transport with PDH and SDH technology, which in the Republic of Honduras, C.A., represents a viable option for the consumer in terms of purchase value and ease of acquisition; however, it lacks efficiency in terms of technological advancement and represents an obstacle that limits long-term socio-economic development in comparison with other countries in the region, as well as the ability to establish competition between the telecommunications companies engaged in this field. For that reason, we propose to adopt a technological trend already implemented in Europe and apply it in our country, one that allows broadband data transfer, namely DWDM; in this way we will have a stable, high-quality service that will allow us to compete in this globalized world and replace the current system with one that provides better service and keeps the network at the forefront. Once implemented, DWDM builds upon existing resources, such as the equipment already in use, and gives life to a new stage that provides a business image for the Republic of Honduras, C.A., as a nation, ensuring data transport and broadband internet in a meaningful way. The benefits go in the first instance to existing customers and to all the public and private institutions in need of such services.

Keywords: demultiplexers, light detectors, multiplexers, optical amplifiers, optical fibers, PDH, SDH

Procedia PDF Downloads 233
932 Anti-Corruption, an Important Challenge for the Construction Industry!

Authors: Ahmed Stifi, Sascha Gentes, Fritz Gehbauer

Abstract:

The construction industry is perhaps one of the oldest industries in the world. Ancient monuments like the Egyptian pyramids, the temples of the Greeks and Romans such as the Parthenon and the Pantheon, robust bridges, old Roman theatres, citadels and many more are the best testament to that. The industry also has a symbiotic relationship with other industries: the heavy engineering industry provides construction machinery, the chemical industry develops innovative construction materials, the finance sector provides funding solutions for complex construction projects, and so on. The construction industry is not only mammoth but also very complex in nature. Because of this complexity, the construction industry is prone to various tribulations which may hamper its growth. A comparative study of this industry with others shows that it is associated with a state of tardiness and delay, especially when we focus on the managerial aspects and the study of the triple constraint (time, cost and scope). While some institutes name the associated complexity as a major reason, others, like lean construction, refer to the wastes produced across the construction process as the prime reason. This paper introduces corruption as one of the prime factors for such delays. To support this, many international reports and studies are available depicting that the construction industry is one of the most corrupt sectors worldwide, and that corruption can take place throughout the project cycle comprising project selection, planning, design, funding, pre-qualification, tendering, execution, operation and maintenance, and even the reconstruction phase. It also happens in many forms, such as bribery, fraud, extortion, collusion, embezzlement and conflict of interest. As a solution to cope with corruption in the construction industry, the paper introduces integrity as a key factor and builds a new integrity framework to develop and implement an integrity management system for construction companies and construction projects.

Keywords: corruption, construction industry, integrity, lean construction

Procedia PDF Downloads 359
931 Outcome Analysis of Surgical and Nonsurgical Treatment on Indicated Operative Chronic Subdural Hematoma: Serial Case in Cipto Mangunkusumo Hospital Indonesia

Authors: Novie Nuraini, Sari Hanifa, Yetty Ramli

Abstract:

Chronic subdural hematoma (cSDH) is a common condition after head trauma. Although the thickness of a cSDH plays an important role in the decision to perform surgery, the thickness threshold is not absolute. In this serial case report, we evaluate three cases of cSDH with an indication for surgery because of neurological deficits and neuroimaging findings of subfalcine herniation of more than 0.5 cm and hematoma thickness of more than 1 cm. In the first case, the patient underwent hematoma evacuation; in the second and third cases, nonsurgical treatment was given because the patient and family refused the operation. Conservative treatment consisted of bed rest and mannitol. Serial radiological evaluation was performed whenever a worsening condition was found, and radiological examination was repeated two weeks after the treatment. In this serial case report, the first and second cases had a good outcome. In the third case, the condition worsened; this patient had comorbid type 2 diabetes mellitus, pneumonia and chronic kidney disease. Conservative treatment such as bed rest, corticosteroids, mannitol or other hyperosmolar agents gives a good outcome in patients without neurological deficits, with small hematomas, and/or without comorbid disease. Hematoma evacuation is the best choice for treating cSDH with neurological deficits; nevertheless, there are conditions in which the surgical procedure cannot be performed. Serial radiological examination is needed after two weeks to evaluate the treatment, or earlier if there is any worsening of the condition.

Keywords: chronic subdural hematoma, traumatic brain injury, surgical treatment, nonsurgical treatment, outcome

Procedia PDF Downloads 310
930 Modeling of Virtual Power Plant

Authors: Muhammad Fanseem E. M., Rama Satya Satish Kumar, Indrajeet Bhausaheb Bhavar, Deepak M.

Abstract:

Keeping the right balance of electricity between the supply and demand sides of the grid is one of the most important objectives of electrical grid operation. Power generation and demand forecasting are the core of power management and generation scheduling. Large, centralized producing units were used in the construction of conventional power systems in the past, and a certain level of balance was possible since generation kept up with the power demand. However, integrating renewable energy sources into power networks has proven to be a difficult challenge due to their intermittent nature. The power imbalance caused by rising demand and peak loads is negatively affecting power quality and dependability. Demand-side management and demand response are among the solutions, keeping generation the same while altering, rescheduling or completely shedding the load. However, shedding or rescheduling the load is not an efficient approach; here lies the significance of virtual power plants. The virtual power plant organically integrates distributed generation, dispatchable load, and distributed energy storage by using complementary control approaches and communication technologies. This eventually increases the utilization rate and financial advantages of distributed energy resources. Most of the literature on virtual power plant models has ignored technical limitations, and modeling has been done in favor of a financial or commercial viewpoint. Therefore, this paper aims to address the modeling intricacies of VPPs and their technical limitations, shedding light on a holistic understanding of this innovative power management approach.

Keywords: cost optimization, distributed energy resources, dynamic modeling, model quality tests, power system modeling

Procedia PDF Downloads 37
929 Optimum Performance of the Gas Turbine Power Plant Using Adaptive Neuro-Fuzzy Inference System and Statistical Analysis

Authors: Thamir K. Ibrahim, M. M. Rahman, Marwah Noori Mohammed

Abstract:

This study deals with the modeling and performance enhancement of a gas-turbine combined cycle power plant. Clean and safe energy is one of the greatest challenges in meeting the requirements of a green environment. These requirements have put an end to the long-standing dominance of the steam turbine (ST) in world power generation, and the gas turbine (GT) will replace it. Therefore, it is necessary to predict the characteristics of the GT system and optimize its operating strategy by developing a simulation system. An integrated model and simulation code for exploring the performance of gas turbine power plants is developed using MATLAB. The performance code for heavy-duty GT and CCGT power plants is validated against the real Baiji GT and MARAFIQ CCGT plants, and the results have been satisfactory. A new correlation was considered for all types of simulation data, whose coefficient of determination (R²) was calculated as 0.9825. Some of the most recently published correlations were checked on the Baiji GT plant, and error analysis was applied. The GT performance was judged by particular parameters selected from the simulation model, and an Adaptive Neuro-Fuzzy Inference System (ANFIS), an advanced optimization technique, was also utilized. The best thermal efficiency and power output attained were about 56% and 345 MW, respectively. Thus, the operating conditions and the ambient temperature strongly influence the overall performance of the GT; the optimum efficiency and power are found at higher turbine inlet temperatures. It can be concluded that the developed models are powerful tools for estimating the overall performance of GT plants.
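
As a small aside on the goodness-of-fit measure quoted above, the sketch below shows how a coefficient of determination can be computed when a correlation is checked against plant data; the measured and predicted power values are hypothetical and are not the study's data.

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot, used to judge
    how well a correlation reproduces the simulated or measured data."""
    y = np.asarray(measured, float)
    f = np.asarray(predicted, float)
    ss_res = np.sum((y - f) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical check of a power-output correlation against plant data (MW)
measured  = [300, 310, 322, 330, 338, 345]
predicted = [302, 308, 320, 333, 336, 344]
print("R^2 =", round(r_squared(measured, predicted), 4))
```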

Keywords: gas turbine, optimization, ANFIS, performance, operating conditions

Procedia PDF Downloads 407