Search results for: drilling operation
965 Quality Approaches for Mass-Produced Fashion: A Study in Malaysian Garment Manufacturing
Authors: N. J. M. Yusof, T. Sabir, J. McLoughlin
Abstract:
The garment manufacturing industry involves sequential processes that are subject to uncontrollable variations. The industry depends on the skill of labour in handling a wide variety of fabrics, accessories and machines, as well as complicated sewing operations. For these reasons, garment manufacturers have created systems to monitor and control product quality regularly by applying quality approaches to minimize variation. The aims of this research were to ascertain the quality approaches deployed by Malaysian garment manufacturers in three key areas: quality systems and tools; quality control and types of inspection; and sampling procedures chosen for garment inspection. The research also aimed to distinguish the quality approaches used by companies that supplied finished garments to domestic and international markets. Feedback from each company's representative was obtained using an online survey comprising five sections and 44 questions on the organizational profile and the quality approaches used in the garment industry. The results revealed that almost all companies had established their own mechanism of process control by conducting a series of quality inspections for daily production, whether these were formally set up or not. Quality inspection was the predominant quality control activity in garment manufacturing, and the level of complexity of these activities was substantially dictated by the customers. AQL-based sampling was utilized by companies dealing with the export market, whilst almost all the companies that concentrated only on the domestic market were comfortable using their own sampling procedures for garment inspection. This research provides an insight into the implementation of quality approaches that were perceived as important and useful in the garment manufacturing sector, which is truly labour-intensive.
Keywords: garment manufacturing, quality approaches, quality control, inspection, Acceptance Quality Limit (AQL), sampling
Procedia PDF Downloads 446
964 Experimental Study of Particle Deposition on Leading Edge of Turbine Blade
Authors: Yang Xiao-Jun, Yu Tian-Hao, Hu Ying-Qi
Abstract:
Foreign objects ingested during aircraft engine operation, impurities in the aircraft fuel, and products of incomplete combustion can produce deposits on the surface of the turbine blades. These deposits reduce not only the turbine's operating efficiency but also the life of the turbine blades. In this work, deposition on the leading edge of the turbine has been simulated in a small open wind tunnel, and the effect of film cooling on particulate deposition was investigated. Based on the analysis, the adhesion mechanism of molten pollutants reaching the turbine surface was simulated by matching the Stokes number, TSP (a dimensionless number characterizing particle phase transition) and Biot number of the test facility to those of the real engine. The thickness distribution and growth trend of the deposits were observed with a high-power microscope and an infrared camera under different main-flow temperatures, solidification temperatures of the particulate matter, and blowing ratios. The experimental results for leading-edge particulate deposition demonstrate that the deposit thickness increases with time until a quasi-stable thickness is reached, and show a striking effect of the blowing ratio on the deposition. Under different blowing ratios, there is a large difference in the thickness distribution of the deposits, and deposition is minimal at a specific blowing ratio. In addition, the main-flow temperature and the solidification temperature of the particulate have a great influence on the deposition.
Keywords: deposition, experiment, film cooling, leading edge, paraffin particles
Procedia PDF Downloads 148
963 Retrofitting Residential Buildings for Energy Efficiency: An Experimental Investigation
Authors: Naseer M. A.
Abstract:
Buildings are major consumers of energy in both their construction and operation, accounting for 40% of the world's energy use. It is estimated that 40-60% of this goes into conditioning the indoor environment. In India, as in many other countries, residential buildings have a major share (more than 50%) of the building sector, and of these, single-family units take a mammoth share. Single-family dwelling units in urban and fringe areas are built in two storeys to minimize the building footprint on small land parcels, and quite often the bedrooms are located on the first floor. Modern buildings are provided with reinforced concrete (RC) roofs that absorb heat throughout the day and radiate it into the interiors during the night. As a result, rooms that are occupied at night, such as bedrooms, have uncomfortable indoor conditions. This has led to the use of active systems like air-conditioners and air coolers, thereby increasing energy use. An investigation conducted by monitoring thermal comfort conditions in residential buildings with RC roofs proved that the indoors are indeed uncomfortable during the night hours. A sustainable solution to improve the thermal performance of RC roofs was developed through an experimental study that continuously monitored thermal comfort parameters during summer (the most uncomfortable period in a temperate climate). The study, conducted in southern peninsular India, proves that retrofitting existing residential buildings can provide a sustainable solution for abating the ever-increasing energy demand, especially since these buildings, built for a normal life span of 40 years, would otherwise continue to consume energy for the rest of their useful life.
Keywords: energy efficiency, thermal comfort, retrofitting, residential buildings
Procedia PDF Downloads 253
962 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications
Authors: Farhad Salek, Shahaboddin Resalati
Abstract:
The production of electric vehicles (EVs) featuring lithium-ion battery technology has substantially escalated over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of electric vehicle (EV) batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution for such concerns involves extending the operational lifespan of electric vehicle (EV) batteries through their utilization in stationary energy storage systems during secondary applications. Such adoptions, however, require addressing the safety concerns associated with batteries' knee points and thermal runaways. This paper develops an accurate mathematical model representative of the second-life battery packs from a cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the life span of the batteries in various operating conditions. The methodology and the algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
Keywords: second life battery, electric vehicles, degradation, neural network
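For illustration, a minimal Python sketch of the equivalent-circuit-model idea described in this abstract is given below. It uses a first-order Thevenin ECM with assumed parameter values and a linear OCV curve; the actual parameters, degradation model and neural network forecasting of the study are not reproduced.

```python
import numpy as np

# Minimal sketch of a first-order Thevenin equivalent circuit model (ECM).
# Parameter values and the linear OCV(SOC) curve are illustrative assumptions.
def simulate_ecm(current, dt, capacity_ah=50.0, r0=2e-3, r1=1e-3, c1=2e3, soc0=0.9):
    soc, v_rc = soc0, 0.0
    v_out = []
    for i in current:                              # positive current = discharge (A)
        soc -= i * dt / (capacity_ah * 3600.0)     # coulomb counting
        v_rc += dt * (-v_rc / (r1 * c1) + i / c1)  # RC branch dynamics (Euler step)
        ocv = 3.0 + 1.2 * soc                      # assumed linear open-circuit voltage
        v_out.append(ocv - i * r0 - v_rc)          # terminal voltage
    return np.array(v_out), soc

profile = np.full(3600, 25.0)                      # 1 h constant 25 A discharge, 1 s steps
voltage, soc_end = simulate_ecm(profile, dt=1.0)
print(f"final SOC = {soc_end:.3f}, final voltage = {voltage[-1]:.3f} V")
```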
Procedia PDF Downloads 66
961 Efficiency and Scale Elasticity in Network Data Envelopment Analysis: An Application to International Tourist Hotels in Taiwan
Authors: Li-Hsueh Chen
Abstract:
Efficient operation is more and more important for hotel managers. Unlike the manufacturing industry, hotels cannot store their products. In addition, many hotels provide room service and food and beverage service simultaneously. When the efficiencies of hotels are evaluated, the internal structure should be considered. Hence, based on the operational characteristics of hotels, this study proposes a DEA model to simultaneously assess the efficiencies of the room production division, food and beverage production division, room service division and food and beverage service division. However, not only the enhancement of efficiency but also the adjustment of scale can improve performance. In terms of the adjustment of scale, scale elasticity or returns to scale can help managers make decisions concerning expansion or contraction. In order to construct a reasonable approach to measure the efficiencies and scale elasticities of hotels, this study builds an alternative variable-returns-to-scale-based two-stage network DEA model, combining parallel and series structures, to explore the scale elasticities of the whole system, the room production division, the food and beverage production division, the room service division and the food and beverage service division, based on data from the international tourist hotel industry in Taiwan. The results may provide valuable information on operational performance and scale for managers and decision makers.
Keywords: efficiency, scale elasticity, network data envelopment analysis, international tourist hotel
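As a much-simplified, hypothetical building block of such models, the Python sketch below computes single-stage, output-oriented variable-returns-to-scale DEA efficiency scores for a few fictitious hotels with a linear program; the two-stage network structure, the division-level decomposition and the scale elasticity measures of the proposed model are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative inputs (staff, rooms) and a single output (revenue) for 4 hypothetical hotels.
X = np.array([[50, 120], [80, 200], [65, 150], [90, 260]], dtype=float)
Y = np.array([[300.], [520.], [380.], [560.]])

def bcc_output_efficiency(o):
    """Output-oriented BCC (VRS) DEA score for hotel o: maximise phi."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(1 + n); c[0] = -1.0                          # variables: [phi, lambda_1..n]
    A_ub, b_ub = [], []
    for i in range(m):                                        # sum(lambda*x) <= x_o
        A_ub.append(np.concatenate(([0.0], X[:, i]))); b_ub.append(X[o, i])
    for r in range(s):                                        # phi*y_o - sum(lambda*y) <= 0
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r]))); b_ub.append(0.0)
    A_eq = [np.concatenate(([0.0], np.ones(n)))]              # VRS: sum(lambda) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (1 + n))
    return 1.0 / res.x[0]                                     # efficiency in (0, 1]

for o in range(X.shape[0]):
    print(f"hotel {o + 1}: efficiency = {bcc_output_efficiency(o):.3f}")
```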
Procedia PDF Downloads 225
960 Assessing the Actions of the Farm Managers to Execute Field Operations at Opportune Times
Authors: G. Edwards, N. Dybro, L. J. Munkholm, C. G. Sørensen
Abstract:
Planning agricultural operations requires an understanding of when fields are ready for operations. However, determining a field's readiness is a difficult process that can involve large amounts of data and an experienced farm manager. A consequence of this is that operations are often executed when fields are unready, or partially unready, which can compromise results, incurring environmental impacts, decreased yield and increased operational costs. In order to assess the timeliness of operations' execution, a new scheme is introduced to quantify the aptitude of farm managers to plan operations. Two criteria are presented by which the execution of operations can be evaluated as to their exploitation of a field's readiness window. A dataset containing the execution dates of spring and autumn operations on 93 fields in Iowa, USA, over two years was considered as an example and used to demonstrate how operations' executions can be evaluated. The execution dates were compared with simulated data to gain a measure of how disparate the actual execution was from the ideal execution. The presented tool is able to evaluate the spring operations better than the autumn operations, as the data required to correctly parameterise the crop model were lacking. Further work is needed on the underlying models of the decision support tool in order for its situational knowledge to emulate reality more consistently. However, the assessment methods and evaluation criteria presented offer a standard by which operations' execution proficiency can be quantified and could be used to identify farm managers who require decisional support when planning operations, or as a means of incentivising and promoting the use of sustainable farming practices.
Keywords: operation management, field readiness, sustainable farming, workability
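A toy Python sketch of one possible timeliness criterion is given below: it simply counts how many days an execution date falls outside a simulated readiness window. Field names, dates and the penalty definition are hypothetical and only illustrate the kind of comparison described in the abstract.

```python
from datetime import date

# Illustrative timeliness criterion: days by which the actual execution date
# misses the simulated readiness window of a field. All values are hypothetical.
def timeliness_penalty(executed, window_start, window_end):
    if window_start <= executed <= window_end:
        return 0                                    # executed while the field was ready
    return min(abs((executed - window_start).days),
               abs((executed - window_end).days))   # days outside the window

fields = {
    "field_01": (date(2013, 4, 28), date(2013, 4, 20), date(2013, 5, 2)),
    "field_02": (date(2013, 5, 9),  date(2013, 4, 22), date(2013, 5, 4)),
}
for name, (executed, start, end) in fields.items():
    print(name, timeliness_penalty(executed, start, end), "day(s) outside window")
```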
Procedia PDF Downloads 390
959 Postoperative Pain Management: Efficacy of Caudal Tramadol in Pediatric Lower Abdominal Surgery: A Randomized Clinical Study
Authors: Reza Farahmand Rad, Farnad Imani, Azadeh Emami, Reza Salehi, Ali Reza Ghavamy, Ali Nima Shariat
Abstract:
Background: Regional techniques, including caudal block, are among the methods of pain control after pediatric surgical procedures, despite their limitations. Objectives: In this study, the pain score and complications of caudal tramadol were evaluated in pediatric patients following lower abdominal surgery. Methods: In this study, 46 children aged 3 to 10 years were allocated into two equal groups (R and TR) for caudal analgesia after lower abdominal surgery. The injectate contained 0.2% ropivacaine 1 mL/kg in the R group (control group) and tramadol (2 mg/kg) plus ropivacaine in the TR group. The pain score, duration of pain relief, amount of paracetamol consumption, hemodynamic alterations, and possible complications at specific times (1, 2, and 6 hours) were evaluated in both groups. Results: No considerable difference was observed in the pain score between the groups in the first and second hours (P > 0.05). However, in the sixth hour, the TR group had a significantly lower pain score than the R group (P < 0.05). Compared to the R group, the TR group had a longer period of analgesia and lower consumption of analgesic drugs (P < 0.05). Heart rate and blood pressure differences were not significant between the two groups (P > 0.05). Similarly, the duration of operation and recovery time were not remarkably different between the two groups (P > 0.05). Complications likewise showed no apparent differences between the two groups (P > 0.05). Conclusions: In this study, the addition of tramadol to caudal ropivacaine in pediatric lower abdominal surgery promoted pain relief without complications.
Keywords: tramadol, ropivacaine, caudal block, pediatric, lower abdominal surgery, postoperative pain
Procedia PDF Downloads 16
958 The Femoral Eversion Endarterectomy Technique with Transection: Safety and Efficacy
Authors: Hansraj Riteesh Bookun, Emily Maree Stevens, Jarryd Leigh Solomon, Anthony Chan
Abstract:
Objective: This was a retrospective cross-sectional study evaluating the safety and efficacy of femoral endarterectomy using the eversion technique with transection, as opposed to the conventional endarterectomy technique with either vein or synthetic patch arterioplasty. Methods: Between 2010 and mid-2017, 19 patients with a mean age of 75.4 years underwent eversion femoral endarterectomy with transection by a single surgeon. There were 13 males (68.4%), and the comorbid burden was as follows: ischaemic heart disease (53.3%), diabetes (43.8%), stage 4 kidney impairment (13.3%) and current or ex-smoking (73.3%). The indications were claudication (45.5%), rest pain (18.2%) and tissue loss (36.3%). Results: The technical success rate was 100%. One patient required a blood transfusion following intraoperative blood loss. Two patients required blood transfusions for low postoperative haemoglobin concentrations – one of them in the context of myelodysplastic syndrome. There were no unexpected returns to theatre. The mean length of stay was 11.5 days, with two patients having inpatient stays of 36 and 50 days respectively due to the need for rehabilitation. There was one death unrelated to the operation. Conclusion: The eversion technique with transection is safe and effective, with low complication rates and a normally expected length of stay. It offers the advantage of not requiring a synthetic patch. This technique features minimal extraneous dissection, as there is no need to harvest vein for a patch. Additionally, future endovascular interventions can be performed by puncturing the native vessel. There is no change to the femoral bifurcation anatomy after this technique. We posit that this is a useful adjunct to the surgeon's panoply of vascular surgical techniques.
Keywords: endarterectomy, eversion, femoral, vascular
Procedia PDF Downloads 202
957 Exploring Polar Syntactic Effects of Verbal Extensions in Basà Language
Authors: Imoh Philip
Abstract:
This work investigates four verbal extensions, two in each set, which produce two opposite effects on the valency of verbs in the Basà language. Basà is an indigenous language spoken in Kogi, Nasarawa, Benue and Niger states and in all the Federal Capital Territory (FCT) councils. Crozier & Blench (1992) and Blench & Williamson (1988) classify Basà as belonging to Proto-Kru, under the sub-phylum Western Kru. The study examines the effects of such morphosyntactic operations in Basà with special focus on 'reflexives' and 'reciprocals' versus 'causativization' and 'applicativization'; both sets are characterized by polar syntactic processes that either decrease or increase the verb's valency by one argument vis-à-vis the basic number of arguments, yet by similar morphological processes. In addition to my intuitions as a native speaker of Basà, the data elicited for this work include discourse observation and staged and elicited spoken data from fluent native speakers. The paper argues that affixes attached to the verb root either decrease valency, deriving an intransitive verb from a transitive one or a transitive verb from a bi-/ditransitive verb, or increase the verb's valency, deriving a bitransitive verb from a transitive verb or a transitive verb from an intransitive one. Where the operation increases the verb's valency, it triggers a transformation of arguments in the derived structure; in this case, the applied arguments displace the inherent ones. This investigation can stimulate further study of other transformations, syntactic or morphosyntactic, in Basà and can also be replicated in other African and non-African languages.
Keywords: verbal extension, valency, reflexive, reciprocal, causativization, applicativization, Basà
Procedia PDF Downloads 205
956 Scheduling Method for Electric Heater in HEMS considering User's Comfort
Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim
Abstract:
The Home Energy Management System (HEMS), which enables residential consumers to contribute to demand response, has been attracting attention in recent years. An aim of HEMS is to minimize the consumers' electricity cost by controlling the use of their appliances according to the electricity price. The use of appliances in HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user's usage pattern of appliances should be modeled according to the external conditions, and the resulting usage pattern is related to the user's comfort in using each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, a usage range for each appliance can be obtained that satisfies an appropriate level of user comfort under the external conditions expected for the next day. Within this usage range, optimal scheduling of appliances is conducted so as to minimize the electricity cost while considering the user's comfort. Among home appliances, the electric heater (EH) is a representative appliance affected by the external temperature. In this paper, an optimal scheduling algorithm for the EH is addressed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user's comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH, and it also shows the impact of the comfort level on the scheduling result.
Keywords: load scheduling, usage pattern, user's comfort, copula function, branch and bound, electric heater
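The following toy Python sketch illustrates the scheduling idea in a highly simplified form: it selects the heater-on hours that minimize cost subject to a minimum number of heating hours standing in for the comfort requirement. The prices, the power rating and the brute-force enumeration (in place of branch and bound and the copula-based usage range) are assumptions for illustration only.

```python
import itertools

# Toy EH scheduling: choose on/off hours to minimise cost while meeting a
# comfort constraint (minimum heating hours). All numbers are assumed values.
price = [0.12, 0.10, 0.09, 0.08, 0.15, 0.22, 0.25, 0.18]   # $/kWh per hour
required_hours = 3                                          # from the comfort usage range
eh_power_kw = 2.0

best_cost, best_schedule = float("inf"), None
for schedule in itertools.product([0, 1], repeat=len(price)):
    if sum(schedule) < required_hours:
        continue                                            # comfort not satisfied
    cost = sum(on * p * eh_power_kw for on, p in zip(schedule, price))
    if cost < best_cost:
        best_cost, best_schedule = cost, schedule

print("schedule:", best_schedule, "cost:", round(best_cost, 3), "$")
```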
Procedia PDF Downloads 587
955 Solving the Economic Load Dispatch Problem Using Differential Evolution
Authors: Alaa Sheta
Abstract:
Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mix of power unit outputs for all members of the power system network such that the total fuel cost is minimized while the operational requirements and limits are satisfied across the entire dispatch horizon. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method but, like other gradient-based methods, it may become trapped in local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator system test cases. The obtained results are compared to existing results based on QP, GAs and PSO. The results show that Differential Evolution is superior in obtaining a combination of power outputs that fulfils the problem constraints and minimizes the total fuel cost. DE was found to be fast in converging to the optimal generation loads and capable of handling the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission and maximize the reliability of the power provided to the customers.
Keywords: economic load dispatch, power systems, optimization, differential evolution
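A minimal DE/rand/1/bin sketch for a small ELD instance is given below in Python; the cost coefficients, generator limits and demand are illustrative values rather than the three- and six-generator test cases of the paper, and transmission losses are ignored.

```python
import numpy as np

# Quadratic fuel cost: a*P^2 + b*P + c for each of 3 generators (assumed coefficients).
a = np.array([0.008, 0.009, 0.007])      # $/MW^2
b = np.array([7.0, 6.3, 6.8])            # $/MW
c = np.array([200.0, 180.0, 140.0])      # $
pmin, pmax = np.array([10., 10., 10.]), np.array([85., 80., 70.])
demand = 150.0                           # MW

def cost(p):
    penalty = 1e4 * abs(p.sum() - demand)          # power balance handled as a penalty
    return np.sum(a * p**2 + b * p + c) + penalty

rng = np.random.default_rng(0)
pop_size, dim, scale, cr = 30, 3, 0.7, 0.9
pop = rng.uniform(pmin, pmax, (pop_size, dim))
fit = np.array([cost(x) for x in pop])

for _ in range(500):
    for i in range(pop_size):
        r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        mutant = np.clip(pop[r1] + scale * (pop[r2] - pop[r3]), pmin, pmax)  # mutation
        cross = rng.random(dim) < cr
        cross[rng.integers(dim)] = True                                     # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        if (trial_fit := cost(trial)) < fit[i]:                             # greedy selection
            pop[i], fit[i] = trial, trial_fit

best = pop[fit.argmin()]
print("dispatch (MW):", best.round(2), "total:", best.sum().round(2), "cost:", fit.min().round(2))
```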
Procedia PDF Downloads 284
954 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science
Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier
Abstract:
Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics, while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series exhibit fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data-driven, making it possible to obtain well-behaved signal components without any prior assumptions about the input data. Among the most popular time series decomposition techniques, and the most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series will be discussed and compared.
Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis
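As an illustration of one of the compared techniques, the following Python sketch implements basic singular spectrum analysis (trajectory-matrix embedding, SVD, and diagonal averaging) on a synthetic series standing in for an ozone or rainfall record; the window length and the number of retained components are arbitrary choices, not those used in the study.

```python
import numpy as np

# Basic SSA: embed the series in a trajectory matrix, take its SVD, and
# reconstruct the leading components by diagonal averaging.
def ssa(series, window, n_components):
    n = len(series)
    k = n - window + 1
    traj = np.column_stack([series[i:i + window] for i in range(k)])   # trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    recon = np.zeros(n)
    counts = np.zeros(n)
    for comp in range(n_components):
        elem = s[comp] * np.outer(u[:, comp], vt[comp])                # rank-1 component
        for i in range(window):
            recon[i:i + k] += elem[i]                                  # anti-diagonal sums
            counts[i:i + k] += 1
    return recon / counts                                              # diagonal averaging

t = np.arange(400)
signal = np.sin(2 * np.pi * t / 50) + 0.1 * t / 100                    # cycle + weak trend
noisy = signal + 0.3 * np.random.default_rng(1).normal(size=t.size)
trend_plus_cycle = ssa(noisy, window=100, n_components=3)
print("RMS error vs clean signal:", np.sqrt(np.mean((trend_plus_cycle - signal) ** 2)))
```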
Procedia PDF Downloads 120
953 Study of Pipes Scaling of Purified Wastewater Intended for the Irrigation of Agadir Golf Grass
Authors: A. Driouiche, S. Mohareb, A. Hadfi
Abstract:
In Morocco's Agadir region, the reuse of treated wastewater for the irrigation of green spaces has faced the problem of scaling in the pipes carrying these waters. This research paper studies the scaling phenomenon caused by the treated wastewater from the Mzar sewage treatment plant. These waters are used for the irrigation of golf turf at the Ocean Golf Resort. Ocean Golf, located about 10 km from the centre of the city of Agadir, is one of the most important recreation centres in Morocco. The course is a Belt Collins design with 27 holes and is quite open, with deep, challenging bunkers. The formation of solid deposits in the irrigation systems has led to a decrease in their lifetime and, consequently, to head losses and reduced performance. Thus, the sprinklers used in golf turf irrigation become clogged within the first weeks of operation. To study this phenomenon, the wastewater used for irrigation of the golf turf was sampled and analyzed at various points, and samples of the scale formed in the water circuits were characterized. This characterization of the scale was performed by X-ray fluorescence spectrometry, X-ray diffraction (XRD), thermogravimetric analysis (TGA), differential thermal analysis (DTA), and scanning electron microscopy (SEM). The results of the physicochemical analysis of the waters show that they are rich in bicarbonates (653 mg/L), chloride (478 mg/L), nitrate (412 mg/L), sodium (425 mg/L) and calcium (199 mg/L). Their pH is slightly alkaline. The analysis of the scale reveals that it is rich in calcium and phosphorus. It is formed of calcium carbonate (CaCO₃), silica (SiO₂), calcium silicate (Ca₂SiO₄), hydroxylapatite (Ca₁₀P₆O₂₆), calcium carbonate phosphate (Ca₁₀(PO₄)₆CO₃) and calcium magnesium silicate (Ca₅MgSi₃O₁₂).
Keywords: Agadir, irrigation, scaling water, wastewater
Procedia PDF Downloads 123
952 Design and Development of Power Sources for Plasma Actuators to Control Flow Separation
Authors: Himanshu J. Bahirat, Apoorva S. Janawlekar
Abstract:
Plasma actuators are attractive for aerodynamic flow separation control due to their lack of mechanical parts, light weight, and high response frequency, and they have numerous applications in hypersonic and supersonic aircraft. These actuators work by forming a low-temperature plasma between a pair of parallel electrodes through the application of a high-voltage AC signal across the electrodes, after which air molecules surrounding the electrodes are ionized and accelerated through the electric field. High-frequency operation is required in dielectric barrier discharges to ensure plasma stability. To carry out flow separation control in a hypersonic flow, this paper presents the optimal design and construction of a power supply for generating dielectric barrier discharges. A simplified circuit topology is constructed to emulate the dielectric barrier discharge and to study its frequency response. The power supply can generate high-voltage pulses of up to 20 kV at repetition frequencies of 20-50 kHz with an input power of 500 W. The power supply has been designed to be short-circuit proof and can endure variable plasma load conditions. Its general outline is to charge a capacitor through a half-bridge converter and then discharge it through a step-up transformer at high frequency in order to generate high-voltage pulses. After simulating the circuit, the PCB design and, eventually, lab tests are carried out to study its effectiveness in controlling flow separation.
Keywords: aircraft propulsion, dielectric barrier discharge, flow separation control, power source
Procedia PDF Downloads 130
951 Cross Professional Team-Assisted Teaching Effectiveness
Authors: Shan-Yu Hsu, Hsin-Shu Huang
Abstract:
The main purpose of this teaching research is to design an interdisciplinary team-assisted teaching method for trainees and interns and to review the effectiveness of this teaching method on trainees' understanding of peritoneal dialysis. The study subjects are fifth- and sixth-year trainees at a medical centre's medical school. The teaching methods include media teaching, demonstration of technical operation, face-to-face communication with patients, special case discussions, and field visits to the peritoneal dialysis room. Learning effectiveness was evaluated before and after the intervention, together with verbal feedback. Statistical analysis was performed using the SPSS paired-sample t-test to analyze whether there is a difference in peritoneal dialysis professional cognition before and after the teaching intervention. Descriptive statistics show that the average pre-test score is 74.44 (standard deviation 9.34) and the average post-test score is 95.56 (standard deviation 5.06). The paired-sample t-test yields a p-value of 0.006, showing a significant difference in peritoneal dialysis professional cognition before and after the intervention. The interdisciplinary team-assisted teaching method helps trainees and interns improve their professional awareness of peritoneal dialysis. At the same time, trainee physicians give positive feedback on the inter-professional team-assisted teaching method. This study finds that cross-professional team-assisted teaching methods can support clinical teaching guidance in the clinical-ability development of trainees and interns.
Keywords: monitor quality, patient safety, health promotion objective, cross-professional team-assisted teaching methods
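For illustration, the reported pre/post comparison can be reproduced in outline with a paired t-test, as in the Python sketch below; the nine scores are synthetic values chosen only to roughly match the reported means and are not the study data.

```python
import numpy as np
from scipy import stats

# Synthetic pre/post scores chosen so the means approximate 74.44 and 95.56.
pre  = np.array([60, 70, 70, 75, 75, 80, 80, 80, 80], dtype=float)
post = np.array([90, 90, 95, 95, 100, 95, 100, 95, 100], dtype=float)

t_stat, p_value = stats.ttest_rel(pre, post)          # paired-sample t-test
print(f"pre mean = {pre.mean():.2f}, post mean = {post.mean():.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```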
Procedia PDF Downloads 148
950 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of transmitting or storing visual data in the most economical way. This paper explains how images can be encoded for transmission in a multiplexed time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4x4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4x4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are then encoded and used to generate chirps at a target rate of about two chirps per 4x4 pixel block, and finally submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
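The following Python sketch illustrates one of the evaluated options in a simplified form: each 4x4 block is approximated by a truncated SVD and the result is scored with PSNR. The random test image, the retained rank and the omission of the chirp encoding step are assumptions for illustration only.

```python
import numpy as np

# Approximate each 4x4 pixel block by a truncated (rank-2) SVD, then measure PSNR.
def block_svd_approx(img, rank=2, block=4):
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            u, s, vt = np.linalg.svd(img[r:r+block, c:c+block].astype(float))
            s[rank:] = 0.0                              # keep only the leading singular values
            out[r:r+block, c:c+block] = u @ np.diag(s) @ vt
    return out

def psnr(orig, approx, peak=255.0):
    mse = np.mean((orig.astype(float) - approx) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)   # stand-in test image
approx = block_svd_approx(image)
print(f"PSNR = {psnr(image, approx):.2f} dB")
```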
Procedia PDF Downloads 415
949 Toxicological Validation during the Development of New Catalytic Systems Using Air/Liquid Interface Cell Exposure
Authors: M. Al Zallouha, Y. Landkocz, J. Brunet, R. Cousin, J. M. Halket, E. Genty, P. J. Martin, A. Verdin, D. Courcot, S. Siffert, P. Shirali, S. Billet
Abstract:
Toluene is one of the most widely used Volatile Organic Compounds (VOCs) in industry. Amongst VOCs, Benzene, Toluene, Ethylbenzene and Xylenes (BTEX) emitted into the atmosphere have a major and direct impact on human health. It is, therefore, necessary to minimize emissions directly at source. Catalytic oxidation is an industrial technique which provides efficient remediation in the treatment of these organic compounds. However, during operation, the catalysts can release compounds, called byproducts, that are more toxic than the original VOCs. The catalytic oxidation of a gas stream containing 1000 ppm of toluene on Pd/α-Al2O3 can release a few ppm of benzene, depending on the operating temperature of the catalyst. The development of new catalysts must, therefore, include chemical and toxicological validation phases. In this project, A549 human lung cells were exposed at the air/liquid interface (Vitrocell®) to gas mixtures derived from the oxidation of toluene over a Pd/α-Al2O3 catalyst. Both exposure concentrations (i.e. 10 and 100% of the catalytic emission) resulted in increased gene expression of Xenobiotic Metabolising Enzymes (XMEs) (CYP2E1, CYP2S1, CYP1A1, CYP1B1, EPHX1, and NQO1). Some of these XMEs are known to be induced by polycyclic organic compounds that are conventionally not screened for during the development of catalysts for VOC degradation. The increase in gene expression suggests the presence of undetected compounds whose toxicity must be assessed before the adoption of a new catalyst. This enhances the relevance of toxicological validation of such systems before scale-up and marketing.
Keywords: BTEX toxicity, air/liquid interface cell exposure, Vitrocell®, catalytic oxidation
Procedia PDF Downloads 411
948 Predictive Analytics in Oil and Gas Industry
Authors: Suchitra Chnadrashekhar
Abstract:
Earlier regarded as a support function within an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, which was unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the root causes of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization, and thus have the functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and an evaluation of the data for a given oil production asset using an application tool, SAS. The reason for using SAS for our analysis is that it provides an analytics-based framework to improve the uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.
Keywords: hydrocarbon, information technology, SAS, predictive analytics
Procedia PDF Downloads 362
947 Triple Intercell Bar for Electrometallurgical Processes: A Design to Increase PV Energy Utilization
Authors: Eduardo P. Wiechmann, Jorge A. Henríquez, Pablo E. Aqueveque, Luis G. Muñoz
Abstract:
PV energy prices are declining rapidly. To take advantage of these prices and lower the carbon footprint, operational practices must be modified. This undoubtedly challenges the electrowinning practice of operating at constant current throughout the day. This work presents a technology that contributes to providing modulation capacity in the electrode current distribution system, so that the daytime DC current can be raised and lowered at night. The system is a triple intercell bar that operates in current-source mode. The design is a capping-board-free, dogbone-type bar that ensures operation free of short circuits, hot-swappability for repairs, and improved current balance. This current-source system eliminates the resetting currents circulating in equipotential bars. Twin auxiliary connectors are added to the main connectors, providing secure current paths that bypass faulty or impaired contacts. All conductive elements of the system are positioned over a baseboard, offering a large heat-sink area to the facility's ventilation. The system operates at a lower temperature than a conventional busbar. Of these attributes, the cathode current balance property stands out and is paramount for day/night modulation and the use of photovoltaic energy. A design based on a 3D finite element method model predicting electrical and thermal performance under various industrial scenarios is presented. Preliminary results obtained in an electrowinning facility with industrial prototypes are included.
Keywords: electrowinning, intercell bars, PV energy, current modulation
Procedia PDF Downloads 155
946 DWDM Network Implementation in the Honduran Telecommunications Company "Hondutel"
Authors: Tannia Vindel, Carlos Mejia, Damaris Araujo, Carlos Velasquez, Darlin Trejo
Abstract:
DWDM (Dense Wavelength Division Multiplexing) is growing constantly around the world, driven by consumer demand. From its inception arises the need for a system that enables the communications of an entire nation to be expanded, improving the computing trends of its society according to its customs and geographical location. The Honduran Company of Telecommunications (HONDUTEL) provides internet services and data transport with PDH and SDH technology, which, in the Republic of Honduras C.A., represents a viable option for the consumer in terms of purchase value and ease of acquisition; however, it is not efficient in terms of technological advancement, and it constitutes an obstacle that limits long-term socio-economic development in comparison with other countries in the region and hinders competition between the telecommunications companies engaged in this field. For that reason, we propose to adopt a technological trend already implemented in Europe and apply it in our country: DWDM, which allows broadband data transfer to be provided, replacing the existing system with one that offers a better service and remains at the forefront. In this way, we will have a stable, high-quality service that will allow us to compete in a globalized world. Once implemented, DWDM builds upon existing resources, such as the equipment already in use, and opens a new stage that provides a business image for the Republic of Honduras C.A. as a nation, ensuring data transport and broadband internet in a meaningful way. These benefits accrue in the first instance to existing customers and to all the public and private institutions in need of such services.
Keywords: demultiplexers, light detectors, multiplexers, optical amplifiers, optical fibers, PDH, SDH
Procedia PDF Downloads 265
945 Geological, Geochronological, Geochemical, and Geophysical Characteristics of the Dalli Porphyry Cu-Au Deposit in Central Iran; Implications for Exploration
Authors: Hooshag Asadi Haroni, Maryam Veiskarami, Yongjun Lu
Abstract:
The Dalli gold-rich porphyry deposit (17 Mt @ 0.5% Cu and 0.65 g/t Au) is located in the Urumieh-Dokhtar Magmatic Arc (UDMA), a small segment of the Tethyan metallogenic belt, hosting several porphyry Cu (Mo-Au) systems in Iran. This research characterizes the Dalli deposit to define exploration criteria for advanced exploration, such as the drilling of possible blind porphyry centers. Geological maps, trench/drill hole geochemical and ground magnetic data, and age dating and isotope trace element analyses, carried out at the John De Laeter Research Center of Curtin University, were used to characterize the Dalli deposit. Mineralization at Dalli is hosted by NE-trending quartz-diorite porphyry stocks (~ 200 m in diameter) intruded by a wall-rock andesite porphyry. Disseminated and stockwork Cu-Au mineralization is related to potassic alteration, comprising magnetite, late K-feldspar and biotite, and a quartz-sericite-specularite overprint, surrounded by extensive barren argillic and propylitic alterations. In the peripheries of the porphyry centers, there are N-trending vuggy quartz veins hosting epithermal Au-Ag-As-Sb mineralization. Geochemical analyses of drill core samples showed that the core of the porphyry stocks is low-grade, whereas the high-grade disseminated and stockwork mineralization (~ 1% Cu and ~ 1.2 g/t Au) occurred at the contact of the porphyry stocks and the andesite porphyry. Geochemical studies of the drill hole and trench samples showed a strong correlation between Cu and Au, and both show a second-order correlation with Fe and As. The magnetic survey revealed two significant magnetic anomalies, associated with intensive potassic alteration, in the reduced-to-the-pole magnetic map of the area. A relatively weaker magnetic anomaly, showing no surface porphyry expressions, is located on a lithocap consisting of advanced argillic alteration, vuggy quartz veins, and surface expressions of epithermal geochemical signatures. The association of the lithocap and the weak magnetic anomaly could be indicative of a hidden mineralized porphyry center. Litho-geochemical analyses of the least altered Dalli intrusions and volcanic rocks indicated high Sr/Y (49-61) and Eu/Eu* (0.89-0.92), features typical of Cu porphyries. The U-Pb dating of zircons of the mineralized quartz diorite and andesite porphyry, carried out by laser ablation inductively coupled plasma mass spectrometry, yielded magmatic crystallization ages of 15.4-16.0 Ma (Middle Miocene). The zircon trace element concentrations of Dalli are characterized by high Eu/Eu* (0.3-0.8), (Ce/Nd)/Y (0.01-0.3), and 10000*(Eu/Eu*)/Y (2-15) ratios, similar to fertile porphyry suites such as the giant Sar-Cheshmeh and Qulong porphyry Cu deposits along the Tethyan belt. This suggests that the Middle Miocene Dalli intrusions are fertile and require extensive deep drilling to define their potential. Chondrite-normalized rare earth element (REE) patterns show no significant Eu anomalies and are characterized by light-REE enrichments ((La/Sm)n = 2.57–6.40). In normalized multi-element diagrams, the analyzed rocks are characterized by enrichments in large ion lithophile elements (LILE) and depletions in high field strength elements (HFSE), and display typical features of subduction-related calc-alkaline magmas. The characteristics of the Dalli deposit provide several recognition criteria for detailed exploration of Cu-Au porphyry deposits and highlight the importance of the UDMA as a potentially significant, economically important, but relatively underexplored porphyry province.
Keywords: porphyry, gold, geochronology, magnetic, exploration
Procedia PDF Downloads 64
944 Design and Implementation of Low-code Model-building Methods
Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu
Abstract:
This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive way to drag, drop and connect components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies the model deployment process. The core strength of this method lies in its ease of use and efficiency. Users do not need a deep programming background and can complete the design and implementation of complex models with a simple drag-and-drop operation. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens the model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability, so that it can adapt to the needs of different application scenarios.
Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment
Procedia PDF Downloads 31
943 Anti-Corruption, an Important Challenge for the Construction Industry!
Authors: Ahmed Stifi, Sascha Gentes, Fritz Gehbauer
Abstract:
The construction industry is perhaps one of the oldest industries in the world. Ancient monuments like the Egyptian pyramids, Greek and Roman temples such as the Parthenon and the Pantheon, robust bridges, old Roman theatres, citadels and many more are the best testament to that. The industry also has a symbiotic relationship with other industries: the heavy engineering industry provides construction machinery, the chemical industry develops innovative construction materials, the finance sector provides funding solutions for complex construction projects, and many more. The construction industry is not only mammoth but also very complex in nature. Because of this complexity, the construction industry is prone to various tribulations which may hamper its growth. A comparative study of this industry with others shows that it is associated with tardiness and delay, especially when we focus on the managerial aspects and the triple constraint (time, cost and scope). While some institutes cite the complexity associated with it as a major reason, others, like lean construction, point to the waste produced across the construction process as the prime reason. This paper introduces corruption as one of the prime factors behind such delays. To support this, many international reports and studies depict the construction industry as one of the most corrupt sectors worldwide, and corruption can take place throughout the project cycle, comprising project selection, planning, design, funding, pre-qualification, tendering, execution, operation and maintenance, and even the reconstruction phase. It also takes many forms, such as bribery, fraud, extortion, collusion, embezzlement and conflict of interest. As a solution to cope with corruption in the construction industry, the paper introduces integrity as a key factor and builds a new integrity framework to develop and implement an integrity management system for construction companies and construction projects.
Keywords: corruption, construction industry, integrity, lean construction
Procedia PDF Downloads 378
942 Outcome Analysis of Surgical and Nonsurgical Treatment on Indicated Operative Chronic Subdural Hematoma: Serial Case in Cipto Mangunkusumo Hospital Indonesia
Authors: Novie Nuraini, Sari Hanifa, Yetty Ramli
Abstract:
Chronic subdural hematoma (cSDH) is a common condition after head trauma. Although the thickness of the cSDH plays an important role in the decision to perform surgery, the threshold thickness is not absolute. In this serial case report, we evaluate three cases of cSDH with an indication for surgery because of neurologic deficits and neuroimaging findings of subfalcine herniation of more than 0.5 cm and hematoma thickness of more than 1 cm. In the first case, the patient underwent hematoma evacuation; in the second and third cases, we provided nonsurgical treatment because the patients and their families refused the operation. Conservative treatment consisted of bed rest and mannitol. Serial radiologic evaluation was performed whenever a worsening condition was found, and radiologic examination was repeated two weeks after treatment. In this serial case report, the first and second cases had a good outcome. In the third case, the condition worsened; this patient had comorbid type 2 diabetes mellitus, pneumonia and chronic kidney disease. Conservative treatments such as bed rest, corticosteroids, mannitol or other hyperosmolar agents have good outcomes in patients without neurologic deficits, with small hematomas, and/or without comorbid disease. Hematoma evacuation is the best choice in cSDH treatment when neurologic deficits are present. However, there are conditions in which the surgical procedure cannot be performed. Serial radiologic examination is needed after two weeks to evaluate the treatment, or earlier if there is any worsening condition.
Keywords: chronic subdural hematoma, traumatic brain injury, surgical treatment, nonsurgical treatment, outcome
Procedia PDF Downloads 332
941 Modeling of Virtual Power Plant
Authors: Muhammad Fanseem E. M., Rama Satya Satish Kumar, Indrajeet Bhausaheb Bhavar, Deepak M.
Abstract:
Keeping the right balance of electricity between the supply and demand sides of the grid is one of the most important objectives of electrical grid operation. Power generation and demand forecasting are the core of power management and generation scheduling. Large, centralized generating units were used in the construction of conventional power systems in the past. A certain level of balance was possible since generation kept up with the power demand. However, integrating renewable energy sources into power networks has proven to be a difficult challenge due to their intermittent nature. The power imbalance caused by rising demand and peak loads is negatively affecting power quality and dependability. Demand-side management and demand response are among the solutions, keeping generation the same but altering, rescheduling or completely shedding the load. However, shedding or rescheduling the load is not an efficient approach. Here lies the significance of virtual power plants. The virtual power plant integrates distributed generation, dispatchable load, and distributed energy storage organically, using complementary control approaches and communication technologies. This would eventually increase the utilization rate and financial advantages of distributed energy resources. Most of the literature on virtual power plant models ignores technical limitations, and modeling has been done from a financial or commercial viewpoint. Therefore, this paper aims to address the modeling intricacies of VPPs and their technical limitations, shedding light on a holistic understanding of this innovative power management approach.
Keywords: cost optimization, distributed energy resources, dynamic modeling, model quality tests, power system modeling
Procedia PDF Downloads 66
940 Optimum Performance of the Gas Turbine Power Plant Using Adaptive Neuro-Fuzzy Inference System and Statistical Analysis
Authors: Thamir K. Ibrahim, M. M. Rahman, Marwah Noori Mohammed
Abstract:
This study deals with the modeling and performance enhancement of a gas-turbine combined cycle power plant. Clean and safe energy is one of the greatest challenges in meeting the requirements of a green environment. These requirements are eroding the long-standing dominance of the steam turbine (ST) in world power generation, and the gas turbine (GT) is set to replace it. Therefore, it is necessary to predict the characteristics of the GT system and optimize its operating strategy by developing a simulation system. The integrated model and simulation code for evaluating the performance of gas turbine power plants were developed in MATLAB. The performance code for heavy-duty GT and CCGT power plants was validated against the real Baiji GT and MARAFIQ CCGT plants, and the results were satisfactory. A new correlation was considered for all types of simulation data, whose coefficient of determination (R2) was calculated as 0.9825. Some of the most recently published correlations were also checked against the Baiji GT plant, and error analysis was applied. GT performance was judged by particular parameters selected from the simulation model, and the Adaptive Neuro-Fuzzy Inference System (ANFIS), an advanced optimization technology, was also utilized. The best thermal efficiency and power output attained were about 56% and 345 MW, respectively. Thus, the operating conditions and ambient temperature strongly influence the overall performance of the GT. The optimum efficiency and power are found at higher turbine inlet temperatures. It can be concluded that the developed models are powerful tools for estimating the overall performance of GT plants.
Keywords: gas turbine, optimization, ANFIS, performance, operating conditions
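As a small illustration of the goodness-of-fit check mentioned above, the Python sketch below computes a coefficient of determination (R²) between measured and simulated outputs; the numbers are hypothetical values, not the Baiji or MARAFIQ data.

```python
import numpy as np

# Hypothetical measured vs simulated power outputs (MW) for an R^2 check.
measured  = np.array([300., 310., 322., 335., 340., 345.])
simulated = np.array([298., 312., 320., 333., 343., 344.])

ss_res = np.sum((measured - simulated) ** 2)          # residual sum of squares
ss_tot = np.sum((measured - measured.mean()) ** 2)    # total sum of squares
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.4f}")
```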
Procedia PDF Downloads 427
939 Heritage Impact Assessment Policy within Western Balkans, Albania
Authors: Anisa Duraj
Abstract:
As is generally acknowledged, cultural heritage is the weakest component in EIA studies. The role of heritage impact assessment (HIA) in development projects is not often accounted for, and in those cases where it is, HIA is considered a reactive response rather than a solutions provider. Because of continuous development projects, heritage is in most cases disregarded and often put under threat. Cultural protection and development challenges call for prudent legal regulation and appropriate policy implementation. The challenges become even more pronounced in underdeveloped countries or endangered areas, which are generally characterized by numerous legal constraints. Therefore, the need for strategic proposals for HIA is of high importance. In order to establish HIA as a proactive operation in the IA process and ensure that cultural heritage is covered across the whole EIA framework, an appropriate system for evaluating impacts should be provided. To obtain the required results, HIA must be part of a regional policy that addresses and guides development projects toward a proper evaluation of their impacts on heritage. In order to get a clearer picture of existing gaps as well as new possibilities for HIA, this paper focuses on the Western Balkans region and the ongoing changes it faces. Given the continuous development pressure in the region and the aspiration of the Western Balkan countries to join the European Union (EU) as member states, attention should be paid to new development policies under the EU directives for conducting EIAs, and adequate support is required for the restructuring of existing policies as well as for the implementation of the UN Agenda for the SDGs. In the framework of these newly emerging needs, if HIA is taken into account, the outcome would be an inclusive regional program that would help to overcome marginality issues of spaces and people.
Keywords: cultural heritage, impact assessment, SDGs, urban development, western Balkans, regional policy, HIA, EIA
Procedia PDF Downloads 119
938 Philippine Foreign Policy in the West Philippine Sea after the 2012 Scarborough Standoff: Implications for National Security
Authors: Rhisan Mae Enriquez-Morales
Abstract:
The primary concern of this study is to answer the question: How does the Philippine government formulate its foreign policy with respect to its territorial claims over areas in the West Philippine Sea after the Scarborough standoff in April 2012? Specifically, the study seeks to provide an understanding of the political process in the formulation of foreign policy relating to the Philippine claims in the West Philippine Sea after the 2012 Scarborough standoff, by looking into the relationship between bureaucracies and how they influence the decision-making process. Secondly, this study aims to determine the long- and short-term foreign policies of the Philippines with respect to its territorial claims over the West Philippine Sea. Lastly, this study seeks to determine the implications of Philippine foreign policy on settling the West Philippine Sea dispute for the country's national security. The Bureaucratic Politics Model (BPM) in Foreign Policy Analysis (FPA) is the framework utilized in this study; it focuses primarily on the relationship between bureaucracies in the formulation of foreign policy and how these agencies influence the process of foreign policy formulation. The findings of this study reveal that: first, the Philippines' foreign policy in the West Philippine Sea continues to develop to address current developments in the WPS. Second, as the government requires demilitarization, there is a shift from a traditional to a non-traditional security approach. This shift has caused discomfort in the defense sector, particularly the Navy, which feels deprived of its traditional roles. Lastly, the Philippine government's greater emphasis on internal security operations implies the need to reassess its security concerns and look into territorial security.
Keywords: bureaucratic politics model, foreign policy analysis, security, West Philippine sea
Procedia PDF Downloads 397
937 AgriInnoConnect Pro System Using IoT and Firebase Console
Authors: Amit Barde, Dipali Khatave, Vaishali Savale, Atharva Chavan, Sapna Wagaj, Aditya Jilla
Abstract:
AgriInnoConnect Pro is an advanced agricultural automation system designed to enhance irrigation efficiency and overall farm management through IoT technology. Using MIT App Inventor, Telegram, Arduino IDE, and Firebase Console, it provides a user-friendly interface for farmers. Key hardware includes soil moisture sensors, DHT11 sensors, a 12V motor, a solenoid valve, a stepdown transformer, Smart Fencing, and AC switches. The system operates in automatic and manual modes. In automatic mode, the ESP32 microcontroller monitors soil moisture and autonomously controls irrigation to optimize water usage. In manual mode, users can control the irrigation motor via a mobile app. Telegram bots enable remote operation of the solenoid valve and electric fencing, enhancing farm security. Additionally, the system upgrades conventional devices to smart ones using AC switches, broadening automation capabilities. AgriInnoConnect Pro aims to improve farm productivity and resource management, addressing the critical need for sustainable water conservation and providing a comprehensive solution for modern farm management. The integration of smart technologies in AgriInnoConnect Pro ensures precision farming practices, promoting efficient resource allocation and sustainable agricultural development.
Keywords: agricultural automation, IoT, soil moisture sensor, ESP32, MIT app inventor, telegram bot, smart farming, remote control, firebase console
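A hardware-free Python sketch of the automatic/manual control logic described above is given below; the sensor and motor functions are stand-ins for the real ESP32 drivers, and the moisture thresholds, Firebase and Telegram integration are assumptions for illustration only.

```python
import random
import time

# Stand-ins for the real ESP32 sensor and relay drivers (assumed thresholds).
MOISTURE_ON, MOISTURE_OFF = 30, 55           # % soil moisture hysteresis band

def read_soil_moisture():
    return random.uniform(20, 70)            # stub for the analog sensor reading

def set_motor(on):
    print("irrigation motor", "ON" if on else "OFF")

def control_step(mode, manual_command, motor_on):
    if mode == "manual":
        return manual_command                # app/Telegram command decides directly
    moisture = read_soil_moisture()
    if moisture < MOISTURE_ON:
        return True                          # soil too dry: start irrigation
    if moisture > MOISTURE_OFF:
        return False                         # soil wet enough: stop irrigation
    return motor_on                          # inside hysteresis band: keep last state

motor = False
for _ in range(5):                           # a few iterations of the control loop
    new_state = control_step("auto", manual_command=False, motor_on=motor)
    if new_state != motor:
        set_motor(new_state)
        motor = new_state
    time.sleep(0.1)
```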
Procedia PDF Downloads 47
936 Study on the Process of Detumbling Space Target by Laser
Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming
Abstract:
The active removal of space debris and asteroid defense are important issues in human space activities. Both of them require a detumbling process, since almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. It is therefore necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, since it is a contactless technique and can work at a safe distance. In existing research, a laser rotational control strategy based on the estimation of the instantaneous angular velocity of the target has been presented. However, the calculation of the control torque produced by the laser, which is very important in the detumbling operation, is not accurate enough, because the method used is only suitable for planar or regularly shaped targets and does not consider the influence of irregular shape and spot size. In this paper, based on a triangulation reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation of the laser, and we verify its accuracy by theoretical calculation and impulse measurement experiments. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally applicable and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ laser pulse energy, and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case.
Keywords: detumbling, laser ablation drive, space target, space debris remove
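A back-of-the-envelope Python sketch of the detumbling-time estimate is given below: each pulse removes a small amount of angular momentum determined by an assumed impulse coupling coefficient and lever arm. All values are hypothetical placeholders for a small debris object, not the Bennu case computed in the paper.

```python
import math

# Hypothetical detumbling-time estimate for a small tumbling debris object.
inertia = 1.2e4              # kg*m^2, moment of inertia about the spin axis (assumed)
omega0 = 2 * math.pi / 60.0  # rad/s, initial spin of one revolution per minute
coupling = 30e-6             # N*s per kJ, assumed impulse coupling coefficient
pulse_energy_kj = 100.0      # kJ delivered per laser pulse (assumed)
lever_arm = 1.5              # m, distance of the laser spot from the centroid
pulse_rate = 1.0             # pulses per second

# Angular impulse removed per pulse = linear impulse * lever arm.
angular_impulse_per_pulse = coupling * pulse_energy_kj * lever_arm   # N*m*s
pulses_needed = inertia * omega0 / angular_impulse_per_pulse
print(f"pulses needed ~ {pulses_needed:.0f}, "
      f"time ~ {pulses_needed / pulse_rate / 3600:.1f} h")
```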
Procedia PDF Downloads 86