Search results for: computation time
14729 Effects of Feed Forms on Growth Pattern, Behavioural Responses and Fecal Microbial Load of Pigs Fed Diets Supplemented with Saccharomyces cerevisiae Probiotics
Authors: O. A. Adebiyi, A. O. Oni, A. O. K. Adeshehinwa, I. O. Adejumo
Abstract:
In a forty-nine (49) day trial, twenty-four (24) growing pigs (Landrace x Large White) with an average weight of 17 ± 2.1 kg were allocated to four experimental treatments: T1 (dry mash without probiotics), T2 (wet feed without probiotics), T3 (dry mash + Saccharomyces cerevisiae probiotics) and T4 (wet feed + Saccharomyces cerevisiae probiotics), each replicated three times with two pigs per replicate in a completely randomised design. The basal feed (dry feed) was formulated to meet the nutritional requirement of the animal, with a crude protein content of 18.00% and metabolisable energy of 2784.00 kcal/kg ME. Growth pattern, faecal microbial load and behavioural activities (eating, drinking, physical pen interaction and frequency of visiting the drinking troughs) were assessed. Pigs fed dry mash without probiotics (T1) had the highest daily feed intake among the experimental animals (1.10 kg), while pigs on supplemented diets (T3 and T4) had an average daily feed intake of 0.95 kg. However, the feed conversion ratio was significantly (p < 0.05) affected, with pigs on T3 having the lowest value of 6.26 compared to those on T4 (wet feed + Saccharomyces cerevisiae) with a mean of 7.41. Total organism counts varied significantly (p < 0.05), with pigs on T1, T2, T3 and T4 having mean values of 179.50 × 10⁶ cfu, 132.00 × 10⁶ cfu, 32.00 × 10⁶ cfu and 64.50 × 10⁶ cfu respectively. Coliform count was also significantly (p < 0.05) different among the treatments, with corresponding values of 117.50 × 10⁶ cfu, 49.00 × 10⁶ cfu and 8.00 × 10⁶ cfu for pigs in T1, T2 and T4 respectively. The faecal Saccharomyces cerevisiae count was significantly lower in pigs fed supplemented diets compared to their counterparts on unsupplemented diets, which could be due to the inability of yeast organisms to be voided easily through faeces. The pigs in T1 spent the most time eating (7.88%), while their counterparts on T3 spent the least time eating. The corresponding physical pen interaction times, expressed as a percentage of a day, for pigs in T1, T2, T3 and T4 were 6.22%, 5.92%, 4.04% and 4.80% respectively. The behavioural responses exhibited by the T3 pigs showed that only a small amount of dry feed supplemented with probiotics is needed for better performance. Water intake increased as a result of the dryness of the feed, with a consequent decrease in pen interaction, and more time was spent resting than engaging in vice habits such as fighting or tail biting. Pigs fed dry feed supplemented with Saccharomyces cerevisiae probiotics (T3) had better overall performance and a lower faecal microbial load than wet-fed pigs, whether supplemented with Saccharomyces cerevisiae or not. Keywords: behaviour, feed forms, feed utilization, growth, microbial
Procedia PDF Downloads 359
14728 An Experimental Study on the Optimum Installation of Fire Detector for Early Stage Fire Detecting in Rack-Type Warehouses
Authors: Ki Ok Choi, Sung Ho Hong, Dong Suck Kim, Don Mook Choi
Abstract:
Rack-type warehouses differ from general buildings in the kinds, amounts, and arrangement of the goods they store, so their fire risk is different from that of other buildings. The fire pattern of rack-type warehouses also differs according to the combustion characteristics and storage conditions of the stored goods. The initial burning rate depends on the surface condition of the materials, but the fire's development over time is closely related to the kinds of stored materials and the storage conditions. The stored goods of a warehouse consist of diverse combustibles, combustible liquids, and so on. Fire detection may be delayed because warehouses have fewer occupants than office and commercial buildings. If the fire detectors installed in rack-type warehouses are unsuitable, a warehouse fire may grow into a major fire because of the delay in detection. In this paper, we studied which kinds of fire detectors are best suited to early detection of rack-type warehouse fires by means of real-scale fire tests. The fire detectors used in the tests were rate-of-rise, fixed-temperature, photoelectric, and aspirating detectors. We propose an optimum fire detection method for rack-type warehouses based on the response characteristics and comparative analysis of these detectors. Keywords: fire detector, rack, response characteristic, warehouse
Procedia PDF Downloads 748
14727 Design an Algorithm for Software Development in CBSE Environment Using Feed Forward Neural Network
Authors: Amit Verma, Pardeep Kaur
Abstract:
In software development organizations, component-based software engineering (CBSE) is an emerging paradigm that has gained wide acceptance, as it often results in increased quality of the software product within development time and budget. In component reusability, the main challenge is identifying the right component in large repositories at the right time. The major objective of this work is to provide an efficient algorithm for the storage and effective retrieval of components using a neural network and parameters based on user choice through clustering. This paper proposes an algorithm that provides an error-free and automatic process for retrieving components for reuse. In this algorithm, keywords (or components) are first extracted from the software documents, and a k-means clustering algorithm is then applied. Weights are assigned to those keywords based on their frequency; after the weights are assigned, an ANN predicts whether the correct weight has been assigned to each keyword (or component), otherwise it back-propagates to the initial step and re-assigns the weights. Finally, all keywords are stored in repositories for effective retrieval. The proposed algorithm is very effective in error detection and correction, and retrieval based on user choice makes component selection for reuse efficient. Keywords: component based development, clustering, back propagation algorithm, keyword based retrieval
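A minimal sketch of the storage-and-retrieval idea described above is given below, assuming Python with scikit-learn. It only illustrates keyword extraction with frequency-based weighting (via TF-IDF), k-means clustering, and cluster-routed retrieval; the ANN weight-verification and back-propagation step of the proposed algorithm is not reproduced, and the component descriptions are invented examples.

```python
# Illustrative sketch (not the authors' implementation): keyword extraction from
# component documents, k-means clustering, and frequency-based weighting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

component_docs = [            # hypothetical component documentation strings
    "parse xml configuration file reader",
    "xml schema validation reader",
    "tcp socket connection pool manager",
    "udp socket datagram sender",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(component_docs)          # keyword weights by frequency
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Store each component under its cluster so retrieval can search one cluster only.
repository = {}
for doc, label in zip(component_docs, kmeans.labels_):
    repository.setdefault(label, []).append(doc)

def retrieve(query):
    """Route a user query to the nearest cluster and return its components."""
    q = vectorizer.transform([query])
    label = kmeans.predict(q)[0]
    return repository[label]

print(retrieve("xml reader component"))
```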
Procedia PDF Downloads 381
14726 A Neural Network Control for Voltage Balancing in Three-Phase Electric Power System
Authors: Dana M. Ragab, Jasim A. Ghaeb
Abstract:
The three-phase power system suffers from different challenging problems, e.g. voltage unbalance conditions at the load side. The voltage unbalance usually degrades the power quality of the electric power system. Several techniques can be considered for load balancing including load reconfiguration, static synchronous compensator and static reactive power compensator. In this work an efficient neural network is designed to control the unbalanced condition in the Aqaba-Qatrana-South Amman (AQSA) electric power system. It is designed for highly enhanced response time of the reactive compensator for voltage balancing. The neural network is developed to determine the appropriate set of firing angles required for the thyristor-controlled reactor to balance the three load voltages accurately and quickly. The parameters of AQSA power system are considered in the laboratory model, and several test cases have been conducted to test and validate the proposed technique capabilities. The results have shown a high performance of the proposed Neural Network Control (NNC) technique for correcting the voltage unbalance conditions at three-phase load based on accuracy and response time.Keywords: three-phase power system, reactive power control, voltage unbalance factor, neural network, power quality
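As a rough illustration of the controller idea, and not the AQSA design itself, the following Python sketch trains a small feed-forward network to map three measured load voltages to three thyristor firing angles. The training data and the voltage-to-angle rule are synthetic placeholders, since the paper's laboratory data are not reproduced here.

```python
# Toy sketch of the idea only: a feed-forward network that maps measured
# three-phase load voltages to thyristor-controlled-reactor firing angles.
# The training pairs below are synthetic placeholders, not AQSA system data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
V = rng.uniform(0.90, 1.10, size=(500, 3))        # per-unit phase voltages (synthetic)
# Placeholder rule: larger voltage deviation -> larger firing angle, capped at 180 deg.
alpha = np.minimum(90.0 + 600.0 * np.abs(V - 1.0), 180.0)

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
net.fit(V, alpha)

unbalanced = np.array([[1.06, 0.97, 0.93]])       # an unbalanced operating point
print("suggested firing angles (deg):", net.predict(unbalanced).round(1))
```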
Procedia PDF Downloads 200
14725 Malware Beaconing Detection by Mining Large-scale DNS Logs for Targeted Attack Identification
Authors: Andrii Shalaginov, Katrin Franke, Xiongwei Huang
Abstract:
One of the leading problems in cyber security today is the emergence of targeted attacks conducted by adversaries with access to sophisticated tools. These attacks usually steal senior-level employees' system privileges in order to gain unauthorized access to confidential knowledge and valuable intellectual property. The malware used for the initial compromise of the systems is sophisticated and may target zero-day vulnerabilities. In this work we utilize a common behaviour of malware called "beaconing", in which infected hosts communicate with Command and Control (C2) servers at regular intervals with relatively small time variations. By analysing such beacon activity through passive network monitoring, it is possible to detect potential malware infections. We therefore focus on time gaps as indicators of possible C2 activity in targeted enterprise networks. We represent DNS log files as a graph whose vertices are destination domains and whose edges carry timestamps. Then, using four periodicity detection algorithms for each pair of internal-external communications, we check the timestamp sequences to identify beacon activities. Finally, based on the graph structure, we infer the existence of other infected hosts and malicious domains enrolled in the attack activities. Keywords: malware detection, network security, targeted attack, computational intelligence
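The following Python sketch illustrates one simple way to test timestamp sequences for beacon-like regularity, assuming parsed DNS log entries of the form (timestamp, internal host, queried domain); it flags host-domain pairs whose inter-query gaps have a small coefficient of variation. It is a single heuristic for illustration, not the four periodicity detection algorithms or the graph inference used in the paper.

```python
# Minimal sketch: for each host-domain pair, compute inter-query gaps and flag
# pairs whose gaps have small relative variation, i.e. near-periodic behaviour.
from collections import defaultdict
from statistics import mean, pstdev

dns_log = [                      # hypothetical parsed log entries (epoch seconds)
    (1000, "10.0.0.5", "evil.example"), (1600, "10.0.0.5", "evil.example"),
    (2205, "10.0.0.5", "evil.example"), (2799, "10.0.0.5", "evil.example"),
    (1100, "10.0.0.7", "cdn.example"),  (1900, "10.0.0.7", "cdn.example"),
    (4200, "10.0.0.7", "cdn.example"),
]

timestamps = defaultdict(list)
for ts, host, domain in sorted(dns_log):
    timestamps[(host, domain)].append(ts)

for (host, domain), ts in timestamps.items():
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    if len(gaps) < 2:
        continue
    cv = pstdev(gaps) / mean(gaps)          # coefficient of variation of the gaps
    if cv < 0.1:                            # small jitter -> candidate beacon
        print(f"possible beaconing: {host} -> {domain}, mean gap {mean(gaps):.0f}s")
```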
Procedia PDF Downloads 268
14724 Dynamic Environmental Impact Study during the Construction of the French Nuclear Power Plants
Authors: A. Er-Raki, D. Hartmann, J. P. Belaud, S. Negny
Abstract:
This paper has a double purpose: firstly, a literature review of the life cycle analysis (LCA) and secondly a comparison between conventional (static) LCA and multi-level dynamic LCA on the following items: (i) inventories evolution with time (ii) temporal evolution of the databases. The first part of the paper summarizes the state of the art of the static LCA approach. The different static LCA limits have been identified and especially the non-consideration of the spatial and temporal evolution in the inventory, for the characterization factors (FCs) and into the databases. Then a description of the different levels of integration of the notion of temporality in life cycle analysis studies was made. In the second part, the dynamic inventory has been evaluated firstly for a single nuclear plant and secondly for the entire French nuclear power fleet by taking into account the construction durations of all the plants. In addition, the databases have been adapted by integrating the temporal variability of the French energy mix. Several iterations were used to converge towards the real environmental impact of the energy mix. Another adaptation of the databases to take into account the temporal evolution of the market data of the raw material was made. An identification of the energy mix of the time studied was based on an extrapolation of the production reference values of each means of production. An application to the construction of the French nuclear power plants from 1971 to 2000 has been performed, in which a dynamic inventory of raw material has been evaluated. Then the impacts were characterized by the ILCD 2011 characterization method. In order to compare with a purely static approach, a static impact assessment was made with the V 3.4 Ecoinvent data sheets without adaptation and a static inventory considering that all the power stations would have been built at the same time. Finally, a comparison between static and dynamic LCA approaches was set up to determine the gap between them for each of the two levels of integration. The results were analyzed to identify the contribution of the evolving nuclear power fleet construction to the total environmental impacts of the French energy mix during the same period. An equivalent strategy using a dynamic approach will further be applied to identify the environmental impacts that different scenarios of the energy transition could bring, allowing to choose the best energy mix from an environmental viewpoint.Keywords: LCA, static, dynamic, inventory, construction, nuclear energy, energy mix, energy transition
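A toy Python sketch of the dynamic-inventory idea follows: the same material quantities are characterised year by year with a time-varying energy-mix scaling factor rather than a single static factor. All quantities, impact factors, and mix factors are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the multi-level dynamic idea: the same material inventory is
# characterised year by year with a time-varying electricity-mix factor,
# instead of a single static factor. All numbers below are illustrative only.
construction_inventory = {        # tonnes of material placed per year (hypothetical)
    1975: {"steel": 8000, "concrete": 120000},
    1976: {"steel": 6000, "concrete": 90000},
}
impact_per_tonne = {"steel": 1.8, "concrete": 0.12}      # t CO2-eq/t, static factors
mix_factor = {1975: 1.00, 1976: 0.93}                     # yearly energy-mix scaling

static = sum(q * impact_per_tonne[m]
             for y in construction_inventory
             for m, q in construction_inventory[y].items())

dynamic = sum(q * impact_per_tonne[m] * mix_factor[y]
              for y in construction_inventory
              for m, q in construction_inventory[y].items())

print(f"static total: {static:.0f} t CO2-eq, dynamic total: {dynamic:.0f} t CO2-eq")
```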
Procedia PDF Downloads 106
14723 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty in pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels or to activate a sufficient number of technicians. This would expedite the clinical workload, clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and to generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agents technology, these bots would be configurable in order to simulate different situations, which may arise in a laboratory such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
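The paper plans a JavaScript single-page application; purely as a language-neutral illustration (written here in Python), the sketch below shows the kind of traffic-light rule that could colour each workflow stage by its current load. The thresholds and stage data are assumptions, not values from the laboratory.

```python
# Illustrative traffic-light rule for colouring a workflow stage by flow level.
# Thresholds (0.8 and 1.0 of normal capacity) are assumed, not from the paper.
def stage_colour(specimens_waiting, normal_capacity):
    """Return green/orange/red for one workflow stage."""
    load = specimens_waiting / normal_capacity
    if load <= 0.8:
        return "green"    # normal flow
    if load <= 1.0:
        return "orange"   # slow flow
    return "red"          # critical flow / bottleneck

for stage, waiting, capacity in [("reception", 40, 60),
                                 ("testing", 55, 60),
                                 ("validation", 75, 60)]:
    print(stage, stage_colour(waiting, capacity))
```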
Procedia PDF Downloads 286
14722 Liquid-Liquid Extraction of Uranium (VI) from Aqueous Solution Using 1-Hydroxyalkylidene-1,1-Diphosphonic Acids
Authors: Mustapha Bouhoun Ali, Ahmed Yacine Badjah Hadj Ahmed, Mouloud Attou, Abdel Hamid Elias, Mohamed Amine Didi
Abstract:
The extraction of uranium(VI) from aqueous solutions has been investigated using 1-hydroxyhexadecylidene-1,1-diphosphonic acid (HHDPA) and 1-hydroxydodecylidene-1,1-diphosphonic acid (HDDPA), which were synthesized and characterized by elemental analysis and by FT-IR, 1H NMR and 31P NMR spectroscopy. In this paper, we propose a tentative assignment of the shifts of these two ligands and of their specific complexes with uranium(VI). We carried out the extraction of uranium(VI) by HHDPA and HDDPA from [carbon tetrachloride + 2-octanol (v/v: 90%/10%)] solutions. Various factors such as contact time, pH, organic/aqueous phase ratio and extractant concentration were considered. The optimum conditions obtained were: contact time = 20 min, organic/aqueous phase ratio = 1, pH = 3.0 and extractant concentration = 0.3 M. The extraction yields are higher for HHDPA, which carries a longer hydrocarbon chain than HDDPA. Logarithmic plots of the uranium(VI) distribution ratio vs. pHeq and vs. the extractant concentration showed that the ratio of extractant to extracted uranium(VI) (ligand/metal) is 2:1. The formula of the complex of uranium(VI) with HHDPA and HDDPA is UO2(H3L)2 (HHDPA and HDDPA are denoted as H4L). Spectroscopic analysis showed that coordination of uranium(VI) takes place via oxygen atoms. Keywords: liquid-liquid extraction, uranium(VI), 1-hydroxyalkylidene-1,1-diphosphonic acids, HHDPA, HDDPA, aqueous solution
Procedia PDF Downloads 530
14721 A Stochastic Analytic Hierarchy Process Based Weighting Model for Sustainability Measurement in an Organization
Authors: Faramarz Khosravi, Gokhan Izbirak
Abstract:
A weighted, statistically based stochastic Analytic Hierarchy Process (AHP) model is proposed for modeling the potential barriers and enablers of sustainability and for measuring and assessing the sustainability level. For context-dependent potential barriers and enablers, the proposed model takes as its basis the properties of the variables describing the sustainability functions and was developed into a realistic analytical model for the sustainable behavior of an organization, thereby serving as a means of measuring the organization's sustainability. The main focus of this paper is the application of the AHP tool in a statistically based model for measuring sustainability; hence a robust weighted stochastic AHP procedure was achieved. A case-study scenario of a widely reported major Canadian electric utility was adopted to demonstrate the applicability of the developed model, and its results were compared with those of an equal-weighted model. Variations in the sustainability of the company over time were captured as fluctuations. In the results obtained, the sustainability index for successive years changed from 73.12%, 79.02%, 74.31%, 76.65%, 80.49%, 79.81%, 79.83% to the more exact values 73.32%, 77.72%, 76.76%, 79.41%, 81.93%, 79.72%, and 80.45% according to the priorities of the factors obtained from expert views, respectively. By obtaining the relatively necessary informative measurement indicators, the model can practically and effectively evaluate the sustainability extent of any organization and determine its fluctuations over time. Keywords: AHP, sustainability fluctuation, environmental indicators, performance measurement
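As background for the AHP step only, the following Python sketch derives a priority (weight) vector from a pairwise-comparison matrix via the principal eigenvector and checks a consistency ratio. The matrix is a made-up example rather than the case-study expert judgements, and the stochastic weighting layer of the proposed model is not reproduced.

```python
# Minimal AHP sketch: priorities from the principal eigenvector of a
# pairwise-comparison matrix, plus a consistency-ratio check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # made-up comparisons of three indicators

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                    # normalised priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # random index for n = 3 is about 0.58
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```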
Procedia PDF Downloads 125
14720 Activation of Mitophagy and Autophagy in Familial Forms of Parkinson's Disease, as a Potential Strategy for Cell Protection
Authors: Nafisa Komilova, Plamena Angelova, Andrey Abramov, Ulugbek Mirkhodjaev
Abstract:
Parkinson's disease (PD) is a progressive neurodegenerative disorder caused by the loss of dopaminergic neurons in the midbrain. The mechanism of neurodegeneration is associated with the aggregation of misfolded proteins, oxidative stress, and mitochondrial dysfunction. Considering this, the removal of unwanted organelles or proteins by autophagy is vitally important in neurons, and activation of these processes could be protective in PD. Short-term acidification of the cytosol can activate mitophagy and autophagy, and here we used sodium pyruvate and sodium lactate in human fibroblasts carrying PD mutations (Pink1, Pink1/Park2, α-syn triplication, A53T) to induce changes in intracellular pH. We found that both lactate and pyruvate at millimolar concentrations can induce short-term acidification of the cytosol in these cells. This induced activation of mitophagy and autophagy in control and PD fibroblasts and protected against cell death. Importantly, the application of lactate to acute brain slices of control and Pink1 knockout mice also induced a reduction of pH in neurons and astrocytes that increased the level of mitophagy. Thus, acidification of the cytosol by compounds which play an important role in cell metabolism can also activate mitophagy and autophagy and protect cells in familial forms of PD. Keywords: Parkinson's disease, mutations, mitophagy, autophagy
Procedia PDF Downloads 198
14719 Development of Lipid Architectonics for Improving Efficacy and Ameliorating the Oral Bioavailability of Elvitegravir
Authors: Bushra Nabi, Saleha Rehman, Sanjula Baboota, Javed Ali
Abstract:
Aim: The objective of the research undertaken was the analytical method validation (HPLC method) of the anti-HIV drug Elvitegravir (EVG), together with forced degradation studies of the drug under different stress conditions to determine its stability. This was envisaged in order to determine a suitable technique for drug estimation to be employed in further research. Furthermore, a comparative pharmacokinetic profile of the drug from lipid architectonics and from a drug suspension was to be obtained after oral administration. Method: Lipid architectonics (LA) of EVG were formulated using a probe sonication technique and optimized using QbD (Box-Behnken design). For the estimation of the drug during further analysis, the HPLC method was validated for linearity, precision, accuracy and robustness, and the limit of detection (LOD) and limit of quantification (LOQ) were determined. Furthermore, HPLC quantification of the forced degradation studies was carried out under different stress conditions (acid-induced, base-induced, oxidative, photolytic and thermal). For the pharmacokinetic (PK) study, albino Wistar rats weighing between 200-250 g were used. The different formulations were given by the oral route, and blood was collected at designated time intervals. A plasma concentration-time profile was plotted, from which the relevant pharmacokinetic parameters were determined. Keywords: AIDS, Elvitegravir, HPLC, nanostructured lipid carriers, pharmacokinetics
Procedia PDF Downloads 141
14718 On a Transient Magnetohydrodynamics Heat Transfer Within Radiative Porous Channel Due to Convective Boundary Condition
Authors: Bashiru Abdullahi, Isah Bala Yabo, Ibrahim Yakubu Seini
Abstract:
In this paper, steady and transient MHD heat transfer within a radiative porous channel due to convective boundary conditions is considered. The steady-state solution was obtained by a perturbation method and the transient solution by a finite difference method. The heat transfer analysis of the present work ascertains the influence of the Biot number (Bi1), magnetizing parameter (M), radiation parameter (R), temperature difference, suction/injection parameter (S), Grashof number (Gr) and time (t) on the velocity (u), temperature (θ), skin friction (τ), and Nusselt number (Nu). The results established were discussed with the help of line graphs. It was found that the velocity, temperature, and skin friction decay with increasing suction/injection and magnetizing parameters, while the Nusselt number increases with suction/injection at y = 0 and falls at y = 1. The steady-state solution was in perfect agreement with the transient version for sufficiently large values of time t. It is interesting to report that the Biot number has a strong influence; consequently, as its values increase, the results of the present work tend towards those of the extended literature. Keywords: heat transfer, thermal radiation, porous channel, MHD, transient, convective boundary condition
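For orientation, a very reduced Python sketch of the transient finite-difference idea is given below: an explicit march of a 1-D heat-conduction equation between the channel walls with fixed wall temperatures. The magnetic, radiation, suction/injection and convective-boundary terms of the actual formulation are omitted, so this is only a scaffold, not the paper's solution.

```python
# Very reduced sketch: explicit finite-difference march of 1-D transient heat
# conduction between channel walls y = 0 and y = 1 with fixed wall temperatures.
import numpy as np

ny = 41
dy = 1.0 / (ny - 1)
dt = 0.2 * dy**2                      # satisfies the explicit stability limit
theta = np.zeros(ny)                  # dimensionless temperature, initially cold
theta[0], theta[-1] = 1.0, 0.0        # heated wall at y = 0, cold wall at y = 1

for _ in range(8000):                 # march to dimensionless time t ~ 1 (near steady)
    theta[1:-1] += dt / dy**2 * (theta[2:] - 2 * theta[1:-1] + theta[:-2])

wall_gradient = -(theta[1] - theta[0]) / dy   # wall temperature gradient ~ Nusselt number
print("centreline temperature:", round(theta[ny // 2], 3),
      "wall gradient:", round(wall_gradient, 3))
```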
Procedia PDF Downloads 124
14717 Literary Interpretation and Systematic-Structural Analysis of the Titles of the Works “The Day Lasts More than a Hundred Years”, “Doomsday”
Authors: Bahor Bahriddinovna Turaeva
Abstract:
The article provides a structural analysis of the titles of the famous Kyrgyz writer Chingiz Aitmatov's works “The Day Lasts More Than a Hundred Years” and “Doomsday”. The author's creative purpose in naming a work of art, and the role of the elements of the plot and the composition of the novels in revealing the essence of the title, are explained. The criteria that matter in the naming of the author's works in different genres are classified, and titles that denote artistic time and artistic space are studied separately. The chronotope is regarded in world literary studies as a literary-aesthetic category expressing the scope of the interpretation of the universe, the author's outlook and imagination regarding the foundation of the world, the definition of personages, and the compositional means of expressing the sequence and duration of events. A creative comprehension of the chronotope as a means of arranging the composition of a work, its structure, and the construction of the epic field of the text demands a special approach to understanding the aesthetic character of the work. Since the chronotope includes all the elements of a fictional work, it is impossible to present the plot, composition, conflict, system of characters, and the characters' feelings and moods without describing the chronotope. In the subsequent development of scientific-theoretical thought in the world, the chronotope has come to be accepted as one of the poetic means of demonstrating reality, as well as a literary device that is basic to the expression of reality in the compositional construction and illustration of the plot, relying on the writer's intention and the ideological conception of the literary work. Literary time enables one to apprehend the literary world picture created by the author in terms of the descriptive subject and object of the work. Therefore, one of the topical tasks of modern Uzbek literary studies is to describe historical evidence, events, the lives of outstanding people, and the chronology of the near past on the basis of literary time, analysed in separate or comparative-typological aspects on the example of the creative works of a certain period, of several writers, or of an individual writer. Keywords: novel, title, chronotope, motive, epigraph, analepsis, structural analysis, plot line, composition
Procedia PDF Downloads 78
14716 Topic Prominence and Temporal Encoding in Mandarin Chinese
Authors: Tzu-I Chiang
Abstract:
A central question for the finite-nonfinite distinction in Mandarin Chinese is how Mandarin encodes temporal information without a grammatical contrast between past and present tense. Moreover, how do L2 learners of Mandarin whose native language is English, and whose L1 system has tense morphology, acquire the temporal encoding system of L2 Mandarin? The current study reports preliminary findings on the relationship between topic prominence and temporal encoding in L1 and L2 Chinese. Oral narrative data from 30 native speakers and learners of Mandarin Chinese were collected via a film-retell task. For coding, predicates collected from the narratives were transcribed and then coded according to four major verb types: n-degree statives (quality-STA), point-scale statives (status-STA), n-atom events (ACT), and point events (resultative-ACT). How native speakers and non-native speakers started retelling the story was calculated. Results of the study show that native speakers of Chinese tend to express Topic Time (TT) syntactically at the topic position, whereas L2 learners of Chinese across levels rely mainly on the default time encoded in the event types. Moreover, as the proficiency level of the learners increases, their appropriate use of the event predicates increases, which supports the argument that L2 development of temporal encoding is affected by lexical aspect. Keywords: topic prominence, temporal encoding, lexical aspect, L2 acquisition
Procedia PDF Downloads 203
14715 Adsorbent Removal of Oil Spills Using Bentonite Clay
Authors: Saad Mohamed Elsaid Abdelrahman
Abstract:
The adsorption method is one of the best modern techniques for removing pollutants, especially organic hydrocarbon compounds, from polluted water. In this research, bentonite clay is used to remove organic hydrocarbon compounds, such as heptane and octane, resulting from oil spills in seawater. The bentonite clay was obtained from the Kholayaz area, located 80 km north of Jeddah. Chemical analysis shows that the bentonite clay consists of a mixture of silica, alumina and oxides of some other elements. The bentonite clay can be activated with an ionic organic solvent in order to raise its adsorption efficiency and make it suitable for removing pollutants. It is necessary to study some of the factors that could affect the efficiency of bentonite clay in removing oily organic compounds, such as the contact time of the clay with the heptane and octane solutions, pH and temperature, in order to reach the highest adsorption capacity of the bentonite clay. The temperature can be a few degrees Celsius higher; however, the adsorption capacity of the clay decreases when the temperature is raised by more than 4°C, reaching its lowest value at 50°C. The results show that a contact time of 30 minutes and a pH of 6.8 are the best conditions for obtaining the highest adsorption capacity of the clay: 467 mg in the case of heptane and 385 mg in the case of octane. The experiments conducted on bentonite clay were encouraging enough to select it for removing heavy-molecular-weight pollutants such as the petroleum compounds under study. Keywords: adsorbent, bentonite clay, oil spills, removal
Procedia PDF Downloads 92
14714 Multi-Factor Optimization Method through Machine Learning in Building Envelope Design: Focusing on Perforated Metal Façade
Authors: Jinwooung Kim, Jae-Hwan Jung, Seong-Jun Kim, Sung-Ah Kim
Abstract:
Because the building envelope has a significant impact on the operation and maintenance stage of a building, designing the facade with performance in mind can improve the performance of the building and lower its maintenance cost. In general, however, optimizing two or more performance factors runs up against the limits of time and computational tools. The optimization phase typically repeats until the series of processes that generates alternatives and analyzes them achieves the desired performance. In particular, as geometric complexity or precision increases, the computational resources and time required to find the desired performance become prohibitive, so an optimization methodology is needed to deal with this. Instead of directly analyzing all the alternatives in the optimization process, applying heuristic techniques learned through experimentation and experience can reduce resource waste. This study proposes and verifies a method to optimize the double envelope of a building composed of perforated panels by applying machine learning to the design geometry and quantitative performance. The proposed method achieves the required performance with fewer resources by supplementing the existing method, which cannot directly compute the complex shape of the perforated panel. Keywords: building envelope, machine learning, perforated metal, multi-factor optimization, façade
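A hedged Python sketch of the surrogate-assisted screening idea follows: a regression model trained on a small set of analysed designs pre-screens a large pool of candidate perforated-panel alternatives, and only the most promising would be passed to the full analysis. The design variables, the stand-in performance function and all numbers are invented for illustration.

```python
# Hedged sketch: a surrogate model learned from a few analysed designs pre-screens
# many candidates; only the best would go on to the expensive full simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def expensive_analysis(x):
    """Stand-in for a full performance simulation of a panel design
    x = (perforation ratio, hole diameter in mm). Lower score is better."""
    ratio, diam = x
    return (ratio - 0.35) ** 2 + 0.01 * (diam - 8.0) ** 2

sampled = rng.uniform([0.1, 2.0], [0.6, 20.0], size=(40, 2))   # analysed designs
scores = np.array([expensive_analysis(x) for x in sampled])

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(sampled, scores)

candidates = rng.uniform([0.1, 2.0], [0.6, 20.0], size=(5000, 2))
best = candidates[np.argsort(surrogate.predict(candidates))[:5]]
print("candidates to pass to full analysis:\n", best.round(2))
```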
Procedia PDF Downloads 225
14713 Mathematical Modeling of the Calculation of the Absorbed Dose in Uranium Production Workers with Genetic Effects
Authors: P. Kazymbet, G. Abildinova, K.Makhambetov, M. Bakhtin, D. Rybalkina, K. Zhumadilov
Abstract:
Cytogenetic research was conducted in workers of the Stepnogorsk Mining-Chemical Combine (Akmola region), with the study of 26,341 chromosomal metaphases. Using regression analysis with the program DataFit, version 5.0, the dependence between exposure dose and the following cytogenetic indicators was studied: frequency of aberrant cells, frequency of chromosomal aberrations, and frequency of the sum of dicentric chromosomes and centric rings. Experimental data on the "dose-effect" calibration curves enabled the development of a mathematical model allowing the absorbed dose at the time of the study to be calculated from the frequency of aberrant cells, chromosome aberrations, and the sum of dicentric chromosomes and centric rings. In the dose range of 0.1 Gy to 5.0 Gy, the dependence of the cytogenetic parameters on dose followed the equations: Y = 0.0067e^(0.3307x) (R² = 0.8206) for the frequency of chromosomal aberrations; Y = 0.0057e^(0.3161x) (R² = 0.8832) for the frequency of cells with chromosomal aberrations; and Y = 5×10⁻⁵e^(0.6383x) (R² = 0.6321) for the frequency of the sum of dicentric chromosomes and centric rings per cell. On the basis of the cytogenetic parameters and regression equations, the calculated absorbed dose in the uranium production workers at the time of the study did not exceed 0.3 Gy. Keywords: Stepnogorsk, mathematical modeling, cytogenetic, dicentric chromosomes
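Given the calibration curves quoted above, a dose estimate follows by inverting the exponential fit, D = ln(Y/a)/b; the short Python sketch below applies this with the published coefficients and an illustrative observed frequency.

```python
# Each fit has the form Y = a * exp(b * D), so D = ln(Y / a) / b. Coefficients are
# taken from the abstract; the observed frequency below is an illustrative value.
import math

calibration = {
    "chromosomal aberrations":    (0.0067, 0.3307),
    "aberrant cells":             (0.0057, 0.3161),
    "dicentrics + centric rings": (5e-5, 0.6383),
}

def dose_from_frequency(indicator, observed_y):
    a, b = calibration[indicator]
    return math.log(observed_y / a) / b      # absorbed dose in Gy

print(round(dose_from_frequency("chromosomal aberrations", 0.0074), 2), "Gy")
```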
Procedia PDF Downloads 483
14712 Conformal Invariance and F(R,T) Gravity
Authors: P. Y. Tsyba, O. V. Razina, E. Güdekli, R. Myrzakulov
Abstract:
In this paper, we consider the equations of motion of F(R,T) gravity with respect to their conformal invariance. It is shown that in the general case such a theory is not conformally invariant. Special cases of the functions v and u for which the invariance properties of the theory can appear were studied. Keywords: conformal invariance, gravity, space-time, metric
Procedia PDF Downloads 667
14711 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry
Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc
Abstract:
Business Process Outsourcing has been one of the fastest growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", this BPO's primary outsourced service is performing audits of its global clients' logistics. As a service industry, manpower is considered the most important yet the most expensive resource in the company. Because of this, there is a need to maximize human resources so that people are effectively and efficiently utilized. The main purpose of the study is to optimize the current manpower resources through effective distribution and assignment of the different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was built to replicate the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated whose objective is to minimize direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model by comparing the actual and simulated results. The analysis of results revealed that there is low utilization of effective capacity because of the failure to determine the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators, which corresponds to an increase in the utilization of effective capacity of 14.94%. It is recommended that the excess 28 operators be reallocated to other areas of the department. Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit under the expected increasing demand. Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning
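A minimal Python sketch of such an allocation model is shown below, using scipy.optimize.linprog: it minimises direct labour cost subject to bill demand and operator-minute capacity per skill level. The bill types, processing times, wages, demands and capacities are invented placeholders, not the company's data.

```python
# Hedged sketch of the allocation model: decide how many bills of each type go to
# each skill level, minimising labour cost subject to demand and daily capacity.
from scipy.optimize import linprog

bill_types, skills = ["A", "B"], ["junior", "senior"]
minutes = {("A", "junior"): 6, ("A", "senior"): 4,      # observed time per bill
           ("B", "junior"): 10, ("B", "senior"): 7}
wage = {"junior": 0.8, "senior": 1.3}                   # cost per operator-minute
demand = {"A": 600, "B": 300}                           # bills per day
capacity = {"junior": 480 * 10, "senior": 480 * 6}      # operator-minutes per day

x = [(b, s) for b in bill_types for s in skills]        # decision variables
cost = [minutes[b, s] * wage[s] for b, s in x]

A_eq = [[1 if b == bt else 0 for b, s in x] for bt in bill_types]   # meet demand
b_eq = [demand[bt] for bt in bill_types]
A_ub = [[minutes[b, s] if s == sk else 0 for b, s in x] for sk in skills]  # capacity
b_ub = [capacity[sk] for sk in skills]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(dict(zip(x, res.x.round(0))), "cost:", round(res.fun, 1))
```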
Procedia PDF Downloads 520
14710 Formulation of a Stress Management Program for Human Error Prevention in Nuclear Power Plants
Authors: Hyeon-Kyo Lim, Tong-il Jang, Yong-Hee Lee
Abstract:
As in any nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. Thus, for accident prevention, it is indispensable to analyze and manage the influence of any factor which may raise the possibility of human error. Among many factors, stress has been reported to have a significant influence on human performance, and the stress level of a person may fluctuate over time. To handle this possibility over time, a robust stress management program is required, especially in nuclear power plants. Therefore, to reduce the possibility of human errors, this study aimed to develop a stress management program as a part of a Fitness-for-Duty (FFD) program for workers in nuclear power plants. Since the meaning of FFD may differ with research objectives, an appropriate definition of FFD was established in this study with special reference to human error prevention, and diverse stress factors were elicited for the management of human error susceptibility. In addition, with consideration of conventional FFD management programs, appropriate tests and interventions were introduced over the whole employment cycle, including selection and screening of workers, job allocation, job rotation, and disemployment, as well as an Employee Assistance Program (EAP). The results showed that most tools concentrated their weights mainly on common organizational factors such as Demands, Supports, and Relationships, in that order, which were referred to as major stress factors. Keywords: human error, accident prevention, work performance, stress, fatigue
Procedia PDF Downloads 327
14709 Cognitive Performance and Physiological Stress during an Expedition in Antarctica
Authors: Andrée-Anne Parent, Alain-Steve Comtois
Abstract:
The Antarctica environment can be a great challenge for human exploration. Explorers need to be focused on the task and require the physical abilities to succeed and survive in complete autonomy in this hostile environment. The aim of this study was to observe cognitive performance and physiological stress with a biomarker (cortisol) and hand grip strength during an expedition in Antarctica. A total of 6 explorers were in complete autonomous exploration on the Forbidden Plateau in Antarctica to reach unknown summits during a 30 day period. The Stroop Test, a simple reaction time, and mood scale (PANAS) tests were performed every week during the expedition. Saliva samples were taken before sailing to Antarctica, the first day on the continent, after the mission on the continent and on the boat return trip. Furthermore, hair samples were taken before and after the expedition. The results were analyzed with SPSS using ANOVA repeated measures. The Stroop and mood scale results are presented in the following order: 1) before sailing to Antarctica, 2) the first day on the continent, 3) after the mission on the continent and 4) on the boat return trip. No significant difference was observed with the Stroop (759±166 ms, 850±114 ms, 772±179 ms and 833±105 ms, respectively) and the PANAS (39.5 ±5.7, 40.5±5, 41.8±6.9, 37.3±5.8 positive emotions, and 17.5±2.3, 18.2±5, 18.3±8.6, 15.8±5.4 negative emotions, respectively) (p>0.05). However, there appears to be an improvement at the end of the second week. Furthermore, the simple reaction time was significantly lower at the end of the second week, a moment where important decisions were taken about the mission, vs the week before (416±39 ms vs 459.8±39 ms respectively; p=0.030). Furthermore, the saliva cortisol was not significantly different (p>0.05) possibly due to important variations and seemed to reach a peak on the first day on the continent. However, the cortisol from the hair pre and post expedition increased significantly (2.4±0.5 pg/mg pre-expedition and 16.7±9.2 pg/mg post-expedition, p=0.013) showing important stress during the expedition. Moreover, no significant difference was observed on the grip strength except between after the mission on the continent and after the boat return trip (91.5±21 kg vs 85±19 kg, p=0.20). In conclusion, the cognitive performance does not seem to be affected during the expedition. Furthermore, it seems to increase for specific important events where the crew seemed to focus on the present task. The physiological stress does not seem to change significantly at specific moments, however, a global pre-post mission measure can be important and for this reason, for long-term missions, a pre-expedition baseline measure is important for crewmembers.Keywords: Antarctica, cognitive performance, expedition, physiological adaptation, reaction time
Procedia PDF Downloads 246
14708 Surveillance of Artemisinin Resistance Markers and Their Impact on Treatment Outcomes in Malaria Patients in an Endemic Area of South-Western Nigeria
Authors: Abiodun Amusan, Olugbenga Akinola, Kazeem Akano, María Hernández-Castañeda, Jenna Dick, Akintunde Sowunmi, Geoffrey Hart, Grace Gbotosho
Abstract:
Introduction: Artemisinin-based Combination Therapy (ACTs) is the cornerstone malaria treatment option in most malaria-endemic countries. Unfortunately, the malaria control effort is constantly being threatened by resistance of Plasmodium falciparum to ACTs. The recent evidence of artemisinin resistance in East Africa and its possibility of spreading to other African regions portends an imminent health catastrophe. This study aimed at evaluating the occurrence, prevalence, and influence of artemisinin-resistance markers on treatment outcomes in Ibadan before and after post-adoption of artemisinin combination therapy (ACTs) in Nigeria in 2005. Method: The study involved day zero dry blood spot (DBS) obtained from malaria patients during retrospective (2000-2005) and prospective (2021) studies. A cohort in the prospective study received oral dihydroartemisinin-piperaquine and underwent a 42-day follow-up to observe treatment outcomes. Genomic DNA was extracted from the DBS samples using a QIAamp blood extraction kit. Fragments of P. falciparum kelch13 (Pfkelch13), P. falciparum coronin (Pfcoronin), P. falciparum multidrug resistance 2 (PfMDR2), and P. falciparum chloroquine resistance transporter (PfCRT) genes were amplified and sequenced on a sanger sequencing platform to identify artemisinin resistance-associated mutations. Mutations were identified by aligning sequenced data with reference sequences obtained from the National Center for Biotechnology Information. Data were analyzed using descriptive statistics and student t-tests. Results: Mean parasite clearance time (PCT) and fever clearance time (FCT) were 2.1 ± 0.6 days (95% CI: 1.97-2.24) and 1.3 ± 0.7 days (95% CI: 1.1-1.6) respectively. Four mutations, K189T [34/53(64.2%)], R255K [2/53(3.8%)], K189N [1/53(1.9%)] and N217H [1/53(1.9%)] were identified within the N-terminal (Coiled-coil containing) domain of Pfkelch13. No artemisinin resistance-associated mutation usually found within the β-propeller domain of the Pfkelch13 gene was found in these analyzed samples. However, K189T and R255K mutations showed a significant correlation with longer parasite clearance time in the patients (P<0.002). The observed Pfkelch13 gene changes did not influence the baseline mean parasitemia (P = 0.44). P76S [17/100 (17%)] and V62M [1/100 (1%)] changes were identified in the Pfcoronin gene fragment without any influence on the parasitological parameters. No change was observed in the PfMDR2 gene, while no artemisinin resistance-associated mutation was found in the PfCRT gene. Furthermore, a sample each in the retrospective study contained the Pfkelch13 K189T and Pfcoronin P76S mutations. Conclusion: The study revealed absence of genetic-based evidence of artemisinin resistance in the study population at the time of study. The high frequency of K189T Pfkelch13 mutation and its correlation with increased parasite clearance time in this study may depict geographical variation of resistance mediators and imminent artemisinin resistance, respectively. The study also revealed an inherent potential of parasites to harbour drug-resistant genotypes before the introduction of ACTs in Nigeria.Keywords: artemisinin resistance, plasmodium falciparum, Pfkelch13 mutations, Pfcoronin
Procedia PDF Downloads 52
14707 Enhancement of Energy Harvesting-Enabled Decode and Forward Cooperative Cognitive Radio System
Authors: Ojo Samson Iyanda, Adeleke Oluseye A., Ojo Oluwaseun A.
Abstract:
Recent developments in the wireless communication (WC) community have necessitated a paradigm shift towards more effective usage of network resources to provide better Quality of Service (QoS) to wireless subscribers. However, the daily increase in the number of users accessing WC services makes the frequency spectrum a valuable yet limited resource. The energy harvesting-enabled Decode and Forward Cooperative Cognitive Radio (DFCCR) used to address this problem faces significant challenges in achieving efficient performance, and signal insecurity arises from channel fading and the broadcast nature of the transmitted signal. Hence, this paper enhances the performance of the existing DFCCR. The PU signal is propagated from the source at different time slots using time diversity, and the different versions of the transmitted signal are received at the SU transceiver. The received signal at the SU transceiver is decoded, and the SU superimposes its own information on the decoded signal using the exclusive-OR (XOR) rule. A jamming signal is created at the SU node and added to the SU transmit signal. Outage Probability (OP) and Secrecy Capacity (SC) are derived to evaluate the performance of the proposed technique. The proposed energy harvesting-enabled DFCCR improves on the existing technique with a 65% reduction in OP and a 50% improvement in SC. Keywords: cognitive radio, RF energy harvesting, decode and forward, secrecy capacity
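For reference, the two metrics named above can be illustrated numerically under standard textbook definitions (an assumption, since the paper's own derivations are not reproduced here): secrecy capacity as the positive part of the main-link minus eavesdropper-link capacities, and outage probability as the chance that the instantaneous capacity falls below a target rate. A short Python sketch follows.

```python
# Numerical sketch under assumed textbook definitions:
#   Cs = [log2(1 + SNR_legitimate) - log2(1 + SNR_eavesdropper)]+
#   outage = P(log2(1 + SNR_legitimate) < R_target), over Rayleigh fading draws.
import numpy as np

rng = np.random.default_rng(0)
snr_d_mean, snr_e_mean, r_target = 10.0, 2.0, 1.0     # linear SNRs and rate (bps/Hz)

h_d = rng.exponential(snr_d_mean, 100000)             # Rayleigh fading -> exp. SNR
h_e = rng.exponential(snr_e_mean, 100000)

secrecy = np.maximum(np.log2(1 + h_d) - np.log2(1 + h_e), 0.0)
outage = np.mean(np.log2(1 + h_d) < r_target)

print("mean secrecy capacity:", round(secrecy.mean(), 2), "bps/Hz")
print("outage probability:", round(outage, 3))
```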
Procedia PDF Downloads 11
14706 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtration of earthquake strong ground motions by applying a wavelet transform is an approach towards reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each repetition introduces additional approximation error. In this paper, the strong ground motion of an earthquake is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sample shear and moment frames is carried out. The error associated with each wavelet is computed by comparing the dynamic response of the sample structures with the exact responses, which are computed by dynamic analysis of the structures using the unfiltered strong ground motion. Keywords: wavelet transform, computational error, computational duration, strong ground motion data
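As an illustration of the filtration step, and assuming the PyWavelets package rather than the authors' own tooling, the Python sketch below decomposes a (synthetic) ground-motion record with a discrete wavelet transform, removes the finest-scale detail coefficients, and reconstructs a smoother record.

```python
# Hedged sketch of one filtration pass, assuming PyWavelets: decompose, drop the
# finest detail band, reconstruct. The synthetic record merely stands in for an
# earthquake accelerogram such as the Northridge record.
import numpy as np
import pywt

dt = 0.02                                               # 50 Hz sampling (assumed)
t = np.arange(0, 20, dt)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

coeffs = pywt.wavedec(accel, "db4", level=3)            # discrete wavelet decomposition
coeffs[-1] = np.zeros_like(coeffs[-1])                  # remove finest detail band
filtered = pywt.waverec(coeffs, "db4")[: accel.size]

print("retained energy fraction:",
      round(float(np.sum(filtered**2) / np.sum(accel**2)), 3))
```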
Procedia PDF Downloads 380
14705 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome these limitations. Current areas of limitations include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One of the approaches commonly implemented is known as the ‘direct numerical simulation’, DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion. This approach is called the ‘Kolmogorov scale’ model. It is of interest to note that the Kolmogorov scale model must be captured throughout the domain of interest and at a correspondingly small-time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid set becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS related tasks. At this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS quality solutions at the scale required by the industry. To date, this technique has delivered preliminary results for both steady and unsteady, viscous and inviscid, compressible and incompressible, and for both high and low Reynolds number flow fields that are very accurate. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equation, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems related to flow that involves highly vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained in each of these solutions will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
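Purely for orientation on the test problem, the Python sketch below marches the viscous Burgers equation u_t + u·u_x = ν·u_xx with a plain explicit finite-difference scheme; it is a generic reference scheme, not the Integro-Differential Scheme (IDS), whose formulation belongs to the paper.

```python
# Generic explicit finite-difference march of the viscous Burgers equation.
# This is a reference scheme for orientation only, not the IDS.
import numpy as np

nx, nu = 201, 0.01
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)        # positive initial profile that steepens
dt = 0.2 * min(dx / np.max(np.abs(u)), dx**2 / (2 * nu))   # CFL and diffusion limits

for _ in range(2000):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - dt * un[1:-1] * (un[1:-1] - un[:-2]) / dx             # upwind convection
               + nu * dt * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx**2)  # diffusion
    u[0], u[-1] = u[1], u[-2]                # simple zero-gradient boundaries

print("max gradient after marching:",
      round(float(np.max(np.abs(np.diff(u) / dx))), 2))
```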
Procedia PDF Downloads 140
14704 Bulk Amounts of Linear and Cyclic Polypeptides on Our Hand within a Short Time
Abstract:
Polypeptides with defined peptide sequences have demonstrated remarkable applications in drug delivery, tissue engineering, sensing and catalysis. For cyclic polypeptides in particular, the distinctive topological architecture imparts many characteristic properties compared to linear polypeptides. Here, a facile and highly efficient strategy for the synthesis of linear and cyclic polypeptides is reported, using N-heterocyclic carbene (NHC)-mediated ring-opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCAs) in the presence or absence of a primary amine initiator. The polymerization proceeds rapidly in a quasi-living manner, allowing access to linear and cyclic polypeptides of well-defined chain length and narrow polydispersity, as evidenced by nuclear magnetic resonance (1H NMR and 13C NMR) spectra and size exclusion chromatography (SEC) analysis. The cyclic architecture of the polypeptides was further verified by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) and electrospray ionization (ESI) mass spectra, as well as by viscosity studies. This approach also simplifies workup procedures and makes bulk-scale synthesis possible, thereby opening avenues for practical use in diverse areas and for a new generation of polypeptide synthesis. Keywords: α-amino acid N-carboxyanhydrides, living polymerization, polypeptides, N-heterocyclic carbenes, ring-opening polymerization
Procedia PDF Downloads 168
14703 Partnering With Key Stakeholders for Successful Implementation of Inhaled Analgesia for Specific Emergency Department Presentations
Authors: Sarah Hazelwood, Janice Hay
Abstract:
Methoxyflurane is an inhaled analgesic administered via a disposable inhaler, which has been used in Australia for 40 years for the management of pain in children & adults. However, there is a lack of data for methoxyflurane as a frontline analgesic medication within the emergency department (ED). This study will investigate the usefulness of methoxyflurane in a private inner-city ED. The study concluded that the inclusion of all key stakeholders in the prescribing, administering & use of this new process led to comprehensive uptake & vastly positive outcomes for consumer & health professionals. Method: A 12-week prospective pilot study was completed utilizing patients presenting to the ED in pain (numeric pain rating score > 4) that fit the requirement of methoxyflurane use (as outlined in the Australian Prescriber information package). Nurses completed a formatted spreadsheet for each interaction where methoxyflurane was used. Patient demographics, day, time, initial numeric pain score, analgesic response time, the reason for use, staff concern (free text), & patient feedback (free text), & discharge time was documented. When clinical concern was raised, the researcher retrieved & reviewed patient notes. Results: 140 methoxyflurane inhalers were used. 60% of patients were 31 years of age & over (n=82) with 16% aged 70+. The gender split; 51% male: 49% female. Trauma-related pain (57%) saw the highest use of administration, with the evening hours (1500-2259) seeing the greatest numbers used (39%). Tuesday, Thursday & Sunday shared the highest daily use throughout the study. A minimum numerical pain score of 4/10 (n=13, 9%), with the ranges of 5 - 7/10 (moderate pain) being given by almost 50% of patients. Only 3 instances of pain scores increased post use of methoxyflurane (all other entries showed pain score < initial rating). Patients & staff noted obvious analgesic response within 3 minutes (n= 96, 81%, of administration). Nurses documented a change in patient vital signs for 4 of the 15 patient-related concerns; the remaining concerns were due to “gagging” on the taste, or “having a coughing episode”; one patient tried to leave the department before the procedure was attended (very euphoric state). Upon review of the staff concerns – no adverse events occurred & return to therapeutic vitals occurred within 10 minutes. Length of stay for patients was compared with similar presentations (such as dislocated shoulder or ankle fracture) & saw an average 40-minute decrease in time to discharge. Methoxyflurane treatment was rated “positively” by > 80% of patients – with remaining feedback related to mild & transient concerns. Staff similarly noted a positive response to methoxyflurane as an analgesic & as an added tool for frontline analgesic purposes. Conclusion: Methoxyflurane should be used on suitable patient presentations requiring immediate, short term pain relief. As a highly portable, non-narcotic avenue to treat pain this study showed obvious therapeutic benefit, positive feedback, & a shorter length of stay in the ED. By partnering with key stake holders, this study determined methoxyflurane use decreased work load, decreased wait time to analgesia, and increased patient satisfaction.Keywords: analgesia, benefits, emergency, methoxyflurane
Procedia PDF Downloads 125
14702 Investigation of Factors Affecting the Total Ionizing Dose Threshold of Electrically Erasable Read Only Memories for Use in Dose Rate Measurement
Authors: Liqian Li, Yu Liu, Karen Colins
Abstract:
The dose rate present in a seriously contaminated area can be indirectly determined by monitoring radiation damage to inexpensive commercial electronics, instead of deploying expensive radiation hardened sensors. EEPROMs (Electrically Erasable Read Only Memories) are a good candidate for this purpose because they are inexpensive and are sensitive to radiation exposure. When the total ionizing dose threshold is reached, an EEPROM chip will show signs of damage that can be monitored and transmitted by less susceptible electronics. The dose rate can then be determined from the known threshold dose and the exposure time, assuming the radiation field remains constant with time. Therefore, the threshold dose needs to be well understood before this method can be used. There are many factors affecting the threshold dose, such as the gamma ray energy spectrum, the operating voltage, etc. The purpose of this study was to experimentally determine how the threshold dose depends on dose rate, temperature, voltage, and duty factor. It was found that the duty factor has the strongest effect on the total ionizing dose threshold, while the effect of the other three factors that were investigated is less significant. The effect of temperature was found to be opposite to that expected to result from annealing and is yet to be understood.Keywords: EEPROM, ionizing radiation, radiation effects on electronics, total ionizing dose, wireless sensor networks
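The arithmetic behind the method is simple enough to state directly: under a constant field, the dose rate is the known threshold dose divided by the time taken for the chip to show damage. A two-line Python sketch with assumed example values follows.

```python
# Under an assumed constant field: dose_rate = threshold_dose / time_to_failure.
threshold_dose_gy = 120.0        # total ionizing dose at which the EEPROM fails (assumed)
hours_to_failure = 400.0         # observed time until the chip shows damage (assumed)

dose_rate_gy_per_h = threshold_dose_gy / hours_to_failure
print(f"estimated dose rate: {dose_rate_gy_per_h:.2f} Gy/h")
```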
Procedia PDF Downloads 187
14701 Exceptional Cost and Time Optimization with Successful Leak Repair and Restoration of Oil Production: West Kuwait Case Study
Authors: Nasser Al-Azmi, Al-Sabea Salem, Abu-Eida Abdullah, Milan Patra, Mohamed Elyas, Daniel Freile, Larisa Tagarieva
Abstract:
Well intervention was carried out along with Production Logging Tools (PLT) to detect the sources of water and to check well integrity for two West Kuwait oil wells that had started to produce 100% water. For the first well, PLT was run to check the perforations: no production was observed from the bottom two perforation intervals, and an intake of water was observed at the topmost perforation. A decision was then taken to extend the PLT survey from the tag depth to the Y-tool. For the second well, the aim was to detect the source of water and to check whether there was a leak in the 7'' liner in front of the upper zones. Data could not be recorded under flowing conditions due to casing deformation at almost 8300 ft. For the first well, the interpretation of the PLT and well integrity data showed a hole in the 9 5/8'' casing from 8468 ft to 8494 ft producing the majority of the water, 2478 bbl/d; the upper perforation from 10812 ft to 10854 ft was taking 534 stb/d. For the second well, there was a hole in the 7'' liner from 8303 ft MD to 8324 ft MD producing 8334.0 stb/d of water, with an intake zone at 10322.9-10380.8 ft MD taking the whole fluid. To restore oil production, a workover (W/O) rig was mobilized to prevent dump flooding, and during the workover the leaking interval was confirmed for both wells. The leakage was cement squeezed and tested at 900-psi positive pressure and 500-psi drawdown pressure; the cement squeeze job was successful. After the workover, the wells were kept producing for cleaning, and eventually the water cut reduced to 0%. Regular PLT and well integrity logs are required to study well performance and well integrity issues; proper cement behind casing is essential to well longevity and integrity; and the presence of the Y-tool is essential for monitoring well parameters and the ESP and for facilitating well intervention tasks. Cost and time optimization in oil and gas, and especially during rig operations, is crucial. The PLT data quality and the accuracy of the interpretations contributed greatly to identifying the leakage intervals accurately and, in turn, saved a lot of time and reduced the repair cost by almost 35 to 45%. The added value here related mainly to cost reduction and effective, quick decision making based on the economic environment. Keywords: leak, water shut-off, cement, water leak
Procedia PDF Downloads 119
14700 Demulsification of Oil from Produced Water Using Fibrous Coalescer
Authors: Nutcha Thianbut
Abstract:
In the petroleum drilling industry, besides oil and gas, water is also produced during petroleum production, with oil droplets dispersed in the water as an emulsion; this is commonly referred to as produced water. Most operators manage produced water by pumping it back into wells or catchment areas because it cannot be utilized further, but the cost of each reinjection is quite high, and surveys have found that the amount of water from the petroleum production process has increased every year. In this research, we study the removal of oil from produced water by a coalescer device using fibers from agricultural waste as the coalescing medium, as an alternative to reduce the cost of water management in the petroleum drilling industry. The objectives of this research are: 1. To study fiber pretreatment by a chemical process to improve the efficiency of oil-water separation; 2. To study and design a fiber-packed coalescer device to destroy the emulsion of crude oil in water; 3. To study the operating conditions of the coalescer device in destroying the emulsion using a fiber medium. The experiments are divided into two parts. The first part studies the absorbency of the fibers, comparing untreated fibers with fibers chemically treated with alkali as a function of time, and also examines the effect of the amount of fiber on its absorbency. The second part studies the separation of oil from produced water by the coalescer device using the fiber as the medium, in order to establish the optimum conditions of the coalescer device for further development and industrial application. Keywords: produced water, fiber, surface modification, coalescer
Procedia PDF Downloads 169