Search results for: SPD process
10959 Using the Yield-SAFE Model to Assess the Impacts of Climate Change on Yield of Coffee (Coffea arabica L.) Under Agroforestry and Monoculture Systems
Authors: Tesfay Gidey Bezabeh, Tânia Sofia Oliveira, Josep Crous-Duran, João H. N. Palma
Abstract:
Ethiopia's economy depends strongly on Coffea arabica production. Coffee, like many other crops, is sensitive to climate change. The urgent development and application of strategies against the negative impacts of climate change on coffee production is therefore important. An agroforestry-based system is one strategy that may ensure sustainable coffee production amidst the likely future impacts of climate change. This system incorporates trees that buffer climatic extremes, thereby modifying microclimate conditions. This paper assessed coffee production under 1) coffee monoculture and 2) coffee grown in an agroforestry system, under a) the current climate and b) two different future climate change scenarios. The study focused on two representative coffee-growing regions of Ethiopia with different soil, climate, and elevation conditions. A process-based growth model (Yield-SAFE) was used to simulate coffee production over a time horizon of 40 years. The climate change scenarios considered were representative concentration pathways (RCP) 4.5 and 8.5. The results revealed that in monoculture systems, current coffee yields are between 1200-1250 kg ha⁻¹ yr⁻¹, with an expected decrease of 4-38% and 20-60% under RCP 4.5 and 8.5, respectively. In agroforestry systems, however, current yields are between 1600-2200 kg ha⁻¹ yr⁻¹, and the decrease was lower, ranging between 4-13% and 16-25% under RCP 4.5 and 8.5, respectively. From these results, it can be concluded that coffee production under agroforestry systems is more resilient to future climate change, reinforcing the case for this type of management in the near future to adapt coffee production to the negative impacts of climate change.
Keywords: Albizia gummifera, CORDEX, Ethiopia, HADCM3 model, process-based model
Procedia PDF Downloads 116
10958 Amazonian Native Biomass Residue for Sustainable Development of Isolated Communities
Authors: Bruna C. Brasileiro, José Alberto S. Sá, Brigida R. P. Rocha
Abstract:
The development of the Amazon region has been tied to large-scale projects associated with economic cycles. These cycles originated from policies implemented by successive governments that exploited the region's resources yet have not been able to improve the local population's quality of life. The development strategies implanted were based on vertical, state-centred planning that neither knew nor showed interest in knowing local needs and potentialities. The future of this region is a challenge that depends on a model of development based on human progress and on the intelligent, selective, and environmentally safe exploitation of natural resources, anchored in renewable, non-polluting energy generation sources, a differential factor for attracting new investment in a context of global energy and environmental crisis. In this process, the planning and support of the Brazilian State, local government, and selective international partnerships are essential. Residual biomass utilization enables sustainable development by integrating the production chain with the energy generation process, which could improve employment conditions and the income of riverside communities. This research therefore discusses how the use of local residual biomass (açaí lumps) could be an important instrument of sustainable development for isolated communities located in the Alcobaça Sustainable Development Reserve (SDR), Tucuruí, Pará State. In this region, the most accessible energy sources for those who can pay are fossil fuels, which account for about 54% of final energy consumption. Integrating the açaí productive chain with a renewable energy source can also promote lower environmental impact and decrease fossil fuel use and carbon dioxide emissions.
Keywords: Amazon, biomass, renewable energy, sustainability
Procedia PDF Downloads 302
10957 Navigating the Assessment Landscape in English Language Teaching: Strategies, Challenges and Best Practices
Authors: Saman Khairani
Abstract:
Assessment is a pivotal component of the teaching and learning process, serving as a critical tool for evaluating student progress, diagnosing learning needs, and informing instructional decisions. In the context of English Language Teaching (ELT), effective assessment practices are essential to promote meaningful learning experiences and foster continuous improvement in language proficiency. This paper delves into various assessment strategies, explores associated challenges, and highlights best practices for assessing student learning in ELT. The paper begins by examining the diverse forms of assessment, including formative assessments that provide timely feedback during the learning process and summative assessments that evaluate overall achievement. Additionally, alternative methods such as portfolios, self-assessment, and peer assessment play a significant role in capturing various aspects of language learning. Aligning assessments with learning objectives is crucial. Educators must ensure that assessment tasks reflect the desired language skills, communicative competence, and cultural awareness. Validity, reliability, and fairness are essential considerations in assessment design. Challenges in assessing language skills, such as speaking, listening, reading, and writing, are discussed, along with practical solutions. Constructive feedback, tailored to individual learners, guides their language development. In conclusion, this paper synthesizes research findings and practical insights, equipping ELT practitioners with the knowledge and tools necessary to design, implement, and evaluate effective assessment practices. By fostering meaningful learning experiences, educators contribute significantly to learners' language proficiency and overall success.
Keywords: ELT, formative, summative, fairness, validity, reliability
Procedia PDF Downloads 54
10956 A Simple Chemical Approach to Regenerating Strength of Thermally Recycled Glass Fibre
Authors: Sairah Bashir, Liu Yang, John Liggat, James Thomason
Abstract:
Glass fibre is currently used as reinforcement in over 90% of all fibre-reinforced composites produced. The high rigidity and chemical resistance of these composites are required for optimum performance but unfortunately result in poor recyclability; when such materials are no longer fit for purpose, they are frequently deposited in landfill sites. Recycling technologies, for example, thermal treatment, can be employed to address this issue; temperatures typically between 450 and 600 °C are required to allow degradation of the rigid polymeric matrix and subsequent extraction of fibrous reinforcement. However, due to the severe thermal conditions utilised in the recycling procedure, glass fibres become too weak for reprocessing in second-life composite materials. In addition, more stringent legislation is being put in place regarding disposal of composite waste, and so it is becoming increasingly important to develop long-term recycling solutions for such materials. In particular, the development of a cost-effective method to regenerate the strength of thermally recycled glass fibres will have a positive environmental effect, as a reduced volume of composite material will be destined for landfill. This research study has demonstrated the positive impact of sodium hydroxide (NaOH) and potassium hydroxide (KOH) solutions, prepared at relatively mild temperatures and at concentrations of 1.5 M and above, on the strength of heat-treated glass fibres. As a result, alkaline treatments can potentially be applied to glass fibres recycled from composite waste to allow their reuse in second-life materials. The optimisation of the strength recovery process is being conducted by varying reaction parameters such as the molarity of the alkaline solution and the treatment time. It is believed that deep V-shaped surface flaws commonly exist on severely damaged fibre surfaces and are effectively removed to form smooth, U-shaped structures following alkaline treatment.
Although these surface flaws have long been believed to be present on glass fibres, they had not previously been observed directly; in this research investigation, they have been discovered through analytical techniques such as AFM (atomic force microscopy) and SEM (scanning electron microscopy). Reaction conditions such as the molarity of the alkaline solution affect the degree of etching of the glass fibre surface, and therefore the extent to which fibre strength is recovered. A novel method of determining the etching rate of glass fibres after alkaline treatment has been developed, and the data acquired can be correlated with strength. By varying reaction conditions such as alkaline solution temperature and molarity, the activation energy of the glass etching process and the reaction order can be calculated, respectively. The promising results obtained from NaOH and KOH treatments have opened an exciting route to strength regeneration of thermally recycled glass fibres, and the optimisation of the alkaline treatment process is being continued in order to produce recycled fibres with properties that match original glass fibre products. The reuse of such glass filaments indicates that closed-loop recycling of glass fibre reinforced composite (GFRC) waste can be achieved. In fact, the development of a closed-loop recycling process for GFRC waste is already underway in this research study.
Keywords: glass fibers, glass strengthening, glass structure and properties, surface reactions and corrosion
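The activation-energy calculation described above typically follows an Arrhenius analysis, fitting ln k against 1/T. A minimal sketch, with hypothetical etch-rate constants (the abstract does not report the measured rates):

```python
import math

def arrhenius_fit(temps_K, rates):
    """Least-squares fit of ln k = ln A - Ea/(R*T).
    Returns (Ea in kJ/mol, ln A)."""
    R = 8.314  # gas constant, J/(mol*K)
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R / 1000.0, ybar - slope * xbar

# Hypothetical etch-rate constants at three bath temperatures (assumed values):
temps = [298.0, 318.0, 338.0]     # K
rates = [1.2e-4, 6.0e-4, 2.4e-3]  # arbitrary rate units
Ea, lnA = arrhenius_fit(temps, rates)
print(round(Ea, 1), "kJ/mol")
```

With real etch-rate data at several temperatures, the same fit would yield the activation energy of the glass etching process.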
Procedia PDF Downloads 254
10955 The Application of Enzymes on Pharmaceutical Products and Process Development
Authors: Reginald Anyanwu
Abstract:
Enzymes are biological molecules that regulate the rate of almost all chemical reactions that take place within cells, and they have been widely used for product innovation. They are vital for life and serve a wide range of important functions in the body, such as aiding digestion and metabolism. The present study aimed to find out the extent to which these biological molecules have been utilized by the pharmaceutical, food and beverage, and biofuel industries in commercial and scale-up applications. Taking into account the escalating business opportunities in this vertical, biotech firms have also been penetrating the enzymes industry, especially the food segment. The aim of the study, therefore, was to find out how biocatalysis can be successfully deployed and how enzyme application can improve industrial processes. To achieve this purpose, the researcher focused on the analytical tools that are critical for the scale-up implementation of enzyme immobilization, to ascertain the extent to which product yield can be increased at minimum logistical burden and maximum market profitability, with minimal impact on the environment and user. The researcher collected data from four pharmaceutical companies located in Anambra and Imo states of Nigeria, to which questionnaire items were distributed. The researcher equally made personal observations on the applicability of these biological molecules to innovative products, given the shifting trend toward the consumption of healthy, quality food. In conclusion, it was found that enzymes have been widely used for product innovation, although their applications vary. It was also found that pivotal contenders in the enzymes market have lately been making heavy investments in the development of innovative product solutions.
It was recommended that the application of enzymes to innovative products be widely practiced.
Keywords: enzymes, pharmaceuticals, process development, quality food consumption, scale-up applications
Procedia PDF Downloads 138
10954 Effect of Naphtha Addition in a Cyclic Steam Stimulation Process on Reducing Heavy Oil Viscosity Using a Two-Level Factorial Design
Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz
Abstract:
The addition of solvents in cyclic steam stimulation is a technique that has shown an impact on the improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, at which point this ratio increases significantly. The mobility of the improved crude oil increases due to structural changes in its components, reflected in decreases in density and viscosity. In the present work, the effects of temperature, time, and weight percentage of naphtha were evaluated using a 2³ factorial design of experiments. From the analysis of variance (ANOVA) and the Pareto chart, it was possible to identify the effect on viscosity reduction. The experimental representation of the crude-steam-naphtha interaction was carried out in a batch reactor on a Colombian heavy oil of 12.8 °API and 3500 cP. The conditions of temperature, reaction time, and naphtha percentage were 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density, with values in the range of 0.9542 to 0.9414 g/cm³, while the viscosity decrease was on the order of 55 to 70%. On the other hand, simulated distillation results, according to ASTM D7169, revealed significant conversion of the 315 °C+ fraction. From nuclear magnetic resonance (NMR), infrared (FTIR), and ultraviolet-visible (UV-Vis) spectroscopy, it was determined that the increased yield of light fractions in the improved crude is due to the breakdown of alkyl chains. The methodology for cyclic steam injection with naphtha and laboratory-scale characterization can be considered a practical tool in improved recovery processes.
Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha
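As a rough illustration of how main effects are estimated in a two-level factorial design like the 2³ design above, the following sketch computes each factor's effect as the difference between the mean responses at its high and low coded levels. The response values are purely illustrative, not the study's measurements:

```python
def main_effects(runs):
    """Estimate main effects from a full two-level factorial design.

    runs: list of (levels, response), where levels is a tuple of -1/+1 codes.
    Effect of a factor = mean response at +1 minus mean response at -1.
    """
    n_factors = len(runs[0][0])
    effects = []
    for j in range(n_factors):
        hi = [y for lv, y in runs if lv[j] == +1]
        lo = [y for lv, y in runs if lv[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Illustrative (not measured) viscosity-reduction responses, in %, for the
# 2^3 design; factors coded as (temperature, time, naphtha wt%):
responses = {(-1, -1, -1): 55, (+1, -1, -1): 62, (-1, +1, -1): 58,
             (+1, +1, -1): 66, (-1, -1, +1): 60, (+1, -1, +1): 67,
             (-1, +1, +1): 63, (+1, +1, +1): 70}
runs = list(responses.items())
print(main_effects(runs))  # [temperature, time, naphtha] effects
```

With real data, these estimated effects would feed the ANOVA and Pareto chart mentioned in the abstract.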
Procedia PDF Downloads 173
10953 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem
Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães
Abstract:
This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment into a tree-type graph. The algorithm randomly expands a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned, and it ensures planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the length of the path between the root node and that neighbor, with this new connection, is less than the current length of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan shorter routes in fewer iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of G2 quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that satisfy a Pythagorean condition. Its advantage is that the resulting structure allows the curve length to be computed exactly, without the need for quadrature techniques to resolve integrals.
Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart
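The rewiring rule described above (reconnect a neighbor through the new node only when that shortens its path from the root) can be sketched as follows; the tree and coordinates are a contrived toy example, not part of the original work:

```python
import math

def rewire(nodes, parent, cost, new, neighbors):
    """RRT* rewiring step: reroute each neighbor through the newly
    inserted node whenever that shortens its path from the root."""
    for nb in neighbors:
        d = math.dist(nodes[new], nodes[nb])
        if cost[new] + d < cost[nb]:
            parent[nb] = new
            cost[nb] = cost[new] + d

# Toy tree: root 0, detour node 1, node 2 reached via the detour,
# and node 3 just inserted near the straight line from the root.
nodes = [(0.0, 0.0), (1.5, 0.0), (1.0, 1.0), (0.5, 0.5)]
parent = {0: None, 1: 0, 2: 1, 3: 0}
cost = {0: 0.0,
        1: 1.5,
        2: 1.5 + math.dist(nodes[1], nodes[2]),  # ~2.618 via the detour
        3: math.dist(nodes[0], nodes[3])}        # ~0.707
rewire(nodes, parent, cost, new=3, neighbors=[2])
print(parent[2], round(cost[2], 3))  # node 2 is now routed through node 3
```

In the full algorithm this step runs for every inserted node, which is what drives the path cost toward the optimum as iterations grow.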
Procedia PDF Downloads 165
10952 Impact of Map Generalization in Spatial Analysis
Authors: Lin Li, P. G. R. N. I. Pussella
Abstract:
When representing spatial data and their attributes on different types of maps, scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformations, such as elimination, simplification, smoothing, exaggeration, displacement, aggregation, and size reduction. As a result of these operations at different levels of data, the geometry of spatial features, such as length, sinuosity, orientation, perimeter, and area, is altered. This is most pronounced when preparing small-scale maps, since the cartographer does not have enough space to represent all the features on the map. In practice, however, when GIS users want to analyze a set of spatial data, they retrieve a data set and perform the analysis without considering very important characteristics such as scale, the purpose of the map, and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes GIS users zoom beyond the scale of the source map, violating the basic cartographic rule that a larger-scale map should not be created from a smaller-scale one. The main objective of this study was to examine the effect of map generalization on GIS analysis. Three digital maps at different scales (1:10000, 1:50000, and 1:250000) were used, prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka. Features common to all three maps were used, and an overlay analysis was repeated with different combinations of the data. Road, river, and land-use data sets were used for the study. A simple model, finding the best place for a wildlife park, was used to identify the effects.
The results show remarkable effects across the different degrees of generalization: the analysis returned different locations with different geometries as outputs. The study suggests that sound methods are needed to overcome this effect; as a solution, it is recommended to bring all data sets to a common scale before performing the analysis.
Keywords: generalization, GIS, scales, spatial analysis
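One of the generalization operators mentioned above, line simplification, can be illustrated with the classic Douglas-Peucker algorithm; the sketch below (with a made-up polyline) shows how simplification removes vertices and shortens a feature, which in turn changes any length-based analysis:

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    den = math.hypot(x2 - x1, y2 - y1)
    if den == 0:
        return math.dist(p, a)
    return abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / den

def douglas_peucker(points, tol):
    """Classic Douglas-Peucker line simplification."""
    if len(points) < 3:
        return list(points)
    i, dmax = 0, 0.0
    for k in range(1, len(points) - 1):
        d = perp_dist(points[k], points[0], points[-1])
        if d > dmax:
            i, dmax = k, d
    if dmax <= tol:
        return [points[0], points[-1]]  # drop all interior vertices
    left = douglas_peucker(points[:i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return left[:-1] + right

def length(points):
    return sum(math.dist(points[k], points[k + 1])
               for k in range(len(points) - 1))

# A sinuous (made-up) line: simplification shortens its measured length.
line = [(0, 0), (1, 0.4), (2, -0.3), (3, 0.5), (4, 0)]
simplified = douglas_peucker(line, tol=0.6)
print(len(simplified), round(length(line) - length(simplified), 3))
```

The lost length here is exactly the kind of geometric alteration that propagates into overlay analyses performed across maps of different scales.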
Procedia PDF Downloads 327
10951 An In-Depth Experimental Study of Wax Deposition in Pipelines
Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.
Abstract:
Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in crude oils, flow rates, pipe diameters, and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process of achieving a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 m long, equipped with a solids detection system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no paraffin or additive content. Tests were undertaken at different temperatures of the circulating and cooling fluids and at different flow conditions. Then, a solution formed by adding a paraffin to the diesel was considered, and tests varying flow rate and cooling rate were run again. Viscosity, density, WAT (Wax Appearance Temperature, by Differential Scanning Calorimetry, DSC), pour point, and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, the loop's temperature and pressure output signals were studied and compared with static laboratory WAT methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of wax deposition in the loop.
Keywords: paraffin deposition, flow assurance, chemical inhibitors, flow loop
Procedia PDF Downloads 103
10950 Energy Production with Closed Methods
Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani
Abstract:
In Kosovo, the electricity supply problem is large, and supply does not meet consumer demand. Most of the energy is produced by older thermal power plants, which are regarded as major environmental polluters. Our experiment is based on producing electricity by a closed method that avoids environmental pollution, using as fuel waste that would otherwise pollute the environment. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. A production line was designed for the simultaneous production of electricity and central heating. The results are the benefits of electricity as well as heat release for heating, with minimal expense and with 0% of gases released into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows the release of gas through pipes and filters during top-to-bottom combustion of the raw material in the boiler, followed by filtration of the gas through waste from wood processing (sawdust). The final product of this process is gas, which passes through the carburetor; this enables the gas combustion process, puts the internal combustion engine and the generator into operation, and produces electricity without releasing gases into the atmosphere. The obtained results show that the system provides energy stability without environmental pollution from toxic substances and waste, and with low production costs. The final results show that coal fuel yielded the most electricity and the highest heat release, followed by plastic waste, which also gave good results.
The results obtained during these experiments prove that the current problems of electricity and heating shortages can be met at lower cost while keeping a clean environment and sound waste management.
Keywords: energy, heating, atmosphere, waste, gasification
Procedia PDF Downloads 234
10949 Lead Removal from Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies
Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof
Abstract:
Exposure of galena (PbS), tealite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead as high as 69.46 mg/L. Lead is not easily removed from water by conventional treatment. A promising and emerging treatment technology for lead removal is the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results indicated that the maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm², an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and electrode arrangement = BP-S. These experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design, a response surface methodology (RSM). The four factors optimized were current density, electrode spacing, electrode arrangement, and liquid motion driving mode (LM). Based on the regression model and the analysis of variance (ANOVA) at 0.01%, the results showed that increasing the current density and using LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% at an electrode spacing of 0.38 cm, among other settings. Applying the predicted parameters, a lead removal efficiency of 100% was achieved.
The electrode and energy consumptions were 0.192 kg/m³ and 2.56 kWh/m³, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption dynamics were also random, spontaneous, and endothermic, with higher process temperatures enhancing adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, indicating adsorption on a heterogeneous surface and good adsorption efficiency by the Fe electrodes. Adsorption of Pb²⁺ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead, using the electrocoagulation method.
Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics
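The pseudo-second-order model mentioned above is commonly fitted in its linearized form t/qt = 1/(k·qe²) + t/qe, so qe comes from the slope and k from the intercept. A small sketch with synthetic data (not the study's measurements) that recovers the generating parameters:

```python
def pso_fit(ts, qts):
    """Linear fit of t/qt vs t for the pseudo-second-order model
    t/qt = 1/(k*qe**2) + t/qe; returns (qe, k)."""
    ys = [t / q for t, q in zip(ts, qts)]
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    intercept = ybar - slope * tbar
    qe = 1.0 / slope                 # slope = 1/qe
    k = slope ** 2 / intercept       # intercept = 1/(k*qe**2)
    return qe, k

# Synthetic uptake data generated from qe = 5.0 mg/g, k = 0.02 g/(mg*min):
true_qe, true_k = 5.0, 0.02
ts = [5.0, 10.0, 20.0, 40.0, 60.0]
qts = [true_k * true_qe ** 2 * t / (1 + true_k * true_qe * t) for t in ts]
qe, k = pso_fit(ts, qts)
print(round(qe, 3), round(k, 4))  # recovers the generating parameters
```

Fitting measured uptake data this way, and comparing the goodness of fit against the pseudo-first-order alternative, is the usual basis for the model selection reported in the abstract.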
Procedia PDF Downloads 147
10948 Machinability Analysis in Drilling Flax Fiber-Reinforced Polylactic Acid Bio-Composite Laminates
Authors: Amirhossein Lotfi, Huaizhong Li, Dzung Viet Dao
Abstract:
Interest in natural fiber-reinforced composites (NFRC) is growing steadily, both in academic research and in industrial applications, owing to advantages such as low cost, biodegradability, eco-friendliness, and relatively good mechanical properties. However, their widespread use is still considered challenging because of their non-homogeneous structure and the limited knowledge of their machinability characteristics and of the parameter settings needed to avoid defects associated with the machining process. The present work investigates the effect of cutting tool geometry and material on the drilling-induced delamination, thrust force, and hole quality produced when drilling a fully biodegradable flax/poly(lactic acid) composite laminate. Three drills with different geometries and materials were used at different drilling conditions to evaluate the machinability of the fabricated composites. The experimental results indicated that the choice of cutting tool, in terms of material and geometry, has a noticeable influence on the cutting thrust force and, subsequently, on drilling-induced damage. The lower thrust force and better hole quality were observed using the high-speed steel (HSS) drill, whereas the carbide drill (with a point angle of 130°) resulted in the highest thrust force. The carbide drill presented higher wear resistance and a more stable thrust force over the number of holes drilled, while the HSS drill showed the lower thrust force during the drilling process. Finally, within the selected cutting range, the delamination damage increased noticeably with feed rate and moderately with spindle speed.
Keywords: natural fiber reinforced composites, delamination, thrust force, machinability
Procedia PDF Downloads 127
10947 Deasphalting of Crude Oil by Extraction Method
Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov
Abstract:
Asphaltenes are the heavy fraction of crude oil, known in the oilfield for their ability to plug wells, surface equipment, and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). Metal content was analyzed by ICP-MS, and spectral characterization during deasphalting was performed by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the high distribution of heteroatoms (e.g., S, N) in asphaltenes causes problems such as environmental pollution, corrosion, and catalyst poisoning. The main objective of this work is to study the effect of the deasphalting process on crude oil, to improve its properties and the efficiency of recycling processes. Solvent extraction experiments using organic solvents were carried out on crude oil from JSC “Pavlodar Oil Chemistry Refinery”. Experimental results show that the deasphalting process also decreases the Ni and V content of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide, and mercaptans is absorption with chemical reagents directly in the oil residue during production, since asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining. Deasphalting is necessary to separate the light fraction from the heavy, metal-rich asphaltene part of crude oil; the oil is pretreated by deasphalting because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., the removal of V/Ni and of organic compounds with heteroatoms. Intramolecular complexes are relatively well researched using the example of the porphyrin complexes of vanadyl (VO²⁺) and nickel (Ni).
As a result of ICP-MS studies of V/Ni, the effect of different deasphalting solvents on metal extraction at the deasphalting stage was determined, and the best organic solvent was selected. Cyclohexane (C6H12) proved to be the best deasphalting solvent, removing, according to ICP-MS, 51.2% of V and 66.4% of Ni. This paper also presents the results of a study of the physical and chemical properties and FTIR spectral characteristics of the oil, with a view to establishing its hydrocarbon composition. The information about the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics. It can be useful in considering questions of the origin and geochemical conditions of accumulation of oil, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes. The role of deasphalted crude oil fractions in asphaltene stability is described.
Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy
Procedia PDF Downloads 241
10946 Developing a Cultural Policy Framework for Small Towns and Cities
Authors: Raymond Ndhlovu, Jen Snowball
Abstract:
It has long been known that the Cultural and Creative Industries (CCIs) have the potential to aid in the physical, social, and economic renewal and regeneration of towns and cities, hence their importance for regional development. The CCIs can act as a catalyst for activity and investment in an area because the 'consumption' of cultural activities leads to the use of other, non-cultural activities as well, for example, hospitality offerings including restaurants and bars, and public transport. 'Consumption' of cultural activities also leads to employment creation and diversification. However, CCIs tend to be clustered, especially around large cities. There is, moreover, a case for developing CCIs around smaller towns and cities: they do not rely on high-technology inputs or long supply chains, and their direct link to rural and isolated places makes them vital to regional development. However, there is currently little research on how to craft cultural policy for regions with smaller towns and cities. Using the Sarah Baartman District (SBDM) in South Africa as an example, this paper describes the process of developing cultural policy for a region that has potential, and existing, cultural clusters but currently no coherent policy on CCI development. The SBDM was chosen as a case study because it has no large cities but has some CCI clusters and has identified them as potential drivers of local economic development.
The process of developing cultural policy is discussed in stages: identification of the resources present, including human resources and soft and hard infrastructure; identification of clusters; analysis of CCI labour markets and ownership patterns; opportunities and challenges from the point of view of CCIs and other key stakeholders; alignment of regional policy aims with provincial and national policy objectives; and finally, design and implementation of a regional cultural policy.
Keywords: cultural and creative industries, economic impact, intrinsic value, regional development
Procedia PDF Downloads 232
10945 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)
Authors: Neda Zamani
Abstract:
The present study investigated the effectiveness of training courses and their limitations using the CIPP model, with Isfahan Oil Refinery as a case study. In terms of purpose, the study is applied research; in terms of data gathering, it is descriptive field-survey research. The population included participants in training courses, their supervisors, and experts of the training department. Probability-proportional-to-size (PPS) sampling was used; the sample comprised 195 course participants, 30 supervisors, and 11 training experts. Data were collected with a researcher-designed questionnaire and semi-structured interviews. Content validity was confirmed by training management experts, and reliability was established with a Cronbach’s alpha of 0.92. Data were analyzed with descriptive statistics (tables, frequencies, frequency percentages, and means) and inferential statistics (Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to determine the significance of differences between the groups’ opinions). Results indicated that all groups, i.e., participants, supervisors and training experts, firmly believe in the importance of training courses; however, course participants rated content, teacher, atmosphere and facilities, training process, managing process and product at a relatively appropriate level. The supervisors likewise rated the output at a relatively appropriate level, while the training experts rated content, teacher and managing processes as appropriate and above average.
Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company
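The reliability figure reported above can be reproduced with the standard Cronbach’s alpha formula. The sketch below uses a small set of invented Likert-scale responses, not the study’s data, purely to illustrate the computation.

```python
def cronbach_alpha(items):
    # items: list of per-respondent score tuples, one column per questionnaire item
    k = len(items[0])                       # number of items
    def var(xs):                            # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses (4 items, 6 respondents)
responses = [(4, 5, 4, 4), (3, 3, 3, 4), (5, 5, 4, 5),
             (2, 3, 2, 2), (4, 4, 5, 4), (3, 2, 3, 3)]
print(round(cronbach_alpha(responses), 2))  # → 0.93
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, so a figure of 0.92 indicates a highly reliable instrument.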
Procedia PDF Downloads 74
10944 Rheological Assessment of Oil Well Cement Paste Dosed with Cellulose Nanocrystal (CNC)
Authors: Mohammad Reza Dousti, Yaman Boluk, Vivek Bindiganavile
Abstract:
During the past few decades, oil and natural gas consumption has increased significantly. The limited amount of hydrocarbon resources on earth has led to a stronger desire for efficient drilling, well completion and extraction, with the least time, energy and money wasted. Well cementing, which fills the annulus between the casing string and the wellbore, is one of the most crucial steps in any well completion. However, since it takes place at the end of the drilling process, a satisfactory and acceptable job is rarely done, and a significant amount of time and energy is then spent making the required corrections or, in some cases, retrofitting the well. Oil well cement paste needs to be pumped during the cementing process; therefore, the rheological and flow behavior of the paste is of great importance. This study examines the effect of innovative cellulose-based nanomaterials on the flow properties of the resulting cementitious system. The cementitious paste developed in this research is composed of water, class G oil well cement, bentonite and cellulose nanocrystals (CNC). Bentonite is used as a cross-contamination component. Initially, the influence of CNC on the flow and rheological behavior of CNC and bentonite suspensions was assessed. Furthermore, the rheological behavior of oil well cement pastes dosed with CNC was studied using a steady-shear parallel-plate rheometer, and the results were compared to the rheological behavior of a neat oil well cement paste with no CNC. The parameters assessed were the yield shear stress and the viscosity. Significant changes in yield shear stress and viscosity were observed due to the addition of the CNC. Based on the findings of this study, the addition of a very small dosage of CNC to the oil well cement paste results in a more viscous cement slurry with a higher yield stress, demonstrating shear-thinning behavior.
Keywords: cellulose nanocrystal, flow behavior, oil well cement, rheology
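The two parameters assessed above, yield shear stress and (plastic) viscosity, are commonly extracted by fitting a Bingham-plastic model to the measured flow curve. A minimal sketch, using invented flow-curve data rather than the paper’s measurements:

```python
def bingham_fit(rates, stresses):
    # Linear least-squares fit of the Bingham model: tau = tau0 + mu_p * gamma_dot
    n = len(rates)
    sx, sy = sum(rates), sum(stresses)
    sxx = sum(r * r for r in rates)
    sxy = sum(r * s for r, s in zip(rates, stresses))
    mu_p = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # plastic viscosity (Pa·s)
    tau0 = (sy - mu_p * sx) / n                       # yield stress (Pa)
    return tau0, mu_p

# Hypothetical flow-curve data for a cement slurry (shear rate in 1/s, stress in Pa)
rates    = [5, 10, 20, 40, 80, 120]
stresses = [12.5, 15.0, 20.0, 30.0, 50.0, 70.0]
tau0, mu_p = bingham_fit(rates, stresses)
print(round(tau0, 1), round(mu_p, 2))  # → 10.0 0.5
```

A CNC-dosed paste would show a higher fitted tau0 and mu_p than a neat paste; real data would of course be fitted over the rheometer’s measured shear-rate range.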
Procedia PDF Downloads 229
10943 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation
Authors: Zhidong Zhang
Abstract:
This study established a mixed-methods model for assessing statistics learning with Bayesian network models. There are three variants of exploratory sequential designs; one of them has three linked steps: qualitative data collection and analysis; development of a quantitative measure, instrument, or intervention; and quantitative data collection and analysis. The study used a scoring model of analysis of variance (ANOVA) as the content domain, examining students’ learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx1 + γx2 + ε, served as a cognitive task for collecting data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process: the hierarchical structure was obtained from qualitative cognitive analysis. The data from students’ learning of the ANOVA score model provided evidence to the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students’ learning of the ANOVA score model were reported. In brief, this was a mixed-methods research design applied to statistics learning assessment. Mixed-methods designs expand the possibilities for researchers to establish advanced quantitative models starting from a theory-driven qualitative mode.
Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis
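The evidential reasoning described above, observed task outcomes updating belief in an underlying learning variable, can be illustrated with a two-level discrete network. All probability values below are illustrative assumptions, not the study’s calibrated model.

```python
# Two-level network: latent mastery M -> observed semantic (S) and performance (P)
# task outcomes. CPT values are illustrative assumptions only.
p_m = 0.5                                   # prior P(M = mastered)
p_s_given = {True: 0.9, False: 0.3}         # P(S correct | M)
p_p_given = {True: 0.8, False: 0.2}         # P(P correct | M)

def posterior_mastery(s_correct, p_correct):
    # Bayes rule with S and P conditionally independent given M
    def lik(m):
        ps = p_s_given[m] if s_correct else 1 - p_s_given[m]
        pp = p_p_given[m] if p_correct else 1 - p_p_given[m]
        return ps * pp
    num = lik(True) * p_m
    return num / (num + lik(False) * (1 - p_m))

print(round(posterior_mastery(True, True), 3))   # both tasks correct → 0.923
print(round(posterior_mastery(True, False), 3))  # semantic only     → 0.429
```

In the study’s hierarchical network the same propagation runs over many evidential variables derived from the decomposed semantic and performance steps.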
Procedia PDF Downloads 177
10942 MFCA: An Environmental Management Accounting Technique for Optimal Resource Efficiency in Production Processes
Authors: Omolola A. Tajelawi, Hari L. Garbharran
Abstract:
Revenue leakages are one of the major challenges manufacturers face in production processes, as much of the material input that should emerge from the lines as product is lost as waste. Rather than generating income from material input that is meant to end up as product, further losses are incurred as costs to manage the waste generated. In addition, due to the lack of a clear view of the flow of resources along the lines from input to output, acquiring information on the true cost of the waste generated has become a challenge. This has given birth to the conceptualization and implementation of waste minimization strategies by several manufacturing industries. This paper reviews the principles and applications of three environmental management accounting tools, namely Activity-Based Costing (ABC), Life-Cycle Assessment (LCA) and Material Flow Cost Accounting (MFCA), in the manufacturing industry and their effectiveness in curbing revenue leakages. The paper unveils the strengths and limitations of each of the tools, beaming a searchlight on the tool that could allow for optimal resource utilization, transparency in the production process, and improved cost efficiency. Findings from this review reveal that MFCA may offer superior advantages with regard to the provision of more detailed information (in both physical and monetary terms) on the flow of material inputs throughout the production process compared to the other environmental accounting tools. This paper therefore makes a case for the adoption of MFCA as a viable technique for the identification and reduction of waste in production processes, and also for effective decision making by production managers, financial advisors and other relevant stakeholders.
Keywords: MFCA, environmental management accounting, resource efficiency, waste reduction, revenue losses
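The core MFCA idea highlighted above, pricing material losses in both physical and monetary terms, reduces to a proportional cost allocation between the positive product and the material loss. A toy example with invented figures:

```python
# Illustrative MFCA split: material and system costs are allocated between the
# positive product and the material loss in proportion to the mass on each path.
input_mass_kg   = 1000.0
product_mass_kg = 820.0
loss_mass_kg    = input_mass_kg - product_mass_kg

material_cost = 5000.0   # cost of all input material (invented)
system_cost   = 2000.0   # labour, energy, depreciation for the step (invented)

loss_share = loss_mass_kg / input_mass_kg
loss_cost  = (material_cost + system_cost) * loss_share
print(f"Hidden cost of the 18% material loss: {loss_cost:.0f}")  # → 1260
```

Conventional costing would bury this 1260 inside the product cost; MFCA makes it visible as the monetary value of the waste stream, which is exactly the “true cost of waste” the abstract refers to.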
Procedia PDF Downloads 335
10941 Investigation and Comprehensive Benefit Analysis of 11 Typical Poplar-Based Agroforestry Models Based on Analytic Hierarchy Process in Anhui Province, Eastern China
Authors: Zhihua Cao, Hongfei Zhao, Zhongneng Wu
Abstract:
The development of poplar-based agroforestry is necessary given the timber market environment in China; it can promote the coordinated development of forestry and agriculture and yield remarkable ecological, economic and social benefits. Surveys of the main agroforestry models were carried out in the principal poplar planting areas of the Huaibei plain and the plain along the Yangtze River. Eleven typical poplar management models were selected: pure poplar forest, poplar-rape-soybean, poplar-wheat-soybean, poplar-rape-cotton, poplar-wheat, poplar-chicken, poplar-duck, poplar-sheep, poplar-Agaricus blazei, poplar-oil peony and poplar-fish, represented by M0-M10, respectively. Twelve indexes related to economic, ecological and social benefits (annual average cost, net income, ratio of output to investment, payback period of investment, land utilization ratio, utilization ratio of light energy, improvement and system stability of the ecological and production environment, product richness, labor capacity, cultural quality of the labor force, and sustainability) were screened out to carry out a comprehensive evaluation and analysis of the 11 typical agroforestry models based on the analytic hierarchy process (AHP). The results showed that the economic benefit of the agroforestry models ranked as follows: M8 > M6 > M9 > M7 > M5 > M10 > M4 > M1 > M2 > M3 > M0. The economic benefit of the poplar-A. blazei model was the highest (332,800 RMB/hm²), followed by the poplar-duck and poplar-oil peony models (109,820 RMB/hm² and 57,226 RMB/hm², respectively). The order of comprehensive benefit was: M8 > M4 > M9 > M6 > M1 > M2 > M3 > M7 > M5 > M10 > M0. The economic and comprehensive benefits of every agroforestry model were higher than those of the pure poplar forest. The comprehensive benefit of the poplar-A. blazei model was the highest, and that of the poplar-wheat model ranked second, although its economic benefit was not high. Next came the poplar-oil peony and poplar-duck models.
It is suggested that the poplar-wheat model be adopted in the plain along the Yangtze River, and that the whole-cycle mode of poplar-grain, poplar-A. blazei, or poplar-oil peony be adopted in the Huaibei plain, northern Anhui. Furthermore, wheat, rape and soybean are the main crops before the stand is closed; an agroforestry model of edible fungus or Chinese herbal medicine can be adopted once the stand is closed, in order to maximize the comprehensive benefit. The purpose of this paper is to provide a reference for forest farmers in the future selection of poplar agroforestry models and to provide basic data for the study of sustainable and efficient poplar agroforestry in Anhui province, eastern China.
Keywords: agroforestry, analytic hierarchy process (AHP), comprehensive benefit, model, poplar
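The AHP weighting underlying the benefit rankings above can be sketched as follows. The pairwise judgments here are invented for illustration (say, economic vs. ecological vs. social benefit) and are not the study’s matrices.

```python
# Priority weights for three benefit criteria from a pairwise comparison matrix,
# via power iteration on the principal eigenvector; judgments are illustrative.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]

def ahp_weights(A, iters=50):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    # Consistency check: lambda_max from A·w, CI = (lambda_max - n) / (n - 1)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    cr = ((lam - n) / (n - 1)) / 0.58       # random index RI = 0.58 for n = 3
    return w, cr

w, cr = ahp_weights(A)
print([round(x, 3) for x in w], round(cr, 3))
```

A consistency ratio below 0.1 is the usual acceptance threshold; the study would apply such weights to the twelve indexes before aggregating each model’s score.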
Procedia PDF Downloads 163
10940 Integrating Efficient Anammox with Enhanced Biological Phosphorus Removal Process Through Flocs Management for Sustainable Ultra-deep Nutrients Removal from Municipal Wastewater
Authors: Qiongpeng Dan, Xiyao Li, Qiong Zhang, Yongzhen Peng
Abstract:
The removal of nutrients from wastewater is of great significance for global wastewater recycling and sustainable reuse. Traditional nitrogen and phosphorus removal processes are heavily dependent on the input of aeration and carbon sources, which makes it difficult to meet the low-carbon goals of energy saving and emission reduction. This study reports a proof-of-concept demonstration of integrating anammox and enhanced biological phosphorus removal (EBPR) through flocs management in a single-stage hybrid bioreactor (biofilms and flocs) for simultaneous nitrogen and phosphorus removal (SNPR). Excellent removal efficiencies of nitrogen (97.7±1.3%) and phosphorus (97.4±0.7%) were obtained in the treatment of low C/N ratio (3.0±0.5) municipal wastewater. Interestingly, with the loss of flocs, anammox bacteria (Ca. Brocadia) were highly enriched in biofilms, with relative and absolute abundances reaching up to 12.5% and 8.3×10¹⁰ copies/g dry sludge, respectively. The anammox contribution to nitrogen removal also rose from 32.6±9.8% to 53.4±4.2%. Endogenous denitrification by flocs was proven to be the main contributor to both nitrite and nitrate reduction, and flocs loss significantly promoted nitrite flow towards anammox, facilitating AnAOB enrichment. Moreover, controlling the flocs’ solid retention time at around 8 days could maintain a low poly-phosphorus level of 0.02±0.001 mg P/mg VSS in the flocs, effectively addressing the additional phosphorus removal burden imposed by the enrichment of phosphorus-accumulating organisms in biofilms. This study provides a simple and feasible strategy for integrating anammox and EBPR for SNPR in mainstream municipal wastewater.
Keywords: anammox process, enhanced biological phosphorus removal, municipal wastewater, sustainable nutrients removal
Procedia PDF Downloads 50
10939 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the line’s operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, so substantial time is needed to reconfigure production lines when producing different products, which affects overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that respect field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. This method enhances the configuration efficiency of CNC production lines and establishes an adaptive capability that allows the production line to respond promptly to changes in demand. It minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, layout optimization, operations sequence
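At the heart of NSGA-II is fast non-dominated sorting of candidate solutions on the competing objectives, here minimizing completion time while maximizing utilization (encoded as minimizing its negative). A self-contained sketch with hypothetical layout scores:

```python
# Fast non-dominated sorting (the ranking step of NSGA-II) over candidate line
# layouts scored on two objectives, (completion_time, -utilization), both minimized.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    fronts = [[]]
    dominated = {i: [] for i in range(len(objs))}
    counts = [0] * len(objs)
    for i, oi in enumerate(objs):
        for j, oj in enumerate(objs):
            if dominates(oi, oj):
                dominated[i].append(j)
            elif dominates(oj, oi):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Hypothetical layouts: (completion time in h, -average machine utilization)
layouts = [(40, -0.80), (36, -0.70), (44, -0.90), (38, -0.60), (36, -0.85)]
print(non_dominated_sort(layouts))  # → [[2, 4], [0, 1], [3]]
```

The full algorithm then applies crowding-distance selection, crossover and mutation per generation; the first front here is the Pareto set from which the final layout is picked.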
Procedia PDF Downloads 18
10938 The Financial and Metallurgical Benefits of Niobium Grain Refined As-Rolled 460 MPa H-Beam to the Construction Industry in SE Asia
Authors: Michael Wright, Tiago Costa
Abstract:
The construction industry in SE Asia has relied on S355 (355 MPa) “as-rolled” H-beams for many years. They are an easily sourced, metallurgically simple, reliable product that designers, fabricators and constructors are all familiar with. However, as the global demand to better use our finite resources grows stronger, the need for an as-rolled S460 H-beam is becoming more apparent. The financial benefits of an as-rolled S460 H-beam are obvious: the S460 beam currently available and used is fabricated from rolled strip, and making an H-beam from three S460 strips requires costly equipment, valuable welding skills and production time, all of which can be in short supply or better used for other purposes. The metallurgical benefit of an as-rolled S460 H-beam is consistency of product. Fabricated H-beams have inhomogeneous areas where the strips have been welded together: parent metal, heat-affected zone and weld metal all in one body. They also rely heavily on the skill of the welder to guarantee a perfect, defect-free weld; if this is not achieved, the beam is intrinsically flawed and could fail in service. An as-rolled beam is a relatively homogeneous product, with optimum strength and ductility produced by delivering steel with as fine and uniform a cross-sectional grain size as possible. This is done by cost-effective alloy design coupled with proper metallurgical process control implemented within an existing mill’s equipment capability and layout. This paper is designed to highlight the benefits of bringing an as-rolled S460 H-beam to the construction marketplace in SE Asia, and hopefully to encourage the current as-rolled H-beam producers to rise to the challenge and produce an innovative, high-quality product for the local market.
Keywords: fine-grained, as-rolled, long products, process control, metallurgy
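The strengthening effect of grain refinement referred to above is conventionally described by the Hall-Petch relation, σy = σ0 + k·d^(-1/2). The constants in this sketch are illustrative values for a low-carbon structural steel, not measured values for the niobium-treated grade discussed.

```python
import math

def sigma_y(d_mm, sigma_0=70.0, k=20.0):
    # Hall-Petch: yield strength (MPa) rises as grain diameter d (mm) falls.
    # sigma_0 (friction stress) and k (Hall-Petch coefficient, MPa·mm^0.5)
    # are assumed, illustrative constants.
    return sigma_0 + k / math.sqrt(d_mm)

for d_um in (20, 10, 5):  # grain diameter in micrometres
    print(f"d = {d_um:2d} um -> sigma_y ~ {sigma_y(d_um / 1000.0):.0f} MPa")
# → 211, 270 and 353 MPa respectively
```

The inverse-square-root dependence is why niobium microalloying, which pins grain boundaries during rolling, is the economical route from the S355 to the S460 strength class.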
Procedia PDF Downloads 299
10937 The Implementation of Human Resource Information System in the Public Sector: An Exploratory Study of Perceived Benefits and Challenges
Authors: Aneeqa Suhail, Shabana Naveed
Abstract:
The public sector (in both developed and developing countries) has gone through various waves of radical reforms in recent decades. In Pakistan, under the influence of New Public Management (NPM) reforms, best practices of the private sector are being introduced in the public sector to modernize public organizations. The Human Resource Information System (HRIS) has been popular in the private sector and has proven to be a successful system; therefore, it is being adopted in the public sector too. However, implementation of private business practices in public organizations is very challenging due to differences in context, and becomes even more critical in Pakistan due to a centralizing tendency and lack of autonomy in public organizations. Adoption of HRIS by public organizations in Pakistan raises several questions: What challenges are faced by public organizations in the implementation of HRIS? Are benefits of HRIS such as efficiency, process integration and cost reduction achieved? How is the previous system improved by this change, and what are the impacts? Yet, it is an under-researched topic, especially in public enterprises. This study contributes to the existing body of knowledge by empirically exploring the benefits and challenges of implementing HRIS in public organizations. The research adopts a case-study approach and uses qualitative data based on in-depth interviews conducted at various levels in the hierarchy, including top management, departmental heads and employees. The unit of analysis is LESCO, the Lahore Electric Supply Company, a state-owned entity that generates, transmits and distributes electricity to four big cities in Punjab, Pakistan. The findings of the study show that LESCO has not achieved the benefits of HRIS as established in the literature. The implementation process has been slow and costly. Various functions of HR are still in isolation, and integration is a big challenge for the organization.
Although the data is automated, the previous system of manual record maintenance and paperwork is still in use, resulting in parallel practices. The findings also identified resistance to change from top management and the labor workforce, lack of commitment and technical knowledge, and costly vendors as major barriers to the effective implementation of HRIS. The paper suggests some potential actions to overcome these barriers and to enhance effective implementation of HR technology. The findings are explained in light of an institutional logics perspective: HRIS’s new logic of an automated and integrated HR system is in sharp contrast with the prevailing logic of process-oriented manual data maintenance, leading to resistance to change and deadlock.
Keywords: human resource information system, technological changes, state-owned enterprise, implementation challenges
Procedia PDF Downloads 144
10936 Redesigning Clinical and Nursing Informatics Capstones
Authors: Sue S. Feldman
Abstract:
As clinical and nursing informatics mature, an area that has received a lot of attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of a prospective employee’s knowledge and skills as an indication of employability. Capstones can be organized in many ways: as a single course over a single semester or multiple courses over multiple semesters; as a targeted demonstration of skills or a synthesis of prior knowledge and skills; mentored by a single person or by various people; submitted as an assignment or presented in front of a panel. Because of their potential to enhance the educational experience, and as a mechanism for applying knowledge and demonstrating skills, a rigorous capstone can accelerate a graduate’s potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) could feel the external forces of a maturing clinical and nursing informatics discipline. While the program had offered a capstone course for many years, the course lacked the depth of knowledge and demonstration of skills sought by those hiring in a maturing informatics field. Since the program is online, all capstones have always been completed in the online environment. While this modality did not change, other aspects of instruction did. Before 2016, instruction was self-guided: students checked in with a single instructor, who monitored progress across all capstones toward a PowerPoint and written-paper deliverable. At the time, enrollment was low, and the discipline’s maturity had not yet pushed hard enough.
By 2017, doubled enrollment and the increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would gain and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a three-course sequence (meaning it lasted about 10 months instead of 14 weeks), the deliverables were divided into many chunks, and each faculty member advised a cadre of about five students through the capstone process. The literature suggests that chunking, i.e., breaking up complex projects (the capstone in one summer) into smaller, more manageable pieces (chunks of the capstone across three semesters), can increase and sustain learning while allowing for increased rigor. With this change, teaching responsibility was shared across the faculty, with each semester’s course taught by a different faculty member. This facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty seemed like the right thing to do: it not only shared the load but also shared the success of students. Furthermore, it meant that students could be placed with an academic advisor with expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills.
Keywords: capstones, clinical informatics, health informatics, informatics
Procedia PDF Downloads 129
10935 Bundling of Transport Flows: Adoption Barriers and Opportunities
Authors: Vandenbroucke Karel, Georges Annabel, Schuurman Dimitri
Abstract:
In the past years, bundling of transport flows, whether or not implemented in an intermodal process, has emerged as a promising concept in the logistics sector. Bundling of transport flows is a process whereby two or more shippers decide to synergize their shipped goods over a common transport lane. Promoted by the European Commission, several programs have been set up and have shown their benefits. Bundling promises both shippers and logistics service providers (LSPs) economic, societal and ecological benefits. By bundling transport flows and thus reducing truck (or other carrier) capacity, the problems of driver shortage, increased fuel prices, mileage charges and restricted hours of service on the road are mitigated. In theory, the advantages of bundled transport exceed the drawbacks; in practice, however, adoption among shippers remains low. In fact, bundling is mentioned as a disruptive process in the rather traditional logistics sector. In this context, a Belgian company asked iMinds Living Labs to set up a Living Lab research project to investigate how the uptake of bundling of transport flows can be accelerated and to check whether an online data-sharing platform can overcome the adoption barriers. The Living Lab research was conducted in 2016 and combined quantitative and qualitative end-user and market research. Concretely, extensive desk research was combined with insights from expert interviews with four consultants active in the Belgian logistics sector and in-depth interviews with logistics professionals working for shippers (N=10) and LSPs (N=3). In the article, we present findings showing that several factors slow down the uptake of bundling of transport flows: shippers are hesitant to change how they currently work, they are hesitant to work together with other shippers, and several practical challenges impede them from doing so.
We also present some opportunities that can accelerate the adoption of bundling of transport flows. First, there is not enough support from governmental and commercial organizations. Second, there is a chicken-and-egg problem: too few interested parties will lead to no, or very few, matching lanes, and shippers are therefore reluctant to partake in these projects because the benefits have not yet been proven. Third, the incentive is not big enough for shippers: road transport organized by the shipper individually is still seen as the easiest and cheapest solution. A solution to the abovementioned challenges might be found in the Belgian company’s online data-sharing platform. The added value of this platform is showing shippers possible matching lanes, without the shippers having to invest time in negotiating and networking with other shippers and running the risk of not finding a match. The interviewed shippers and experts indicated that the online data-sharing platform is a very promising concept that could accelerate the uptake of bundling of transport flows.
Keywords: adoption barriers, bundling of transport, shippers, transport optimization
Procedia PDF Downloads 200
10934 Forensic Applications of Quantum Dots
Authors: Samaneh Nabavi, Hadi Shirzad, Somayeh Khanjani, Shirin Jalili
Abstract:
Quantum dots (QDs) are semiconductor nanocrystals that exhibit intrinsic optical and electrical properties that are size-dependent due to the quantum confinement effect. Quantum confinement arises because in bulk semiconductor material the electronic structure consists of continuous bands; as the size of the semiconductor material decreases and its radius falls below the Bohr exciton radius (the distance between the electron and the hole), discrete energy levels result. Consequently, QDs have a broad absorption range and a narrow emission that correlates with the band gap energy (E), and hence with QD size. QDs can thus be tuned to give the desired wavelength of fluorescence emission. Due to their unique properties, QDs have attracted considerable attention in different scientific areas, and in recent years they have been considered for forensic applications. The ability of QDs to fluoresce up to 20 times brighter than available fluorescent dyes makes them an attractive nanomaterial for enhancing the visualization of latent or poorly developed fingermarks. Furthermore, attention has been paid to the potential applications of QDs in the detection of nitroaromatic explosives, such as TNT, based on direct fluorescence quenching of QDs, electron-transfer quenching, or fluorescence resonance energy transfer. DNA analysis is tightly associated with forensic applications in molecular diagnostics. The amount of DNA acquired at a crime scene is inherently limited, and this limited amount of human DNA has to be quantified accurately after DNA extraction. Accordingly, highly sensitive detection of human genomic DNA is an essential issue for forensic study. QDs also have a variety of advantages as an emission probe in forensic DNA quantification.
Keywords: forensic science, quantum dots, DNA typing, explosive sensor, fingermark analysis
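The size dependence of the band gap described above is often estimated with the Brus effective-mass model. The sketch below uses typical literature parameters for CdSe as assumptions; it illustrates the trend (smaller dot, wider gap, bluer emission), not a calibrated prediction.

```python
import math

# Brus-model estimate of the quantum-confined band gap vs. particle radius.
# All material parameters are typical literature values for CdSe (assumptions).
h    = 6.626e-34        # Planck constant, J·s
e    = 1.602e-19        # elementary charge, C
m0   = 9.109e-31        # electron rest mass, kg
eps0 = 8.854e-12        # vacuum permittivity, F/m

Eg_bulk = 1.74 * e      # bulk CdSe band gap, J
me, mh  = 0.13 * m0, 0.45 * m0   # effective electron / hole masses
eps_r   = 10.6          # relative permittivity

def gap_eV(radius_nm):
    R = radius_nm * 1e-9
    confinement = (h ** 2 / (8 * R ** 2)) * (1 / me + 1 / mh)
    coulomb = 1.8 * e ** 2 / (4 * math.pi * eps_r * eps0 * R)
    return (Eg_bulk + confinement - coulomb) / e

for r in (1.5, 2.5, 3.5):
    E = gap_eV(r)
    print(f"R = {r} nm: gap ~ {E:.2f} eV, emission ~ {1239.84 / E:.0f} nm")
```

The 1/R² confinement term dominates, which is why a few nanometres of size difference tunes the emission across the whole visible range.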
Procedia PDF Downloads 853
10933 Simultaneous Removal of Phosphate and Ammonium from Eutrophic Water Using Dolochar Based Media Filter
Authors: Prangya Ranjan Rout, Rajesh Roshan Dash, Puspendu Bhunia
Abstract:
With the aim of enhancing nutrient (ammonium and phosphate) removal from eutrophic wastewater at reduced cost, a novel media-based multistage bio-filter with a drop aeration facility was developed in this work. The bio-filter was packed with a discarded sponge iron industry by-product, ‘dolochar’, primarily to remove phosphate via a physicochemical approach. In the multistage bio-filter, drop aeration was achieved by the percolation of the gravity-fed wastewater through the filter media and its dropping down from stage to stage. Ammonium present in the wastewater was adsorbed by the filter media and the biomass grown on it and subsequently converted to nitrate through biological nitrification under the aerobic conditions realized by drop aeration. The performance of the bio-filter in treating real eutrophic wastewater was monitored for a period of about 2 months. The influent phosphate concentration was in the range of 16-19 mg/L, and the ammonium concentration was in the range of 65-78 mg/L. The average nutrient removal efficiencies observed during the study period were 95.2% for phosphate and 88.7% for ammonium, with mean final effluent concentrations of 0.91 and 8.74 mg/L, respectively. Furthermore, the subsequent release of nutrients from the saturated filter media after completion of the treatment process was studied, and thin-layer funnel analytical test results reveal the slow nutrient release of the spent dolochar, thereby recommending its potential agricultural application. Thus, the bio-filter displays immense prospects for treating real eutrophic wastewater, significantly decreasing the level of nutrients, keeping effluent nutrient concentrations within the permissible limits and, more importantly, facilitating the conversion of waste materials into usable ones.
Keywords: ammonium removal, phosphate removal, multi-stage bio-filter, dolochar
Procedia PDF Downloads 193
10932 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures
Authors: Michał Lidner, Zbigniew Szcześniak
Abstract:
The ability to properly estimate blast load overpressure plays an important role in the safety design of buildings. The study of blast loading on structural elements has been explored for many years. However, in many literature reports, the shock wave overpressure is approximated by a simplified triangular or exponential distribution in time, which introduces errors when comparing the real and numerical reactions of elements. It is nonetheless possible to set a function closer to the real blast load overpressure versus time. The paper presents a method of numerical analysis of the phenomenon of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer with respect to the adiabatic process rule. A system of three equations (conservation of mass, momentum, and energy) describes the flow of a volume of gaseous medium in the area remote from building compartments, which can inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free-field explosion, an explosion inside a rigid (insusceptible) steel tube (the 1D case), and an explosion inside a rigid cube (the 3D case). The results of the numerical analysis were compared with literature reports. Values of impulse, pressure, and its duration were studied. Overall, good convergence of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, analyses of the dynamic response of one of the considered structural elements were made.
Keywords: adiabatic process, air shock wave, explosive, finite volume method
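For an inviscid compressible flow, the three conservation laws mentioned can be written in one spatial dimension as the standard Euler system (a generic textbook form; the paper's exact formulation, including its heat-loss treatment, may differ):

```latex
\begin{aligned}
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} &= 0
  &&\text{(mass)} \\
\frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^{2} + p)}{\partial x} &= 0
  &&\text{(momentum)} \\
\frac{\partial E}{\partial t} + \frac{\partial \bigl[(E + p)\,u\bigr]}{\partial x} &= 0,
  \qquad E = \frac{p}{\gamma - 1} + \tfrac{1}{2}\rho u^{2}
  &&\text{(energy)}
\end{aligned}
```

Here ρ is density, u velocity, p pressure, E total energy per unit volume, and γ the ratio of specific heats; the ideal-gas closure for E is what lets the three equations be solved as a complete system.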
Procedia PDF Downloads 190
10931 Simulation-Based Evaluation of Indoor Air Quality and Comfort Control in Non-Residential Buildings
Authors: Torsten Schwan, Rene Unger
Abstract:
Simulation of thermal and electrical building performance is increasingly becoming part of an integrative planning process. Growing requirements on energy efficiency, the integration of volatile renewable energy, smart control, and storage management often pose tremendous challenges for building engineers and architects. This mainly affects commercial or non-residential buildings, whose energy consumption characteristics differ significantly from residential ones. This work focuses on the many-objective optimization problem of indoor air quality and comfort, especially in non-residential buildings. Based on a brief description of the intermediate dependencies between different requirements on indoor air treatment, it extends existing Modelica-based building physics models with additional system states to adequately represent indoor air conditions. Interfaces to the corresponding HVAC (heating, ventilation, and air conditioning) system and control models enable closed-loop analyses of occupants' requirements as well as energy efficiency and profitability aspects. A complex application scenario of a nearly-zero-energy school building shows the advantages of the presented evaluation process for engineers and architects. Clear identification of air quality requirements in individual rooms, together with a realistic model-based description of occupants' behavior, helps to optimize the HVAC system already in early design stages. Building planning processes can be greatly improved and accelerated by increasing the integration of advanced simulation methods. Such methods provide suitable answers to engineers' and architects' questions regarding an increasingly broad and complex variety of suitable energy supply solutions.
Keywords: indoor air quality, dynamic simulation, energy efficient control, non-residential buildings
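The closed-loop idea — an air quality state driving a control model that feeds back into the building physics — can be illustrated with a toy example. The sketch below is not the authors' Modelica model: it is a minimal single-zone CO2 balance in Python with on/off demand-controlled ventilation, and every constant (room volume, occupant CO2 generation, airflow, threshold) is a hypothetical assumption:

```python
# All parameters are illustrative assumptions, not values from the paper.
CO2_OUTDOOR = 420.0   # ppm, outdoor background
CO2_LIMIT = 1000.0    # ppm, comfort threshold for switching ventilation on
VOLUME = 150.0        # m^3, hypothetical classroom volume
GEN_RATE = 0.5        # m^3/h of CO2 generated by the occupants
VENT_ON = 900.0       # m^3/h of outdoor airflow when ventilation runs
DT = 1.0 / 60.0       # time step of one minute, expressed in hours

def step(co2_ppm: float, vent_on: bool) -> float:
    """Advance the zone CO2 concentration one step (well-mixed assumption)."""
    airflow = VENT_ON if vent_on else 0.0
    # Occupant source term plus dilution toward outdoor concentration.
    d_co2 = (GEN_RATE * 1e6 / VOLUME
             + airflow / VOLUME * (CO2_OUTDOOR - co2_ppm)) * DT
    return co2_ppm + d_co2

co2 = CO2_OUTDOOR
for minute in range(8 * 60):      # simulate one school day
    vent = co2 > CO2_LIMIT        # simple demand-controlled on/off rule
    co2 = step(co2, vent)
print(f"CO2 after 8 h: {co2:.0f} ppm")  # hovers near the 1000 ppm threshold
```

With these numbers, the controller holds the zone near the threshold; in the paper's setting the same loop structure is closed through validated Modelica building physics and HVAC models rather than a toy balance.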
Procedia PDF Downloads 231
10930 Characterization of Candlenut Shells and Its Application to Remove Oil and Fine Solids of Produced Water in Nutshell Filters of Water Cleaning Plant
Authors: Annur Suhadi, Haris B. Harahap, Zaim Arrosyidi, Epan, Darmapala
Abstract:
Oilfields under waterflood often face the problem of plugging of injectors, either by internal filtration or by an external filter cake built up inside pore throats. The content of suspended solids must be reduced to the required filtration level, since corrective action against plugging is costly. The performance of nutshell filters, where filtration takes place, is good with pecan and walnut shells. Candlenut shells were used instead of pecan and walnut shells since they are abundant in Indonesia, Malaysia, and East Africa. Physical and chemical properties of walnut, pecan, and candlenut shells were tested, and the results were compared. Testing with full-scale nutshell filters was conducted at the designed flux rate to determine the removal of oil content, turbidity, and suspended solids. The performance of candlenut shells, deep-bedded in the nutshell filters for the filtration process, was monitored. Cleaned water leaving the nutshell filters had total suspended solids of 17 ppm, while the oil content was reduced to 15.1 ppm. With candlenut shells, turbidity was below the specification for injection water of less than 10 Nephelometric Turbidity Units (NTU), ranging from 1.7 to 5.0 NTU at various dates of operation. Walnut, pecan, and candlenut shells had moisture contents of 8.98 wt%, 10.95 wt%, and 9.95 wt%, respectively, and their porosity was significantly affected by moisture content. Candlenut shells had a toluene solubility of 7.68 wt%, much higher than that of walnut shells, reflecting greater crude oil adsorption. The hardness of candlenut shells was 2.5-3 Mohs, close to the hardness of walnut shells, which is an advantage in guaranteeing filter cake cleaning by the fluidization process during backwashing.
Keywords: candlenut shells, filtration, nutshell filter, pecan shells, walnut shells
Procedia PDF Downloads 109