Search results for: process variation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17000

10910 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are increasingly concerned with adopting their own strategies for greater efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that fosters changes to the company’s operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization’s competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in products and processes helps companies gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in a company’s operations, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency and sustainability). 
The constructs of radical and incremental operational changes, each treated as a single variable, were subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normality and outliers were checked. Moreover, the unidimensionality, reliability and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the findings of the present study, radical and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is the radical operational changes, in both process and product, that contribute most to firm efficiency, while their influence on sustainability is low albeit statistically significant. Conversely, incremental operational changes influence sustainability more than firms’ efficiency. From the above, it is apparent that embodying changes in a firm’s products and processes has direct and positive consequences for what the firm achieves from an efficiency and sustainability perspective.

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 123
10909 Hidden Hot Spots: Identifying and Understanding the Spatial Distribution of Crime

Authors: Lauren C. Porter, Andrew Curtis, Eric Jefferis, Susanne Mitchell

Abstract:

A wealth of research has been generated examining the variation in crime across neighborhoods. However, there is also a striking degree of crime concentration within neighborhoods. A number of studies show that a small percentage of street segments, intersections, or addresses account for a large portion of crime. Not surprisingly, a focus on these crime hot spots can be an effective strategy for reducing community-level crime and related ills, such as health problems. However, this research is also limited in an important respect. Studies tend to use official data to identify hot spots, such as 911 calls or calls for service. While call data may be more representative of the actual level and distribution of crime than some other official measures (e.g. arrest data), they still suffer from the 'dark figure of crime.' That is, there is most certainly a degree of error between crimes that occur and crimes that are reported to the police. In this study, we present an alternative method of identifying crime hot spots that does not rely on official data. In doing so, we highlight the potential utility of neighborhood insiders in identifying and understanding crime dynamics within geographic spaces. Specifically, we use spatial video and geo-narratives to record the crime insights of 36 police officers, ex-offenders, and residents of a high-crime neighborhood in northeast Ohio. Spatial mentions of crime are mapped to identify participant-identified hot spots, and these are juxtaposed with calls-for-service (CFS) data. While there are bound to be differences between these two sources of data, we find that one location in particular, a corner store, emerges as a hot spot for all three groups of participants. Yet it does not emerge when we examine CFS data. 
A closer examination of the space around this corner store and a qualitative analysis of narrative data reveal important clues as to why this store may indeed be a hot spot, but not generate disproportionate calls to the police. In short, our results suggest that researchers who rely solely on official data to study crime hot spots may risk missing some of the most dangerous places.
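The kind of concentration the abstract describes, where a handful of locations account for a disproportionate share of incidents, can be illustrated with a simple tally. The location labels and the 30% flagging threshold below are invented for illustration, not taken from the study:

```python
from collections import Counter

# Hypothetical incident records: each entry is the street segment or
# address where one incident occurred (labels are illustrative only).
incidents = [
    "corner_store", "corner_store", "corner_store", "corner_store",
    "segment_A", "segment_A", "segment_B", "segment_C",
    "segment_D", "segment_E",
]

counts = Counter(incidents)
total = len(incidents)

# Flag "hot spots": locations holding a disproportionate share of incidents.
hot_spots = {loc: n / total for loc, n in counts.items() if n / total >= 0.3}

print(hot_spots)  # the corner store alone accounts for 40% of incidents
```

A real analysis would tally geocoded CFS records or narrative mentions per street segment, but the principle of a small set of locations dominating the distribution is the same.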

Keywords: crime, narrative, video, neighborhood

Procedia PDF Downloads 224
10908 Thermal Method Production of the Hydroxyapatite from Bone By-Products from Meat Industry

Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak

Abstract:

Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One such source could be by-products from the meat industry, which contain a considerable quantity of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bones, is a leading material applied in bone surgery and in stomatology. It is a biocompatible, bioactive and osteoinductive material. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp, called bone sludge, which was formed as waste in the deproteinization of bones, a process in which a protein hydrolysate is the main product. Hydroxyapatite was obtained by calcination in a chamber kiln with electric heating, in an air atmosphere, in two stages. In the first stage, the material was calcined at 600°C for 3 hours. In the second stage, the unified material was calcined at three different temperatures (750°C, 850°C and 950°C), holding the material at the maximum temperature for 3.0 hours. Bone sludge: Pork bones from the partitioning of meat were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at 130-135°C under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the protein-fat hydrolysate solution passed into a decanter, where the bone sludge was separated from it. Results of the study: The phase composition was analyzed by X-ray diffraction. Hydroxyapatite was the only crystalline phase observed in all the calcination products, and XRD investigation showed that the degree of crystallinity of the hydroxyapatite increased with calcination temperature. Conclusion: The analyses showed that the phosphorus content is around 12%, whereas the calcium content averages 28%. The calcination of bone waste at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. X-ray investigations confirmed that hydroxyapatite is the main component of the calcination products and that its degree of crystallinity increased with calcination temperature. The contents of calcium and phosphorus increased distinctly with calcination temperature, whereas the content of acid-soluble phosphorus decreased. This may be connected with the higher degree of crystallinity and more stable structure of the material obtained at higher temperatures. Acknowledgements: The authors would like to thank the National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project.
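As a quick plausibility check on the reported elemental contents, the implied Ca/P molar ratio can be computed from the mass fractions. This is a back-of-the-envelope sketch using standard atomic masses, not a calculation taken from the paper:

```python
# Approximate average mass fractions reported in the abstract
ca_wt, p_wt = 28.0, 12.0   # wt% calcium and phosphorus
M_CA, M_P = 40.08, 30.97   # standard molar masses, g/mol

# Molar Ca/P ratio implied by the mass fractions
ca_p_molar = (ca_wt / M_CA) / (p_wt / M_P)
print(round(ca_p_molar, 2))  # ~1.80, close to stoichiometric hydroxyapatite (1.67)
```

A ratio near the stoichiometric value of 1.67 for Ca₁₀(PO₄)₆(OH)₂ is consistent with hydroxyapatite being the dominant phase, as the XRD results indicate.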

Keywords: bone by-products, bone sludge, calcination, hydroxyapatite

Procedia PDF Downloads 273
10907 An Efficient Approach for Shear Behavior Definition of Plant Stalk

Authors: M. R. Kamandar, J. Massah

Abstract:

Information on the impact cutting behavior of plant stalks plays an important role in the design and fabrication of plant cutting equipment. It is difficult to establish a theoretical method for defining the cutting properties of plant stalks because the cutting process is complex. Thus, it is necessary to set up an experimental approach to determine cutting parameters for a single stalk. To measure the shear force, shear energy and shear strength of plant stalks, a special impact cutting tester was fabricated. It was similar to an Izod impact tester for metals, but with a cutting blade and a data acquisition system attached to the end of the pendulum's arm. The apparatus included four strain gauges and a digital indicator to show the real-time cutting force on the plant stalk. To measure the shear force and to test the apparatus, the stalks of two plants, buxus and privet, were selected. The samples were cut under the impact cutting process at four loading rates (1, 2, 3 and 4 m.s-1) and at three internodes (fifth, tenth and fifteenth). For buxus, the minimum cutting energy was obtained at the fifth internode at a loading rate of 4 m.s-1, and the maximum shear energy at the fifteenth internode at a loading rate of 1 m.s-1. For privet, the minimum shear energy consumption was likewise obtained at the fifth internode at a loading rate of 4 m.s-1, and the maximum shear energy at the fifteenth internode at a loading rate of 1 m.s-1. Statistical analysis for both plants showed that increasing the impact cutting speed decreased the shear energy consumption and shear strength. In both cases, the results showed that shear force decreased as cutting speed increased.
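With a pendulum-type tester of this kind, the shear energy is typically obtained from the pendulum's loss of potential energy between the release angle and the follow-through angle after the cut. The parameter values below are purely illustrative assumptions, not figures from the study:

```python
import math

# Hypothetical pendulum parameters (illustrative, not from the paper)
m = 2.0       # effective pendulum mass, kg
L = 0.4       # pivot-to-centre-of-mass distance, m
g = 9.81      # gravitational acceleration, m/s^2
theta0 = math.radians(90)  # release angle from vertical
theta1 = math.radians(60)  # rise angle after cutting the stalk

# Energy absorbed by the cut = drop in pendulum potential energy
shear_energy = m * g * L * (math.cos(theta1) - math.cos(theta0))
print(round(shear_energy, 2), "J")
```

In practice the instrumented blade and strain gauges described in the abstract give the force-time trace directly, while the pendulum energy balance provides the total energy consumed by the cut.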

Keywords: Buxus, Privet, impact cutting, shear energy

Procedia PDF Downloads 108
10906 Performance Evaluation of Polyethyleneimine/Polyethylene Glycol Functionalized Reduced Graphene Oxide Membranes for Water Desalination via Forward Osmosis

Authors: Mohamed Edokali, Robert Menzel, David Harbottle, Ali Hassanpour

Abstract:

The forward osmosis (FO) process has stood out as an energy-efficient technology for water desalination and purification, although its practical application for desalination still relies on RO-based thin-film composite (TFC) and cellulose triacetate (CTA) polymeric membranes, which offer limited performance. Recently, graphene oxide (GO) laminated membranes have been considered an ideal candidate for overcoming the bottleneck of FO polymeric membranes, owing to their simple fabrication procedures, controllable thickness and pore size, and high water permeability. However, the low stability of GO laminates in wet and harsh environments remains problematic. Recent developments in modified GO and hydrophobic reduced graphene oxide (rGO) membranes for FO desalination represent attempts to overcome the ongoing trade-off between desalination performance and stability, which has yet to be resolved prior to practical implementation. In this study, acid-functionalized GO nanosheets, cooperatively reduced and crosslinked by hyperbranched polyethyleneimine (PEI) and polyethylene glycol (PEG) polymers, respectively, are applied to fabricate an FO membrane with enhanced stability and performance, and are compared with other functionalized rGO FO membranes. The PEI/PEG-doped rGO membrane retained two compacted d-spacings (0.7 and 0.31 nm) compared to the acid-functionalized GO membrane alone (0.82 nm). Besides increasing the hydrophilicity, the PEG coating layer on the PEI-doped rGO membrane surface enhanced the structural integrity of the membrane both chemically and mechanically. As a result of these synergetic effects, the PEI/PEG-doped rGO membrane exhibited a water permeation of 7.7 LMH, salt rejection of 97.9%, and reverse solute flux of 0.506 gMH at low flow rates in the FO desalination process.
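A figure of merit commonly derived from the two reported fluxes (though not quoted in the abstract itself) is the specific reverse solute flux Js/Jw, the mass of draw solute lost per litre of permeate produced; lower values indicate better FO selectivity. The calculation below simply combines the values quoted in the abstract:

```python
# Values reported in the abstract for the PEI/PEG-doped rGO membrane
Jw = 7.7     # water flux, L m^-2 h^-1 (LMH)
Js = 0.506   # reverse solute flux, g m^-2 h^-1 (gMH)

# Specific reverse solute flux: draw solute lost per litre of water produced
srsf = Js / Jw
print(round(srsf, 3), "g/L")  # ~0.066 g/L
```

A value well below 0.1 g/L would compare favourably with typical CTA and TFC FO membranes, consistent with the membrane's high salt rejection.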

Keywords: desalination, forward osmosis, membrane performance, polyethyleneimine, polyethylene glycol, reduced graphene oxide, stability

Procedia PDF Downloads 83
10905 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which can create challenges around data management and governance. It is also challenging to integrate data from multiple systems and technologies. Despite these pains, companies still pursue digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue that data are stored in silos with different schemas and structures. Conventional approaches to this issue involve data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily address the grammar and structure of the data and neglect semantic modeling and semantic standardization, which are essential for achieving data interoperability. Here, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company’s data, and utilizes a knowledge graph for data storage and exploration.
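The idea of semantic lifting, mapping differently named silo fields onto one shared property so the same fact is recognised across systems, can be sketched in a few lines. The field names, record identifiers, and the property name `hasManufacturer` below are invented for illustration; they are not actual ECLASS or AAS identifiers:

```python
# Two "silo" records describing the same asset with different schemas
silo_a = {"asset_id": "P-100", "hersteller": "ACME"}   # e.g. a German-language ERP field
silo_b = {"id": "P-100", "manufacturer_name": "ACME"}  # e.g. an MES field

# Mapping each silo-specific field onto one shared semantic property
mappings = {"hersteller": "hasManufacturer", "manufacturer_name": "hasManufacturer"}

# Lift both records into a common triple store (subject, property, value)
triples = set()
for record, id_key in ((silo_a, "asset_id"), (silo_b, "id")):
    subject = record[id_key]
    for field, value in record.items():
        if field in mappings:
            triples.add((subject, mappings[field], value))

print(triples)  # both silos collapse to a single shared fact about P-100
```

In the approach the abstract describes, the shared properties would come from standardized dictionaries such as ECLASS and the lifted facts would be stored in a knowledge graph rather than a Python set.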

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 79
10904 Using the Yield-SAFE Model to Assess the Impacts of Climate Change on Yield of Coffee (Coffea arabica L.) Under Agroforestry and Monoculture Systems

Authors: Tesfay Gidey Bezabeh, Tânia Sofia Oliveira, Josep Crous-Duran, João H. N. Palma

Abstract:

Ethiopia's economy depends strongly on Coffea arabica production. Coffee, like many other crops, is sensitive to climate change, so the urgent development and application of strategies against the negative impacts of climate change on coffee production is important. An agroforestry-based system is one strategy that may ensure sustainable coffee production amidst the likely future impacts of climate change. This system combines coffee with trees that buffer climatic extremes, thereby modifying microclimate conditions. This paper assessed coffee production under 1) coffee monoculture and 2) coffee grown in an agroforestry system, under a) the current climate and b) two different future climate change scenarios. The study focused on two representative coffee-growing regions of Ethiopia with different soil, climate, and elevation conditions. A process-based growth model (Yield-SAFE) was used to simulate coffee production over a time horizon of 40 years. The climate change scenarios considered were representative concentration pathways (RCP) 4.5 and 8.5. The results revealed that in monoculture systems, current coffee yields are between 1200-1250 kg ha⁻¹ yr⁻¹, with an expected decrease of 4-38% and 20-60% under scenarios RCP 4.5 and 8.5, respectively. In agroforestry systems, however, current yields are between 1600-2200 kg ha⁻¹ yr⁻¹, and the decrease was lower, ranging between 4-13% and 16-25% under the RCP 4.5 and 8.5 scenarios, respectively. From these results, it can be concluded that coffee production under agroforestry systems is more resilient to future climate change, reinforcing the idea of using this type of management in the near future to adapt to the negative impacts of climate change on coffee production.
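The yield projections implied by the reported percentage decreases follow from simple arithmetic. The sketch below applies the worst-case loss in each scenario to the low end of each current yield range; the yield and loss figures are taken directly from the abstract, while the pairing of worst-case loss with the low-end yield is an illustrative choice:

```python
# system: (current yield range kg/ha/yr, RCP4.5 loss range %, RCP8.5 loss range %)
systems = {
    "monoculture": ((1200, 1250), (4, 38), (20, 60)),
    "agroforestry": ((1600, 2200), (4, 13), (16, 25)),
}

results = {}
for name, ((y_lo, y_hi), rcp45, rcp85) in systems.items():
    # worst case: the high end of the loss range applied to the low-end yield
    results[name] = (y_lo * (1 - rcp45[1] / 100), y_lo * (1 - rcp85[1] / 100))
    print(f"{name}: worst-case RCP4.5 {results[name][0]:.0f}, "
          f"RCP8.5 {results[name][1]:.0f} kg/ha/yr")
```

Even in the worst case, the agroforestry system's projected yield remains at or above the monoculture system's current yield, which is the resilience argument the abstract makes.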

Keywords: Albizia gummifera, CORDEX, Ethiopia, HADCM3 model, process-based model

Procedia PDF Downloads 93
10903 Study of Three-Dimensional Computed Tomography of Frontoethmoidal Cells Using International Frontal Sinus Anatomy Classification

Authors: Prabesh Karki, Shyam Thapa Chettri, Bajarang Prasad Sah, Manoj Bhattarai, Sudeep Mishra

Abstract:

Introduction: The frontal sinus is frequently described as the most difficult sinus to access surgically due to its proximity to the cribriform plate, orbit, and anterior ethmoid artery. Frontal sinus surgery requires a detailed understanding of the cellular structure and the frontal sinus drainage pathway (FSDP) unique to each patient, making high-resolution CT scans an indispensable tool for assessing the difficulty of planned sinus surgery. The International Frontal Sinus Anatomy Classification (IFAC) was developed to provide a more precise nomenclature for cells in the frontal recess, classifying cells based on their anatomic origin. Objectives: To assess the proportion of frontal cell variants defined by IFAC and their variation with respect to age and gender. Methods: 54 cases were enrolled after a detailed clinical history, thorough general and physical examinations, and a CT report ordered on film. The presence of frontal cells according to the IFAC was assessed and tabulated. The prevalence of each cell type was calculated, and data were entered in MS Excel and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and frequencies were defined for categorical and numerical variables; frequency, percentage, mean and standard deviation were calculated. Result: Among the 54 patients, 30 (55.6%) were male and 24 (44.4%) were female. The patients enrolled ranged in age from 18 to 78 years; the largest group, 33.3% (n=18), was in the age group of >50 years. According to IFAC, agger nasi cells (92.6%) were the most common, whereas supraorbital ethmoidal cells were the least common, seen in 16 patients (29.6%). The prevalence of the other frontoethmoidal cells across the 54 cases was: SAC 57.4%, SAFC 38.9%, SBC 74.1%, SBFC 33.3%, and FSC 38.9%. Conclusion: IFAC is an international consensus document that describes an anatomically precise nomenclature for classifying frontoethmoidal cell anatomy. This study has defined the prevalence, symmetry and reliability of the frontoethmoidal cells as established by the IFAC system, as in other parts of the world.
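The reported percentages can be cross-checked against the sample size by rounding each percentage of n = 54 back to a whole-patient count. The cell abbreviations below follow the IFAC usage in the abstract (ANC for agger nasi cell, SOEC for supraorbital ethmoidal cell), and the check reproduces the count of 16 given for supraorbital ethmoidal cells:

```python
# Prevalence percentages reported for the 54-patient sample
n = 54
prevalence = {
    "ANC": 92.6, "SOEC": 29.6, "SAC": 57.4,
    "SAFC": 38.9, "SBC": 74.1, "SBFC": 33.3, "FSC": 38.9,
}

# Back out the patient count implied by each percentage
counts = {cell: round(pct / 100 * n) for cell, pct in prevalence.items()}
print(counts)  # e.g. ANC in 50 of 54 patients; SOEC in 16, matching the abstract
```

Such a round-trip check is a quick way to confirm that reported percentages are internally consistent with the stated sample size.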

Keywords: frontal sinus, frontoethmoidal cells, international frontal sinus anatomy classification

Procedia PDF Downloads 79
10902 Amazonian Native Biomass Residue for Sustainable Development of Isolated Communities

Authors: Bruna C. Brasileiro, José Alberto S. Sá, Brigida R. P. Rocha

Abstract:

The development of the Amazon region has been tied to large-scale projects associated with economic cycles. These cycles originated from policies implemented by successive governments that exploited the region's resources yet have not been able to improve the local population's quality of life. The development strategies implanted were based on vertical planning centered on the State, which neither knew nor showed interest in knowing local needs and potentialities. The future of this region is a challenge that depends on a model of development based on human progress, associated with the intelligent, selective and environmentally safe exploitation of natural resources, and settled on renewable, non-polluting energy sources, a differentiating factor for attracting new investment in a context of global energy and environmental crisis. In this process, the planning and support of the Brazilian State, local government, and selective international partnerships are essential. The utilization of residual biomass enables sustainable development by integrating the production chain with energy generation, which could improve the employment conditions and income of riverside communities. This research therefore discusses how the use of local residual biomass (açaí lumps) could be an important instrument of sustainable development for isolated communities located in the Alcobaça Sustainable Development Reserve (SDR), Tucuruí, Pará State. In this region, the energy sources most accessible to those who can pay are fossil fuels, which account for about 54% of final energy consumption. Integrating the açaí productive chain with a renewable energy source can promote lower environmental impact and decrease the use of fossil fuels and carbon dioxide emissions.

Keywords: Amazon, biomass, renewable energy, sustainability

Procedia PDF Downloads 295
10901 Navigating the Assessment Landscape in English Language Teaching: Strategies, Challenges and Best Practices

Authors: Saman Khairani

Abstract:

Assessment is a pivotal component of the teaching and learning process, serving as a critical tool for evaluating student progress, diagnosing learning needs, and informing instructional decisions. In the context of English Language Teaching (ELT), effective assessment practices are essential to promote meaningful learning experiences and foster continuous improvement in language proficiency. This paper delves into various assessment strategies, explores associated challenges, and highlights best practices for assessing student learning in ELT. The paper begins by examining the diverse forms of assessment, including formative assessments that provide timely feedback during the learning process and summative assessments that evaluate overall achievement. Additionally, alternative methods such as portfolios, self-assessment, and peer assessment play a significant role in capturing various aspects of language learning. Aligning assessments with learning objectives is crucial. Educators must ensure that assessment tasks reflect the desired language skills, communicative competence, and cultural awareness. Validity, reliability, and fairness are essential considerations in assessment design. Challenges in assessing language skills—such as speaking, listening, reading, and writing—are discussed, along with practical solutions. Constructive feedback, tailored to individual learners, guides their language development. In conclusion, this paper synthesizes research findings and practical insights, equipping ELT practitioners with the knowledge and tools necessary to design, implement, and evaluate effective assessment practices. By fostering meaningful learning experiences, educators contribute significantly to learners’ language proficiency and overall success.

Keywords: ELT, formative, summative, fairness, validity, reliability

Procedia PDF Downloads 40
10900 Children with Migration Backgrounds in Russian Elementary Schools: Teachers' Attitudes and Practices

Authors: Chulpan Gromova, Rezeda Khairutdinova, Dina Birman

Abstract:

One of the most significant issues that schools all over the world face today is how teachers respond to increasing diversity. The study was informed by the tripartite model of multicultural competence, in which awareness of personal biases is a necessary component, together with knowledge of different cultures and skills for working with students from diverse backgrounds. The paper presents the results of qualitative descriptive studies that help us understand how school teachers in Russia treat migrant children and how they address the problems of migrant children's adaptation. The purpose of this study was to determine: a) the educational practices used by primary school teachers when working with migrant children; and b) the relationship between teachers' practices and attitudes. Empirical data were collected through interviews. The participants were informed that the conversation was being recorded. They were also told that the study was voluntary and fully anonymous, and that no personal data would be disclosed. Consent was received from 20 teachers. The findings were analyzed using directed content analysis (Graneheim and Lundman, 2004). The analysis was deductive, following the categories of practices and attitudes identified in the literature review, and was enriched inductively to identify variation within these categories. Studying practices is an essential part of preparing future teachers to work in a multicultural classroom. For language and academic support, teachers mostly use individual work. To create a friendly classroom climate and environment, teachers hold productive conversations with students and organize multicultural events for the whole school or for an individual class. The majority of teachers have positive attitudes toward migrant children. In most cases, positive attitudes lead to high expectations for their academic achievements. The conceptual orientation of teacher attitudes toward cultural diversity is mostly pluralistic. 
Positive attitudes, high academic expectations and conceptual orientation toward pluralism are favorably reflected in teachers’ practice.

Keywords: intercultural education, migrant children schooling, teachers' attitudes, teaching practices

Procedia PDF Downloads 101
10899 A Simple Chemical Approach to Regenerating Strength of Thermally Recycled Glass Fibre

Authors: Sairah Bashir, Liu Yang, John Liggat, James Thomason

Abstract:

Glass fibre is currently used as reinforcement in over 90% of all fibre-reinforced composites produced. The high rigidity and chemical resistance of these composites are required for optimum performance but unfortunately result in poor recyclability; when such materials are no longer fit for purpose, they are frequently deposited in landfill sites. Recycling technologies, for example thermal treatment, can be employed to address this issue; temperatures typically between 450 and 600 °C are required to allow degradation of the rigid polymeric matrix and subsequent extraction of the fibrous reinforcement. However, due to the severe thermal conditions utilised in the recycling procedure, glass fibres become too weak for reprocessing in second-life composite materials. In addition, more stringent legislation is being put in place regarding the disposal of composite waste, and so it is becoming increasingly important to develop long-term recycling solutions for such materials. In particular, the development of a cost-effective method to regenerate the strength of thermally recycled glass fibres will have a positive environmental effect, as a reduced volume of composite material will be destined for landfill. This research study has demonstrated the positive impact of sodium hydroxide (NaOH) and potassium hydroxide (KOH) solutions, prepared at relatively mild temperatures and at concentrations of 1.5 M and above, on the strength of heat-treated glass fibres. As a result, alkaline treatments can potentially be applied to glass fibres recycled from composite waste to allow their reuse in second-life materials. The optimisation of the strength recovery process is being conducted by varying reaction parameters such as the molarity of the alkaline solution and the treatment time. It is believed that deep V-shaped surface flaws exist commonly on severely damaged fibre surfaces and are effectively removed to form smooth, U-shaped structures following alkaline treatment. 
Although these surface flaws have long been believed to be present on glass fibres, they had not previously been observed directly; in this research investigation, they have now been detected through analytical techniques such as AFM (atomic force microscopy) and SEM (scanning electron microscopy). Reaction conditions such as the molarity of the alkaline solution affect the degree of etching of the glass fibre surface, and therefore the extent to which fibre strength is recovered. A novel method for determining the etching rate of glass fibres after alkaline treatment has been developed, and the data acquired can be correlated with strength. By varying reaction conditions such as alkaline solution temperature and molarity, the activation energy of the glass etching process and the reaction order can be calculated, respectively. The promising results obtained from NaOH and KOH treatments have opened an exciting route to strength regeneration of thermally recycled glass fibres, and the optimisation of the alkaline treatment process is being continued in order to produce recycled fibres with properties that match original glass fibre products. The reuse of such glass filaments indicates that closed-loop recycling of glass fibre reinforced composite (GFRC) waste can be achieved. In fact, the development of a closed-loop recycling process for GFRC waste is already underway in this research study.
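The activation energy referred to above follows from the Arrhenius relation applied to etch rates measured at two (or more) solution temperatures. The rates and temperatures below are invented for illustration; the study's measured values are not given in the abstract:

```python
import math

# Hypothetical etch rates of the fibre surface at two solution temperatures
R = 8.314            # gas constant, J mol^-1 K^-1
T1, T2 = 298.0, 333.0   # solution temperatures, K
k1, k2 = 0.5, 4.0       # measured etch rates at T1 and T2, e.g. nm/min

# Arrhenius: ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)
Ea = -R * math.log(k2 / k1) / (1 / T2 - 1 / T1)
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol")
```

With more than two temperatures, one would instead fit ln(k) against 1/T and take the activation energy from the slope, which also gives an error estimate.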

Keywords: glass fibers, glass strengthening, glass structure and properties, surface reactions and corrosion

Procedia PDF Downloads 240
10898 The Application of Enzymes on Pharmaceutical Products and Process Development

Authors: Reginald Anyanwu

Abstract:

Enzymes are biological molecules that regulate the rate of almost all of the chemical reactions that take place within cells, and they have been widely used for product innovation. They are vital for life and serve a wide range of important functions in the body, such as aiding digestion and metabolism. The present study aimed to find out the extent to which these biological molecules have been utilized by the pharmaceutical, food and beverage, and biofuel industries in commercial and scale-up applications. Taking into account the escalating business opportunities in this vertical, biotech firms have also been penetrating the enzymes industry, especially food enzymes. The aim of the study was therefore to find out how biocatalysis can be successfully deployed and how enzyme application can improve industrial processes. To achieve this purpose, the researcher focused on the analytical tools that are critical for the scale-up implementation of enzyme immobilization, in order to ascertain the extent of increased product yield at minimum logistical burden and maximum market profitability for the environment and user. The researcher collected data from four pharmaceutical companies located in Anambra State and Imo State, Nigeria, to which questionnaire items were distributed. The researcher equally made personal observations on the applicability of these biological molecules to innovative products, given the shifting trend toward the consumption of healthy, quality food. In conclusion, it was found that enzymes have been widely used for product innovation, although there are variations in their applications. It was also found that pivotal contenders in the enzymes market have lately been making heavy investments in the development of innovative product solutions. It is recommended that the application of enzymes to innovative products be widely practiced.

Keywords: enzymes, pharmaceuticals, process development, quality food consumption, scale-up applications

Procedia PDF Downloads 129
10897 Lateralisation of Visual Function in Yellow-Eyed Mullet (Aldrichetta forsteri) and Its Role in Schooling Behaviour

Authors: Karen L. Middlemiss, Denham G. Cook, Peter Jaksons, Alistair Jerrett, William Davison

Abstract:

Lateralisation of cognitive function is a common phenomenon found throughout the animal kingdom. Strong biases in functional behaviours have evolved from asymmetrical brain hemispheres which differ in structure and/or cognitive function. In fish, lateralisation is involved in visually mediated behaviours such as schooling, predator avoidance, and foraging, and is considered to have a direct impact on species fitness. Currently, there is very little literature on the role of lateralisation in fish schools. The yellow-eyed mullet (Aldrichetta forsteri) is an estuarine and coastal species found commonly throughout temperate regions of Australia and New Zealand. This study sought to quantify visually mediated behaviours in yellow-eyed mullet to identify the significance of lateralisation, and the factors which influence functional behaviours in schooling fish. Our approach was to conduct a series of tank-based experiments investigating: a) individual and population-level lateralisation, b) schooling behaviour, and c) optic lobe anatomy. Yellow-eyed mullet showed individual variation in the direction and strength of lateralisation in juveniles, and trait-specific spatial positioning within the school was evident in strongly lateralised fish. In combination with observed differences in schooling behaviour, the possibility of ontogenetic plasticity in both behavioural lateralisation and optic lobe morphology in adults is suggested. These findings highlight the need for research into the genetic and environmental factors (epigenetics) which drive functional behaviours such as schooling, feeding, and aggression. Improved knowledge of collective behaviour could have significant benefits for captive rearing programmes through improved culture techniques and will add to the limited body of knowledge on the complex ecophysiological interactions present in our inshore fisheries.

Keywords: cerebral asymmetry, fisheries, schooling, visual bias

Procedia PDF Downloads 202
10896 Effect of Naphtha in Addition to a Cycle Steam Stimulation Process Reducing the Heavy Oil Viscosity Using a Two-Level Factorial Design

Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz

Abstract:

The addition of solvents to cyclic steam stimulation is a technique that has shown an impact on the improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, at which time this ratio increases significantly. The mobility of the improved crude oil increases due to structural changes in its components, which are in turn reflected in decreased density and viscosity. In the present work, the effect of variables such as temperature, time, and weight percentage of naphtha was evaluated using a 2³ factorial design of experiments. From the results of the analysis of variance (ANOVA) and the Pareto diagram, it was possible to identify the effect on viscosity reduction. The experimental representation of the crude-steam-naphtha interaction was carried out in a batch reactor on a Colombian heavy oil of 12.8° API and 3500 cP. The conditions of temperature, reaction time, and naphtha percentage were 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density, with values in the range of 0.9542 to 0.9414 g/cm³, while the viscosity decrease was on the order of 55 to 70%. On the other hand, simulated distillation results, according to ASTM D7169, revealed significant conversions of the 315 °C+ fraction. From the spectroscopic techniques of nuclear magnetic resonance (NMR), Fourier-transform infrared (FTIR), and ultraviolet-visible (UV-VIS) spectroscopy, it was determined that the increase in the yield of the light fractions in the improved crude is due to the breakdown of alkyl chains. The methodology for cyclic steam injection with naphtha and laboratory-scale characterization can be considered a practical tool in improved recovery processes.
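In a 2³ factorial design, each main effect can be estimated as the difference between the mean response at a factor's high (+1) and low (-1) levels. The sketch below illustrates that calculation with made-up viscosity-reduction responses (not the paper's data); the factor names follow the abstract:

```python
import numpy as np

# Hypothetical 2^3 factorial design: coded levels for temperature (A),
# time (B), and naphtha fraction (C); response = % viscosity reduction.
# The response values are illustrative, not the study's measurements.
A = np.array([-1,  1, -1,  1, -1,  1, -1,  1])
B = np.array([-1, -1,  1,  1, -1, -1,  1,  1])
C = np.array([-1, -1, -1, -1,  1,  1,  1,  1])
y = np.array([55, 60, 58, 64, 61, 68, 63, 70], dtype=float)

def effect(contrast, response):
    """Main effect = mean response at +1 minus mean response at -1."""
    return response[contrast == 1].mean() - response[contrast == -1].mean()

for name, col in (("temperature", A), ("time", B), ("naphtha", C)):
    print(f"{name}: {effect(col, y):+.2f}")
```

The same contrasts feed the Pareto diagram mentioned in the abstract: effects are ranked by absolute magnitude to see which factors dominate viscosity reduction.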

Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha

Procedia PDF Downloads 156
10895 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm randomly expands a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor node of the new node is connected to it if, and only if, the path between the root node and that neighbor node through this new connection is shorter than the current path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes while spending a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of G2 quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one, based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the obtained structure allows computation of the curve length in an exact way, without the need for quadrature techniques for the resolution of integrals.
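A minimal RRT* sketch of the parent-selection and rewiring rules described above, run in an obstacle-free 2D square for brevity; the obstacle-vertex-biased sampling that makes RRT*-Smart "smart" is omitted, and all parameters are illustrative choices, not the paper's settings:

```python
import math
import random

random.seed(1)

class Node:
    def __init__(self, x, y, parent=None, cost=0.0):
        self.x, self.y, self.parent, self.cost = x, y, parent, cost

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def rrt_star(start, goal, n_iter=2000, step=0.8, radius=1.2, lo=0.0, hi=10.0):
    tree = [Node(*start)]
    goal_node = Node(*goal)
    for _ in range(n_iter):
        sample = Node(random.uniform(lo, hi), random.uniform(lo, hi))
        nearest = min(tree, key=lambda n: dist(n, sample))
        d = dist(nearest, sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = Node(nearest.x + t * (sample.x - nearest.x),
                   nearest.y + t * (sample.y - nearest.y))
        # connect through the neighbour that minimises cost from the root
        near = [n for n in tree if dist(n, new) <= radius]
        parent = min(near, key=lambda n: n.cost + dist(n, new))
        new.parent, new.cost = parent, parent.cost + dist(parent, new)
        # rewire: reroute neighbours through the new node when that is cheaper
        for n in near:
            c = new.cost + dist(new, n)
            if c < n.cost:
                n.parent, n.cost = new, c
        tree.append(new)
    reached = [n for n in tree if dist(n, goal_node) <= radius]
    if not reached:
        return None
    best = min(reached, key=lambda n: n.cost)
    return best.cost + dist(best, goal_node)

length = rrt_star((0.0, 0.0), (9.0, 9.0))
print(length)  # approaches the straight-line distance (about 12.73) as iterations grow
```

In the full algorithm, `near` would exclude connections crossing obstacles, and samples would be biased toward obstacle vertices to reach short paths in fewer iterations.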

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 156
10894 Germination and Bulb Formation of Allium tuncelianum L. under in vitro Condition

Authors: Suleyman Kizil, Tahsin Sogut, Khalid M. Khawar

Abstract:

Genus Allium includes 600 to 750 species. Allium tuncelianum [(Kollman) N. Ozhatay, B. Mathew & Siraneci; Syn: A. macrochaetum Boiss. and Hausskn. subsp. tuncelianum Kollman], or Tunceli garlic, is endemic to the Eastern Turkish province of Tunceli and the Munzur mountains. It is edible and bears attractive white-to-purple flowers and fertile black seeds with deep seed dormancy. This study aimed to break the seed dormancy of Tunceli garlic, determine the conditions for induction of bulblets on these seeds, and increase bulblet diameter by culturing them on MS medium supplemented with different strengths of KNO3. Tunceli garlic seeds were collected from field-grown plants. They were germinated on MS medium with or without 20 g/l sucrose, followed by culture on 1 × 1900 mg/l, 2 × 1900 mg/l, 4 × 1900 mg/l and 6 × 1900 mg/l KNO3 supplemented with 20 g/l sucrose to increase bulb diameter. Improved seed germination was noted on MS medium both with and without sucrose, but with variation compared to previous reports. The bulb development percentage on the sprouted seeds did not parallel the percentage of seed germination. The results showed that 34% and 28.5% bulb induction was noted on germinated seeds after 150 and 158 days on MS medium containing 20 g/l sucrose and no sucrose, respectively, a delay of 8 days for the latter compared to the former. The results emphatically noted the role of cold stratification on agar-solidified MS medium supplemented with sucrose in improving seed germination. The best increase in bulb diameter was noted on MS medium containing 1 × 1900 mg/l KNO3 after 178 days, with a bulblet diameter of 0.54 cm and a bulblet weight of 0.048 g. Consequently, the bulbs induced on sucrose-containing MS medium could be transferred to pots earlier. Increased KNO3 strengths (>1 × 1900 mg/l) had a negative effect on the growth and development of Tunceli garlic bulbs.
The strategy of seed germination and bulblet induction reported in this study could be positively used for conservation of this endemic plant species.

Keywords: Tunceli garlic, seed, dormancy, bulblets, bulb growth

Procedia PDF Downloads 257
10893 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformations, such as elimination, simplification, smoothing, exaggeration, displacement, aggregation, and size reduction. As a result of these operations at different levels of data, the geometry of spatial features, such as length, sinuosity, orientation, perimeter, and area, is altered. This effect is worst in the preparation of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. When GIS users want to analyze a set of spatial data, they retrieve a data set and carry out the analysis without considering very important characteristics such as the scale, the purpose of the map, and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes, GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that it is not appropriate to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps with different scales (1:10,000, 1:50,000, and 1:250,000), prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Features common to all three maps were selected, and an overlay analysis was done by repeating the analysis with different combinations of the data. Road, river, and land use data sets were used for the study. A simple model, to find the best place for a wildlife park, was used to identify the effects.
The results show remarkable effects at different degrees of generalization. Different locations with different geometries were obtained as the outputs of the analysis. The study suggests that there should be reasonable methods to overcome this effect. As a solution, it is recommended to bring all the data sets to a common scale before doing the analysis.
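One way to see how generalization alters geometry is to apply a single simplification operator to a line feature and compare lengths. The sketch below uses the Douglas-Peucker algorithm (a common simplification method, assumed here purely for illustration; the actual operators used by the mapping agency are not specified) on a made-up sinuous line:

```python
import math

def point_line_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx**2 + dy**2)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(pts, tol):
    """Keep only vertices that deviate more than tol from the trend line."""
    if len(pts) < 3:
        return list(pts)
    dmax, idx = max((point_line_dist(p, pts[0], pts[-1]), i)
                    for i, p in enumerate(pts[1:-1], 1))
    if dmax <= tol:
        return [pts[0], pts[-1]]
    left = douglas_peucker(pts[:idx + 1], tol)
    return left[:-1] + douglas_peucker(pts[idx:], tol)

def length(pts):
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

river = [(i * 0.5, math.sin(i)) for i in range(21)]  # made-up sinuous line
simplified = douglas_peucker(river, tol=0.5)
print(length(river), length(simplified))  # the simplified line is shorter
```

Because simplification keeps a subset of vertices, the measured length (and sinuosity) shrinks, which is exactly why overlay results differ across map scales.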

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 320
10892 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. The direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic Partial Differential Equations (PDEs) dominate, constantly generating small-scale features. This makes the direct numerical simulation of shocks even harder. The same difficulty arises in two-phase flows with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales that are below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDE. This technique is named the 'observable method', and it results in a set of hyperbolic equations called observable equations, namely, the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the performance of the observable method in regularizing shocks and interfaces at the same time is examined on shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are chosen as the particular cases of study. The observable Euler equations are solved numerically with pseudo-spectral discretization in space and a third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. Interface acceleration and deformation and shock reflection are particularly examined.
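A hedged one-dimensional sketch of the filtering idea: in the inviscid Burgers' equation, replacing the convective velocity with a Helmholtz-filtered copy keeps the solution bounded past the shock-formation time. The filter width, grid, and time step below are illustrative choices, not the paper's settings:

```python
import numpy as np

# Observable-style regularization of 1D Burgers': the convective velocity
# is replaced by u_bar = (1 - a^2 d2/dx2)^{-1} u, so scales below the
# filter width a cannot steepen the solution into a discontinuity.
N, L, a, dt = 256, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
dealias = (np.abs(k) < (2.0 / 3.0) * np.abs(k).max()).astype(float)

def helmholtz_filter(u):
    """Invert (1 - a^2 d2/dx2) in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(u) / (1.0 + (a * k) ** 2)))

def rhs(u):
    # regularized advective form: du/dt = -u_bar * du/dx
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    f = -helmholtz_filter(u) * ux
    return np.real(np.fft.ifft(dealias * np.fft.fft(f)))  # 2/3-rule dealiasing

def ssp_rk3(u, dt):
    # third-order TVD (SSP) Runge-Kutta, the integrator named in the abstract
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

u = np.sin(x)           # would form a shock at t = 1 without regularization
for _ in range(1200):   # integrate past the shock-formation time
    u = ssp_rk3(u, dt)

print(float(np.max(np.abs(u))))  # stays bounded instead of blowing up
```

The front width saturates around the filter scale `a` rather than collapsing to the grid spacing, which is the one-dimensional analogue of regularizing a shock without added viscosity.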

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 339
10891 An In-Depth Experimental Study of Wax Deposition in Pipelines

Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.

Abstract:

Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in the crude oil, flow rates, pipe diameters, and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process to achieve a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 m long, equipped with a solid detector system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no paraffin or additive content. Tests were undertaken at different temperatures of the circulating and cooling fluids under different flow conditions. Then, a solution formed by adding a paraffin to the diesel was considered, and tests varying flow rate and cooling rate were run again. Viscosity, density, Wax Appearance Temperature (WAT) with Differential Scanning Calorimetry (DSC), pour point, and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, the loop output temperature and pressure signals were studied and compared with static laboratory WAT methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of wax deposition in the loop.
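As a hedged illustration of how loop pressure signals relate to deposition: at a fixed laminar flow rate, the Hagen-Poiseuille relation makes the pressure drop proportional to 1/D⁴, so a rise in measured pressure drop can be converted into an effective diameter and an average deposit thickness. All numbers below are illustrative, not the loop's data:

```python
# Back-calculating an average wax layer from a pressure-drop rise,
# assuming laminar flow at fixed flow rate: dP ~ 1/D^4 (Hagen-Poiseuille),
# so D_eff = D0 * (dP_clean / dP_fouled)^(1/4). Illustrative values only.
D0 = 0.0254            # clean pipe inner diameter, m (1-inch loop)
dp_clean = 1200.0      # baseline pressure drop, Pa (illustrative)
dp_fouled = 1950.0     # pressure drop after deposition, Pa (illustrative)

D_eff = D0 * (dp_clean / dp_fouled) ** 0.25
thickness = (D0 - D_eff) / 2  # average deposit thickness on the wall
print(f"effective diameter: {D_eff * 1000:.2f} mm, deposit: {thickness * 1000:.3f} mm")
```

This kind of hydraulic estimate is only a sketch of the momentum-balance analysis mentioned in the abstract; a full model would also account for turbulent friction factors and deposit roughness.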

Keywords: paraffin deposition, flow assurance, chemical inhibitors, flow loop

Procedia PDF Downloads 91
10890 Energy Production with Closed Methods

Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani

Abstract:

In Kosovo, the problem with the electricity supply is huge, and supply does not meet the demands of consumers. Most of the energy is produced by older thermal power plants, which are regarded as major environmental polluters. Our experiment is based on the production of electricity using a closed method that avoids environmental pollution, using waste, itself considered a pollutant, as fuel. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line for electricity and central heating at the same time was designed. The benefits are electricity as well as the release of heat for heating, with minimal expenses and with the release of 0% of gases into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. In the method used in the experiment, gas is released through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by gas filtration through waste from wood processing (sawdust). During this process, the final product, gas, is obtained; it passes through the carburetor, which enables the gas combustion process, puts the internal combustion engine and the generator into operation, and produces electricity without releasing gases into the atmosphere. The obtained results show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. From the final results, it follows that in the case of coal fuel, we obtained more electricity and higher heat release, followed by plastic waste, which also gave good results.
The results obtained during these experiments prove that the current problems of electricity and heating shortages can be addressed at a lower cost while maintaining a clean environment and managing waste.

Keywords: energy, heating, atmosphere, waste, gasification

Procedia PDF Downloads 219
10889 Performance Analysis of Three Absorption Heat Pump Cycles, Full and Partial Loads Operations

Authors: B. Dehghan, T. Toppi, M. Aprile, M. Motta

Abstract:

The environmental concerns related to global warming and ozone layer depletion, along with the growing worldwide demand for heating and cooling, have brought increasing attention to ecological and efficient Heating, Ventilation, and Air Conditioning (HVAC) systems. Furthermore, since space heating accounts for a considerable part of European primary/final energy use, it has been identified as one of the sectors with the most challenging targets in energy use reduction. Heat pumps are commonly considered a technology able to contribute to the achievement of these targets. The current research focuses on the full-load operation and seasonal performance assessment of three gas-driven absorption heat pump cycles. To do this, investigations of gas-driven air-source ammonia-water absorption heat pump systems for small-scale space heating applications are presented. For each of the presented cycles, both full-load performance under various temperature conditions and seasonal performance are predicted by means of numerical simulations. It has been considered that small-capacity appliances are usually equipped with fixed-geometry restrictors, meaning that the solution mass flow rate is driven by the pressure difference across the associated restrictor valve. Results show that the gas utilization efficiency (GUE) of the cycles varies between 1.2 and 1.7 for both full and partial loads, and the vapor exchange (VX) cycle is found to achieve the highest efficiency. It is noticed that, for typical space heating applications, heat pumps operate over a wide range of capacities and thermal lifts. Thus, part of the novelty introduced in the paper is an investigation based on a seasonal performance approach, following the method prescribed in a recent European standard (EN 12309). The overall result is a modest variation in the seasonal performance of the analyzed cycles, from 1.427 (single-effect) to 1.493 (vapor-exchange).
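The seasonal performance idea can be sketched with a simple temperature-bin calculation in the spirit of EN 12309: part-load GUE values are weighted by the heating demand and hours in each outdoor-temperature bin. The bins, hours, demand curve, and GUE values below are illustrative assumptions, not the paper's results:

```python
# Hedged bin-method sketch of a seasonal gas utilization efficiency:
# seasonal GUE = total heat delivered / total gas energy consumed,
# summed over outdoor-temperature bins. All numbers are illustrative.
bins = [  # (outdoor temperature C, hours in bin, GUE at that condition)
    (-7, 100, 1.25),
    (2, 400, 1.40),
    (7, 800, 1.50),
    (12, 600, 1.60),
]

def heat_demand(t, t_off=16.0):
    """Building load, assumed linear in outdoor temperature below t_off."""
    return max(0.0, t_off - t)

total_heat = sum(h * heat_demand(t) for t, h, _ in bins)
total_gas = sum(h * heat_demand(t) / gue for t, h, gue in bins)
seasonal_gue = total_heat / total_gas
print(round(seasonal_gue, 3))
```

Because mild bins carry most of the hours and demand, the seasonal figure lands between the extreme part-load GUE values, which is consistent with the modest single-effect vs. vapor-exchange spread reported above.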

Keywords: absorption cycles, gas utilization efficiency, heat pump, seasonal performance, vapor exchange cycle

Procedia PDF Downloads 95
10888 Lead Removal From Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies

Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof

Abstract:

Exposure of galena (PbS), tealite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead as high as 69.46 mg/L. Lead is not easily removed from water by conventional treatment. A promising and emerging treatment technology for lead removal is the application of the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, selection of optimal EC operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results obtained indicated that the maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm2, an electrode spacing of 0.3 cm, a treatment time of 60 minutes, Liquid Motion by Magnetic Stirring (LM-MS), and electrode arrangement BP-S. The above experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design, a Response Surface Methodology (RSM). The four factors optimized were the current density, electrode spacing, electrode arrangement, and Liquid Motion Driving Mode (LM). Based on the regression model and the analysis of variance (ANOVA) at the 0.01% significance level, the results showed that increasing the current density and using LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% with an electrode spacing of 0.38 cm, among other settings. Applying the predicted parameters, a lead removal efficiency of 100% was achieved.
The electrode and energy consumptions were 0.192 kg/m3 and 2.56 kWh/m3, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption dynamics were also random, spontaneous, and endothermic; higher process temperatures enhance the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface, and showed good adsorption efficiency by the Fe electrodes. Adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of water treatment plants for feed-water sources that contain lead, using the electrocoagulation method.
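The pseudo-second-order model is commonly fitted in its linearized form, t/qt = 1/(k2·qe²) + t/qe, so that a plot of t/qt versus t yields qe from the slope and k2 from the intercept. The sketch below demonstrates the fit on synthetic uptake data (not the study's measurements):

```python
import numpy as np

# Linearized pseudo-second-order kinetics: t/q_t = 1/(k2*qe^2) + t/qe.
# The uptake curve below is synthetic, generated from the model itself
# with illustrative parameters, just to show the fitting procedure.
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])      # time, min
qe_true, k2_true = 8.0, 0.01                            # mg/g, g/(mg*min)
q = (k2_true * qe_true**2 * t) / (1 + k2_true * qe_true * t)

slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope                  # equilibrium uptake from the slope
k2 = slope**2 / intercept         # since intercept = 1/(k2 * qe^2)
print(qe, k2)  # recovers 8.0 and 0.01
```

On real data the linearity (R²) of this plot is what justifies the pseudo-second-order conclusion over the pseudo-first-order alternative.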

Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics

Procedia PDF Downloads 137
10887 Deasphalting of Crude Oil by Extraction Method

Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov

Abstract:

Asphaltenes are the heavy fraction of crude oil. On oilfields, asphaltenes are known for their ability to plug wells, surface equipment, and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). Metal content was analyzed by ICP-MS, and the spectral features of deasphalting were characterized by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the heteroatoms (e.g., S, N) concentrated in asphaltenes cause several problems: environmental pollution, corrosion, and poisoning of catalysts. The main objective of this work is to study the effect of the deasphalting process on crude oil to improve its properties and increase the efficiency of downstream processing. Solvent extraction experiments with organic solvents were performed on crude oil from JSC "Pavlodar Oil Chemistry Refinery". Experimental results show that the deasphalting process also decreases the Ni and V content of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide, and mercaptans is absorption with chemical reagents directly in the oil residue during production, due to the fact that asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltene part of the crude. For this, the oil is pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., the removal of V/Ni and organic compounds with heteroatoms. Intramolecular complexes are relatively well researched on the example of porphyrin complexes of vanadium (VO2) and nickel (Ni).
As a result of ICP-MS studies of V/Ni, the effect of different deasphalting solvents on metal extraction at the deasphalting stage was determined, and the best organic solvent was selected. Cyclohexane (C6H12) proved to be the best deasphalting solvent, removing, according to ICP-MS, 51.2% of V and 66.4% of Ni. This paper also presents the results of a study of the physical, chemical, and spectral characteristics of the oil by FTIR, with a view to establishing its hydrocarbon composition. The information about the specifics of the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics. These can be useful in considering questions of origin and the geochemical conditions of the accumulation of oil, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes. The role of deasphalted crude oil fractions in asphaltene stability is described.

Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy

Procedia PDF Downloads 228
10886 Developing a Cultural Policy Framework for Small Towns and Cities

Authors: Raymond Ndhlovu, Jen Snowball

Abstract:

It has long been known that the Cultural and Creative Industries (CCIs) have the potential to aid in the physical, social, and economic renewal and regeneration of towns and cities, hence their importance for regional development. The CCIs can act as a catalyst for activity and investment in an area, because the 'consumption' of cultural activities also leads to the use of non-cultural services, for example, hospitality offerings such as restaurants and bars, as well as public transport. 'Consumption' of cultural activities also leads to employment creation and diversification. However, CCIs tend to be clustered, especially around large cities. There is, moreover, a case for developing CCIs around smaller towns and cities, because they do not rely on high-technology inputs and long supply chains, and their direct link to rural and isolated places makes them vital in regional development. However, there is currently little research on how to craft cultural policy for regions of smaller towns and cities. Using the Sarah Baartman District (SBDM) in South Africa as an example, this paper describes the process of developing cultural policy for a region that has potential, and existing, cultural clusters, but currently no coherent policy relating to CCI development. The SBDM was chosen as a case study because it has no large cities, but has some CCI clusters, and has identified them as potential drivers of local economic development. The process of developing cultural policy is discussed in stages: identification of the resources present, including human resources and soft and hard infrastructure; identification of clusters; analysis of CCI labour markets and ownership patterns; opportunities and challenges from the point of view of CCIs and other key stakeholders; alignment of regional policy aims with provincial and national policy objectives; and finally, design and implementation of a regional cultural policy.

Keywords: cultural and creative industries, economic impact, intrinsic value, regional development

Procedia PDF Downloads 217
10885 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)

Authors: Neda Zamani

Abstract:

The present study aimed to investigate the effectiveness of training courses and their limitations using the CIPP model, with Isfahan Refinery as a case study. In terms of purpose, the present paper is applied research; in terms of data gathering, it is descriptive field-survey research. The population of the study included participants in training courses, their supervisors, and experts of the training department. Probability-proportional-to-size (PPS) sampling was used. The sample included 195 participants in training courses, 30 supervisors, and 11 individuals from the training experts' group. To collect data, a questionnaire designed by the researcher and a semi-structured interview were used. The content validity of the instrument was confirmed by training management experts, and its reliability was estimated at 0.92 with Cronbach's alpha. To analyze the data, descriptive statistics (tables, frequency, frequency percentage, and mean) and inferential statistics (Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to determine the significance of differences in the groups' opinions) were applied. Results of the study indicated that all groups, i.e., participants, supervisors, and training experts, absolutely believe in the importance of training courses; however, participants in training courses regard content, teacher, atmosphere and facilities, training process, managing process, and product to be at a relatively appropriate level. The supervisors also regard the output to be at a relatively appropriate level, but training experts regard the content, teacher, and managing processes to be at an appropriate, higher-than-average level.
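Cronbach's alpha, the reliability statistic reported above, is computed as alpha = k/(k-1) × (1 - Σ item variances / variance of total scores) over the k questionnaire items. A minimal sketch on a made-up response matrix (rows are respondents, columns are Likert items):

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal-consistency reliability of a respondents-by-items matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

responses = [  # illustrative data: 5 respondents x 4 seven-point-scale items
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [2, 1, 2, 2],
    [5, 5, 4, 4],
]
print(round(cronbach_alpha(responses), 3))
```

Values above roughly 0.7 are conventionally taken as acceptable reliability, so the study's 0.92 indicates a highly consistent instrument.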

Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company

Procedia PDF Downloads 52
10884 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam

Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen

Abstract:

The Mekong River Delta region of Vietnam is recognized as one of the regions most vulnerable to climate change, due to flooding and sea level rise, and therefore to an increased burden of climate-change-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the peak of the dengue epidemic period is around July to September, during the rainy season. Climate is believed to be an important factor in dengue transmission. This study aims to enhance the capacity for dengue prediction by relating dengue incidence to climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models for vector-host infectious disease, including larvae, mosquitoes, and humans, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from satellite observations of GSMaP (Global Satellite Mapping of Precipitation); land surface temperature and land cover data were from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity, and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, with a peak during the rainy season, and the predicted dengue incidence follows this dynamic well for the whole studied region. However, the highest outbreak, in 2007, was not captured by the model, reflecting nonlinear dependences of transmission on climate. Other possible effects will be discussed to address the limitations of the model. This suggests the need to consider both climate variables and other sources of variability across temporal and spatial scales.
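A hedged sketch of how a seasonal reproduction number can be driven by climate: in a Ross-Macdonald-style formulation, R0(t) = m(t)·a²·b·c/(g·r), where the mosquito-to-human ratio m(t) follows rainfall seasonality. All parameter values below are illustrative assumptions, not those calibrated in the study:

```python
import math

# Illustrative Ross-Macdonald-style seasonal R0 for a vector-borne disease.
# Parameters (per day) are made-up values chosen only for the sketch.
a, b, c = 0.3, 0.4, 0.5   # biting rate; vector->human and human->vector probs
g, r = 0.1, 0.07          # mosquito mortality rate; human recovery rate

def mosquito_ratio(month, mean=2.0, amplitude=1.8, peak_month=8):
    """Mosquitoes per human, peaking in the rainy season (assumed August)."""
    return mean + amplitude * math.cos(2 * math.pi * (month - peak_month) / 12)

def r0(month):
    return mosquito_ratio(month) * a**2 * b * c / (g * r)

for month in range(1, 13):
    flag = "epidemic risk" if r0(month) > 1 else ""
    print(f"month {month:2d}: R0 = {r0(month):5.2f} {flag}")
```

The threshold R0 > 1 marks months when an introduced case can spread, which is the kind of seasonal persistence indicator the abstract describes; the model's 2007 miss illustrates why such climate-only formulations can underpredict extreme outbreak years.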

Keywords: infectious disease, dengue, geospatial data, climate

Procedia PDF Downloads 367
10883 Numerical Modelling of Prestressed Geogrid Reinforced Soil System

Authors: Soukat Kumar Das

Abstract:

Rapid industrialization and population growth have resulted in a scarcity of suitable ground conditions, driving the need for ground improvement by reinforcement with geosynthetics, with the minimum possible settlement and the maximum possible safety. Prestressing the geosynthetics offers an economical yet safe method of achieving this goal. The commercially available software PLAXIS 3D has made the analysis of prestressed geosynthetics simpler, with realistic simulations of the ground. This study analyses the effect of prestressing the geogrid, and of footing interference, on the load-bearing capacity and settlement characteristics of Unreinforced (UR), Geogrid Reinforced (GR), and Prestressed Geogrid Reinforced (PGR) soil, using numerical analysis in PLAXIS 3D. The results of the numerical analysis were validated against those given in the referenced paper and found to be in very good agreement with the actual field values, with very small variation. The GR soil improves the bearing pressure by 240 %, whereas the PGR soil improves it by almost 500 % at 1 mm settlement; in fact, the PGR soil enhances the bearing pressure of the GR soil by almost 200 %. The settlement reduction is also significant: at 100 kPa bearing pressure, the settlement of the PGR soil is about 88 % lower than that of the UR soil and up to 67 % lower than that of the GR soil. The prestressing force results in an enhanced reinforcement mechanism and hence increased bearing pressure. The deformation at the geogrid layer is 13.62 mm for the GR soil but decreases to a mere 3.5 mm for the PGR soil, which confirms the effect of prestressing on the geogrid layer.
The improvement factor, conventionally known as the Bearing Capacity Ratio (BCR), quantifies the improvement of the reinforced soil with respect to the unreinforced soil at a given settlement. In the present analysis, it varies in the range 1.66-2.40 for the GR soil and between 3.58 and 5.12 for the PGR soil, both with respect to the UR soil. The effect of prestressing was also observed in the case of two interfering square footings, with centre-to-centre distances (SFD) of B, 1.5B, 2B, 2.5B, and 3B, where B is the width of the footing. For the UR soil, the bearing pressure improved up to a spacing of 1.5B, after which it remained almost the same; for the GR soil, the zone of influence extended to 2B, and for the PGR soil to 2.5B. Thus, the zone of interference for the PGR soil is 67 % larger than for the Unreinforced (UR) soil and almost 25 % larger than for the GR soil.
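The Bearing Capacity Ratio used above is a simple ratio of bearing pressures at a common settlement. A short sketch of the computation follows; the bearing pressures used in the example are hypothetical round numbers chosen only to reproduce the percentage improvements cited in the abstract, not measured values.

```python
def bearing_capacity_ratio(q_reinforced_kpa: float, q_unreinforced_kpa: float) -> float:
    """Improvement factor (BCR): bearing pressure of the reinforced soil
    divided by that of the unreinforced soil at the same settlement."""
    return q_reinforced_kpa / q_unreinforced_kpa

def percent_improvement(q_reinforced_kpa: float, q_unreinforced_kpa: float) -> float:
    """Improvement expressed as a percentage increase over the unreinforced case."""
    return 100.0 * (q_reinforced_kpa - q_unreinforced_kpa) / q_unreinforced_kpa

# Hypothetical bearing pressures (kPa) at a common settlement, for illustration only.
q_ur, q_gr, q_pgr = 100.0, 340.0, 600.0
print(percent_improvement(q_gr, q_ur))    # 240.0, the GR improvement cited
print(percent_improvement(q_pgr, q_ur))   # 500.0, the PGR improvement cited
print(bearing_capacity_ratio(q_pgr, q_gr))
```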

Keywords: bearing, geogrid, prestressed, reinforced

Procedia PDF Downloads 384
10882 Rheological Assessment of Oil Well Cement Paste Dosed with Cellulose Nanocrystal (CNC)

Authors: Mohammad Reza Dousti, Yaman Boluk, Vivek Bindiganavile

Abstract:

During the past few decades, oil and natural gas consumption has increased significantly. The limited amount of hydrocarbon resources on earth has led to a stronger desire for efficient drilling, well completion, and extraction, with the least possible waste of time, energy, and money. Well cementing, which fills the annulus between the casing string and the well bore, is one of the most crucial steps in any well completion. However, since it takes place at the end of the drilling process, a satisfactory job is rarely done, and a significant amount of time and energy is then spent on the required corrections or, in some cases, on retrofitting the well. The oil well cement paste must be pumped during the cementing process, so the rheological and flow behavior of the paste is of great importance. This study examines the effect of innovative cellulose-based nanomaterials on the flow properties of the resulting cementitious system. The cementitious paste developed in this research is composed of water, class G oil well cement, bentonite, and cellulose nanocrystals (CNC); the bentonite serves as a cross-contamination component. Initially, the influence of CNC on the flow and rheological behavior of CNC and bentonite suspensions was assessed. The rheological behavior of oil well cement pastes dosed with CNC was then studied using a steady-shear parallel-plate rheometer, and the results were compared with those of a neat oil well cement paste with no CNC. The parameters assessed were the yield shear stress and the viscosity, and significant changes in both were observed due to the addition of the CNC. Based on the findings of this study, the addition of a very small dosage of CNC to the oil well cement paste results in a more viscous cement slurry with a higher yield stress, demonstrating shear-thinning behavior.
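Yield shear stress and viscosity are typically extracted from steady-shear flow-curve data by fitting a constitutive model. The abstract does not name the model used, so the sketch below assumes the simplest common choice for cement pastes, the Bingham plastic model τ = τ₀ + μₚ·γ̇, fitted by ordinary least squares; the flow-curve data are synthetic.

```python
def fit_bingham(shear_rates, shear_stresses):
    """Least-squares fit of the Bingham plastic model
    tau = tau0 + mu_p * gamma_dot, returning (tau0, mu_p):
    tau0 is the yield stress (Pa), mu_p the plastic viscosity (Pa.s)."""
    n = len(shear_rates)
    mx = sum(shear_rates) / n
    my = sum(shear_stresses) / n
    sxx = sum((x - mx) ** 2 for x in shear_rates)
    sxy = sum((x - mx) * (y - my) for x, y in zip(shear_rates, shear_stresses))
    mu_p = sxy / sxx           # slope: plastic viscosity
    tau0 = my - mu_p * mx      # intercept: yield stress
    return tau0, mu_p

# Synthetic flow curve: shear rates in 1/s, stresses from tau = 20 + 0.5 * gamma_dot.
rates = [10.0, 20.0, 50.0, 100.0, 200.0]
stresses = [20.0 + 0.5 * g for g in rates]
tau0, mu_p = fit_bingham(rates, stresses)
print(tau0, mu_p)   # recovers the yield stress and plastic viscosity
```

A dosage study such as the one in the abstract would repeat this fit for each CNC dosage and compare the fitted τ₀ and μₚ against the neat paste.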

Keywords: cellulose nanocrystal, flow behavior, oil well cement, rheology

Procedia PDF Downloads 209
10881 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation

Authors: Zhidong Zhang

Abstract:

This study established a mixed-methods model for assessing statistics learning with Bayesian network models. There are three variants of the exploratory sequential design; one of them consists of three linked steps: qualitative data collection and analysis; development of a quantitative measure, instrument, or intervention; and quantitative data collection and analysis. The study used a scoring model of analysis of variance (ANOVA) as its content domain, with the aim of examining students’ learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx1 + γx2 + ε, served as a cognitive task for collecting data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process: the hierarchical structure was obtained from a qualitative cognitive analysis. The data from students’ learning of the ANOVA score model were used to provide evidence to the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students’ learning of the ANOVA score model were reported. In brief, this was a mixed-methods research design applied to statistics learning assessment; such designs open more possibilities for researchers to establish advanced quantitative models that begin from a theory-driven qualitative model.
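In a Bayesian network assessment model of this kind, latent skill nodes are updated from observed (evidential) variables. A minimal two-node sketch is shown below, assuming a single latent mastery node and one observed task response; the prior and conditional probability values are illustrative, not taken from the study.

```python
# Prior over the latent skill node (mastery of the ANOVA score model).
p_skill = {"mastered": 0.5, "not_mastered": 0.5}

# Conditional probability table: probability of a correct response on an
# observable task item, given the latent skill state (values are illustrative).
p_correct_given_skill = {"mastered": 0.85, "not_mastered": 0.25}

def posterior_skill(observed_correct: bool) -> dict:
    """Posterior P(skill | one observed response) via Bayes' rule."""
    unnormalized = {}
    for state, prior in p_skill.items():
        likelihood = (p_correct_given_skill[state] if observed_correct
                      else 1.0 - p_correct_given_skill[state])
        unnormalized[state] = prior * likelihood
    z = sum(unnormalized.values())
    return {state: p / z for state, p in unnormalized.items()}

print(posterior_skill(True))    # belief in mastery rises after a correct response
print(posterior_skill(False))   # and falls after an incorrect one
```

The hierarchical network in the study extends this idea to multiple layered skill nodes, with each decomposed semantic or performance step supplying one evidential variable.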

Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis

Procedia PDF Downloads 152