Search results for: requirement elicitation process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15705

10635 Balanced Scorecard as a Tool to Improve NAAC Accreditation – A Case Study in Indian Higher Education

Authors: CA Kishore S. Peshori

Abstract:

Introduction: India, a country of vast diversity and huge population, is projected to have the largest young population in the world by 2020. Higher education has been, and will always be, a basic requirement for transforming a developing nation into a developed one. To improve any system, it must be benchmarked, and various tools exist for benchmarking systems. Education in India is delivered by universities, which are mainly funded by the government; these universities in turn set up colleges to deliver education, which are again funded mainly by the government. Recently, however, autonomy has also been granted to universities and colleges, and foreign universities are waiting to enter India. With such a large number of universities and colleges, it has become increasingly necessary to measure these institutions for benchmarking purposes, and several tools are available for doing so. In India, college assessment has been made compulsory by the UGC, and NAAC has been officially recognised as the accreditation body. NAAC assessment is based on seven criteria: 1. Curricular aspects, 2. Teaching, learning and evaluation, 3. Research, consultancy and extension, 4. Infrastructure and learning resources, 5. Student support and progression, 6. Governance, leadership and management, 7. Innovation and best practices. NAAC seeks to benchmark institutions for the identification, sustainability, dissemination and adaptation of best practices. It grades institutions against these seven criteria, and institutional funding is based on these grades. Many colleges are struggling to obtain the best grades but have not come across a systematic tool for achieving them. The Balanced Scorecard (BSC) developed by Kaplan has been a successful tool for corporates to develop best practices, increase their financial performance, and retain and grow their customers so as to take the organization to the next level. It is time to test this tool in an educational institution. Methodology: The paper develops a prototype BSC for a college based on secondary data. Once the prototype is developed, the researcher will use a questionnaire to test the tool for successful implementation. The success of this research will depend on the implementation of the BSC in an institution and on the improvement of its grading as a result. Time is a major constraint in this research, as a NAAC accreditation and reaccreditation cycle takes a minimum of four years; the methodology therefore limits itself to secondary data and a questionnaire circulated to colleges along with the prototype BSC model. Conclusion: The BSC is a successful tool for enhancing the growth of an organization, and educational institutes are no exception; the BSC only has to be realigned to suit the NAAC criteria. Once the prototype is developed, its success can be tested only through implementation, but this research paper is the first step towards developing the tool, and it also initiates that process by developing a questionnaire and evaluating the responses in order to move to the next stage of actual implementation.

Keywords: balanced scorecard, benchmarking, NAAC, UGC

Procedia PDF Downloads 253
10634 A Typology System to Diagnose and Evaluate Environmental Affordances

Authors: Falntina Ahmad Alata, Natheer Abu Obeid

Abstract:

This paper is a research report of an experimental study on a proposed typology system to diagnose and evaluate the affordances of varying architectural environments. The study focused on architectural environments that have been developed through a shift in use, i.e., adaptive reuse. The novelty of the newly developed environments was tested in terms of human responsiveness and interaction using a variety of selected cases. The study is a follow-up on previous research by the same authors, in which a typology of 16 categories of environmental affordances was developed and introduced. The current study introduced other new categories, which together with the previous ones establish what could be considered a basic language of affordance typology. The experiment was conducted on ten architectural environments while adopting two processes: 1. a diagnostic process, in which the environments were interpreted in terms of their affordances using the previously developed affordance typology; 2. an evaluation process, in which the diagnosed environments were evaluated using measures of emotional experience and the architectural evaluation criteria of beauty, economy and function. The experimental study demonstrated that the typology system was capable of diagnosing different environments in terms of their affordances. It also introduced new categories of human interaction: “multiple affordances,” “conflict affordances,” and “mix affordances.” The different possible combinations and mixtures of categories proved capable of producing large numbers of further categories. This research is an attempt to draw a roadmap for designers to diagnose and evaluate the affordances within different architectural environments. It is hoped that it will provide future guidance for developing the best possible adaptive reuse according to the most suitable affordance category within a proposed design.

Keywords: affordance theory, affordance categories, architectural environments, architectural evaluation criteria, adaptive reuse environment, emotional experience, shift in use environment

Procedia PDF Downloads 170
10633 E-Learning Recommender System Based on Collaborative Filtering and Ontology

Authors: John Tarus, Zhendong Niu, Bakhti Khadidja

Abstract:

In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments and a means of providing relevant recommendations to online learners. E-learning recommenders continue to play an increasing educational role in helping learners find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level and study level. Conventional recommendation techniques such as collaborative filtering and content-based filtering deal with only two types of entities, namely users and items with their ratings, and do not take learner characteristics into account in their recommendation process. Therefore, conventional recommendation techniques cannot make accurate and personalized recommendations in e-learning environments. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate the learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system in the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that our proposed recommendation technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
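
To make the hybrid idea concrete, the following minimal sketch (not the authors' implementation; the ratings, the learner attributes and the blending weight alpha are hypothetical) shows user-based collaborative filtering in which neighbour weights blend rating similarity with an ontology-derived learner-profile similarity; the profile similarity alone can also serve cold-start learners.

```python
# Illustrative sketch: collaborative filtering blended with ontology-derived
# learner-profile similarity (learning style, skill level, study level).
import numpy as np

# Hypothetical data: rows = learners, columns = learning materials, 0 = unrated.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 4, 1],
    [1, 1, 5, 4],
    [0, 1, 4, 4],
], dtype=float)

# Learner characteristics taken from the ontology (categorical attributes).
profiles = [
    {"learning_style": "visual", "skill": "beginner", "study_level": "UG"},
    {"learning_style": "visual", "skill": "beginner", "study_level": "UG"},
    {"learning_style": "verbal", "skill": "advanced", "study_level": "PG"},
    {"learning_style": "verbal", "skill": "advanced", "study_level": "PG"},
]

def rating_similarity(u, v):
    """Cosine similarity over co-rated items."""
    mask = (ratings[u] > 0) & (ratings[v] > 0)
    if not mask.any():
        return 0.0
    a, b = ratings[u][mask], ratings[v][mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def profile_similarity(u, v):
    """Fraction of matching ontology attributes (simple semantic matching)."""
    keys = profiles[u].keys()
    return sum(profiles[u][k] == profiles[v][k] for k in keys) / len(keys)

def predict(u, item, alpha=0.6):
    """Weighted-average prediction; alpha blends rating vs. ontology similarity.
    With alpha=0 the prediction relies on profile similarity only (cold start)."""
    num = den = 0.0
    for v in range(len(ratings)):
        if v == u or ratings[v, item] == 0:
            continue
        w = alpha * rating_similarity(u, v) + (1 - alpha) * profile_similarity(u, v)
        num += w * ratings[v, item]
        den += abs(w)
    return num / den if den else 0.0

print(round(predict(0, 2), 2))  # predicted rating of learner 0 for material 2
```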

Keywords: collaborative filtering, e-learning, ontology, recommender system

Procedia PDF Downloads 350
10632 Language and Culture Exchange: Tandem Language Learning for University Students

Authors: Hebe Wong, Luz Fernandez Calventos

Abstract:

Tandem language learning, a language exchange process based on the principles of autonomy and reciprocity, provides opportunities for interlocutors to learn each other’s language by communicating online or face-to-face. While much attention has been paid to the process and outcomes of tandem learning via email, little has been discussed about the effectiveness of face-to-face tandem learning on language and culture exchange for university students. The LACTS (Language and Culture Tandem Scheme), an 8-week project, was set up to study students’ perceptions of conducting tandem learning to assist their language and culture exchange. Students of both postgraduate and undergraduate programmes (N=103) from a Hong Kong SAR university were put in groups of 4 to 6 according to their availability and language preferences and met for an hour a week. While sample task sheets on a range of topics were provided to assist the language exchange, all groups were encouraged to take charge of their meeting format and choose their own topics. At the end of the project, a 19-item questionnaire, which included both open- and closed-ended questions investigating students’ perceptions of reciprocal teaching and cultural exchange, was administered. Thirty-minute individual interviews were conducted to elicit students’ views and experiences of the LACTS activities. Quantitative and qualitative data analysis showed that most students agreed that the project had enhanced their cultural awareness and helped create an inclusive and participatory learning environment. Significant differences were found in students’ confidence in speaking their targeted language after joining the scheme. The interviews also provided rich data on the variety of formats and leadership patterns in student-led meetings, which could shed light on student autonomy and future tandem language learning projects.

Keywords: autonomy, reciprocity, tandem language learning, university students

Procedia PDF Downloads 38
10631 Circular Nitrogen Removal, Recovery and Reuse Technologies

Authors: Lina Wu

Abstract:

The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and threatens water quality, and nitrogen pollution control has become a global concern. The concentration of nitrogen in water is reduced by converting ammonia, nitrate and nitrite nitrogen into nitrogen-containing gas through biological treatment, physicochemical treatment and oxidation technologies. However, some wastewater with a high ammonia nitrogen content, including landfill leachate, is difficult to treat by traditional nitrification and denitrification because of its high COD content. The core of denitrification is that denitrifying bacteria convert the nitrite and nitrate produced by nitrification into nitrogen gas under anoxic conditions, but the low carbon-to-nitrogen ratio of leachate does not meet the conditions for denitrification. Many studies have shown that naturally occurring autotrophic anammox bacteria can combine nitrite and ammonia nitrogen without a carbon source, through their functional genes, to achieve total nitrogen removal, which makes them very suitable for removing nitrogen from leachate. In addition, the process saves considerable aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. Short-cut (partial) nitrification and denitrification coupled with anammox ensures total nitrogen removal and improves removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal technology. In recent years, research has found that algal-bacterial symbiotic systems offer further advantages for water treatment, because such a process not only helps to improve the efficiency of wastewater treatment but also allows carbon dioxide reduction and resource recovery. Under light, microalgae use carbon dioxide dissolved in water or released through bacterial respiration to produce oxygen for bacteria through photosynthesis, and bacteria in turn provide metabolites and inorganic carbon sources for the growth of microalgae, which may allow the algal-bacterial symbiotic system to save most or all of the aeration energy consumption. It has become a trend to make microalgae and light-avoiding anammox bacteria play synergistic roles by adjusting the light-to-dark ratio. Microalgae in the outer layer of light-exposed granules block most of the light and provide cofactors and amino acids that promote nitrogen removal. In particular, Myxococcota MYX1 can degrade extracellular proteins produced by microalgae, providing amino acids for the entire bacterial community, which helps anammox bacteria save metabolic energy and adapt to light. As a result, initiating and maintaining a process that combines dominant algae with anaerobic denitrifying bacterial communities has great potential in treating landfill leachate. Chlorella shows an excellent removal effect and can withstand extreme environments of high ammonia nitrogen, high salinity and low temperature. It is therefore urgent to study whether an algal-sludge mixture rich in denitrifying bacteria and Chlorella can greatly improve the efficiency of landfill leachate treatment in an anaerobic environment where photosynthesis is stopped. The optimal dilution of simulated landfill leachate can be found by determining the treatment performance of the same batch of bacteria-algae mixtures under different initial ammonia nitrogen concentrations and comparing the results. High-throughput sequencing technology was used to analyze the changes in microbial diversity, related functional genera and functional genes under optimal conditions, providing a theoretical and practical basis for the engineering application of a novel bacteria-algae symbiosis system in biogas slurry treatment and resource utilization.

Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification, algae-bacteria interaction

Procedia PDF Downloads 25
10630 The Relationship of Creativity and Innovation in Artistic Work and Their Importance in Improving the Artistic Organizational Performance

Authors: Houyem Kotti

Abstract:

Development in societies requires that they change continuously in various respects, a change that demands continuous adaptation to the realities of the technological age. For individuals to perform their duties or tasks well, it is necessary to provide all the basic requirements that increase the efficiency and effectiveness of the personnel working to accomplish their tasks. The success of industries and organizations is linked to the need to develop individuals who are creative and innovative; this formation process is considered a driver of economic development and social prosperity and a means of improving the quantity and quality of artistic work. Creativity and innovation therefore play an important role in improving the performance of the artistic organization, as they are among the variables affecting the organization's ability to grow and invest, and to provide better services to its customers, especially in the face of competition, traditional working methods, and environments that discourage and hinder creativity and impair any process of development, change or creative behavior. The research methodology for this study is qualitative: several interviews were conducted with artists and experts in the artistic field, and the related literature was reviewed to collect the necessary qualitative data from secondary sources such as statistical reports and previous research studies. In this research, we attempt to clarify the relationship between creativity and innovation and their importance in the artistic organization, the conditions for achieving innovation, and its constraints, barriers, and challenges, as well as the impacts of creativity and innovation on the performance of artistic organizations, explaining this mechanism so as to ensure the continuity of these organizations and keep pace with developments in the global economic environment.

Keywords: artistic work, creativity and innovation, artistic organization, performance

Procedia PDF Downloads 236
10629 The Study of Seed Coating Effects on Germination Speed of Astragalus Adscendens under Different Moisture Conditions and Planting Depth in the Boroujerd Region

Authors: Hamidreza Mehrabi, Mandana Rezayee

Abstract:

Seed coating is an enhancement technique in which various materials are applied to the outer surface of the seed to minimize negative environmental effects and increase the ability of plant establishment. This study was conducted to assess the effects of seed coating on the germination speed of Astragalus adscendens under different conditions of drought stress and planting depth, using a completely randomized factorial design with four replications. The coating treatment had four levels: no coating (NC), mineral-based coating (CC), organic-based coating (OC) and hydrogel-based coating (HC); the moisture treatment had three levels of soil moisture content on a dry-soil basis, 9%, 14% and 21%; and the planting-depth treatment had two levels, surface sowing and a depth of three times the seed diameter. Germination speed was evaluated during the test. The main results showed that the moisture and planting-depth treatments were significant at the 1% level (p < 0.01), while the coating material had no significant effect. In the interaction between coating material and soil moisture, no significant differences in germination speed were observed between the coating treatments and the uncoated control, but there was a significant difference between the 9% and 21% moisture treatments. In the three-way interaction, increasing moisture and planting depth enhanced germination speed; although this was not statistically significant, it represented an important descriptive difference, because no growth occurred at the 9% moisture level with shallow sowing (high stress), whereas the coated treatments still grew appreciably. Seed coating can therefore be useful in enhancing plant performance.

Keywords: seed coating, soil moisture, sowing depth, germination percentage

Procedia PDF Downloads 254
10628 Study of Microbial Diversity Associated with Tarballs and Their Exploitation in Crude Oil Degradation

Authors: Varsha Shinde, Belle Damodara Shenoy

Abstract:

Tarballs are crude oil remnants found in oceans after a long-term weathering process and have been a global concern for several decades as a potential marine pollutant. Because of their complicated structure, microbial remediation of tarballs in the natural environment is a slow process. They are rich in high molecular weight alkanes and polyaromatic hydrocarbons, which are resistant to microbial attack and other environmental factors and therefore remain in the environment for a long time. However, it has been found that many bacteria and fungi inhabit tarballs for nutrients and shelter. Many of them are presumed to be oil degraders, while others are thought to benefit from the by-products formed during hydrocarbon metabolism. Tarballs thus form a special and interesting ecological niche for microbes. This work aimed to study the diversity of bacteria and fungi from tarballs and to examine their potential application in crude oil degradation. Tarball samples were collected from Betul beach in south Goa (India). Different methods were used to isolate the culturable fraction of bacteria and fungi, which were then sequenced for the 16S rRNA gene and ITS for molecular-level identification. The 16S rRNA gene sequence analysis revealed the presence of 13 bacterial genera/clades (Alcanivorax, Brevibacterium, Bacillus, Cellulomonas, Enterobacter, Klebsiella, Marinobacter, Nitratireductor, Pantoea, Pseudomonas, Pseudoxanthomonas, Tistrella and Vibrio), while the ITS sequence analysis placed the fungi in 8 diverse genera/clades (Aspergillus, Byssochlamys, Monascus, Paecilomyces, Penicillium, Scytalidium/Xylogone, Talaromyces and Trichoderma). All bacterial isolates were screened for oil degradation capacity. Potential strains were subjected to a crude oil degradation experiment for quantification, and the results were analyzed by GC-MS-MS.

Keywords: bacteria, biodegradation, crude oil, diversity, fungi, tarballs

Procedia PDF Downloads 207
10627 The Application of Participatory Social Media in Collaborative Planning: A Systematic Review

Authors: Yujie Chen, Zhen Li

Abstract:

In the context of planning transformation, how to promote public participation in the formulation and implementation of collaborative planning has been a focal issue of discussion. However, existing studies have often been case-specific or focused on a specific design field, leaving the role of participatory social media (PSM) in urban collaborative planning generally open to question. A systematic database search was conducted in December 2019. Articles and projects were eligible if they reported a quantitative empirical study applying participatory social media in the collaborative planning process (prospective, retrospective, experimental or longitudinal research, or collective actions in planning practice). Twenty studies and seven projects were included in the review. Findings showed that social media are generally applied in the fields of public spatial behavior, transportation behavior, and community planning, with new technologies and new datasets. PSM has provided a new platform for participatory design, decision analysis, and collaborative negotiation, and is most widely used in participatory design. The review extracted several existing forms of PSM, which mainly plays three roles: a language of decision-making for communication, a study mode for spatial evaluation, and a decision agenda for interactive decision support. Three areas for optimizing PSM were recognized: improving the scale of participation, improving grass-roots organization, and the promotion of politics. However, participants can generally only provide information and comments through PSM in future collaborative planning processes; therefore, the issues of low data response rates, poor spatial data quality, and participation sustainability deserve more attention and solutions.

Keywords: participatory social media, collaborative planning, planning workshop, application mode

Procedia PDF Downloads 112
10626 The Impact of the New Head Injury Pathway on the Number of CTs Performed in a Paediatric Population

Authors: Amel M. A. Osman, Roy Mahony, Lisa Dann, McKenna S.

Abstract:

Background: Computed tomography (CT) is a significant source of radiation in the paediatric population. A new head injury (HI) pathway was introduced in 2021, which changed the previous process of HI patients being jointly admitted under general paediatrics and surgery to admission under the Emergency Medicine team. Admitted patients included those with positive CT findings not requiring immediate neurosurgical intervention and those who did not meet the current criteria for urgent CT brain as per NICE guidelines but were still symptomatic and required prolonged observation. This approach aims to decrease the number of CT scans performed, and the main aim of this study was to assess the variation in CT scanning rates since the change in the admission process. A retrospective review was carried out of patients presenting to CHI PECU with HI over a 6-month period (01/01/19-31/05/19) compared to a 6-month period after the introduction of the new pathway (01/06/2022-31/12/2022). Data were collected from the electronic record databases, Symphony and PACS. Results: In 2019, there were 869 presentations of HI, among which 32 (3.68%) had CT scans performed; 2 (6.25%) of those scanned had positive findings. In 2022, there were 1122 HI presentations, with 47 (4.19%) CT scans performed and positive findings in 5 (10.6%) cases. Fifty-seven patients were admitted under the new pathway for observation, with 1 having a CT scan following admission. Conclusion: Lifetime radiation risks for children are not negligible. While there was no statistically significant reduction in CTs performed among HIs presenting to our department, a significant group met the criteria for admission under the PECU consultant for prolonged monitoring. There was also a greater proportion of abnormalities on CT scans performed in 2022, suggesting improved patient selection for imaging. Further data analysis is ongoing to determine whether those who were admitted would have been scanned under the old pathway.
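
For illustration, the two scanning rates reported above can be compared with a chi-squared test on a 2x2 contingency table; the choice of test is ours, not the authors' (the abstract states only that the reduction was not statistically significant).

```python
# Illustrative comparison of CT-scan rates before (2019) and after (2022) the pathway,
# using the counts reported in the abstract.
from scipy.stats import chi2_contingency

#                 scanned  not scanned
table = [[32, 869 - 32],    # 2019: 32 CTs out of 869 head-injury presentations
         [47, 1122 - 47]]   # 2022: 47 CTs out of 1122 presentations

chi2, p, dof, expected = chi2_contingency(table)
print(f"2019 rate: {32/869:.2%}, 2022 rate: {47/1122:.2%}, p = {p:.3f}")
```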

Keywords: head injury, CT, admission, guideline

Procedia PDF Downloads 36
10625 The Conceptual Exploration of Comfort Zone by Using Content Analysis

Authors: Lilla Szabó Hangya, Szilvia Jambori

Abstract:

The comfort zone is a less-studied area in the field of psychology. One of the most important definitions is that the comfort zone is a psychological state in which things feel familiar to a person, with low levels of anxiety and stress. However, the validity of the construct has not been confirmed so far. The aim of our pilot research was to test which psychological factors determine how young adults behave during the decision process of staying in or leaving their comfort zone. Every person has a number of comfort zones, so we are not able to measure the comfort zone directly, only the personality traits that predict whether someone leaves it more or less easily. In our study, we first wanted to clarify the meaning of the comfort zone. 110 young adults (male: 37, female: 73; ages 18 to 70, average age: 26.6) took part in the study. Besides their demographic data, we asked them what the comfort zone means to them. The results showed that the meanings given could be grouped into five dimensions: comfort (49.6%), leaving it/change (8.1%), ambivalent feelings (10.6%), relatedness to other people (10.6%), and the pursuit of self-realization (16.8%). Our results also demonstrated age-related characteristics. For young people at the age of 19, the comfort zone is related to other people, because peer relationships become more important during adolescence. Participants aged 20-30 answered that the comfort zone means comfort and stability for them: their life becomes stable for a while, as they are studying or working. At the age of 25, however, when they finish university, most of them answered that the comfort zone means a process of change, while for participants at the age of 27 it means the pursuit of self-realization. After that period, at the age of 31, when they have families and stable jobs, stability again dominates. We saw that the comfort zone has many more meanings than simply a pleasant psychological state. In further research, we would like to determine which psychological factors relate to the comfort zone and what kind of personality traits predict leaving or staying in one's comfort zone. We want to examine the relationship between the comfort zone and subjective well-being, life satisfaction, self-efficacy and self-esteem.

Keywords: comfort zone, development, personality trait, young adults

Procedia PDF Downloads 321
10624 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are increasingly concerned with adopting their own strategies for increased efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that fosters changes to companies’ operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization’s competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency and sustainability). The constructs of radical and incremental operational changes, each treated as one variable, were subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normality and outliers were checked. Moreover, the unidimensionality, reliability and validity of the latent factors were assessed. Exploratory factor analysis and confirmatory factor analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is lower, albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms’ efficiency. From the above, it is apparent that embedding the concept of change in a firm’s product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective.
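
A minimal sketch of how the hypothesised measurement and structural model (H1-H4) could be specified and estimated by maximum likelihood, assuming the third-party semopy package; the item names (rc_*, ic_*, ef*, su*) and the CSV file are hypothetical placeholders, not the authors' instrument.

```python
# Sketch only: lavaan-style model description estimated with semopy (assumed package).
import pandas as pd
import semopy

model_desc = """
# measurement model (latent =~ indicators)
Radical     =~ rc_prod1 + rc_prod2 + rc_proc1 + rc_proc2
Incremental =~ ic_prod1 + ic_prod2 + ic_proc1 + ic_proc2
Efficiency  =~ ef1 + ef2 + ef3
Sustain     =~ su1 + su2 + su3

# structural model (H1-H4: positive paths from changes to outcomes)
Efficiency ~ Radical + Incremental
Sustain    ~ Radical + Incremental
"""

df = pd.read_csv("survey_items.csv")   # hypothetical file holding the 380 Likert responses
model = semopy.Model(model_desc)
model.fit(df)                          # maximum-likelihood estimation
print(model.inspect())                 # path estimates, standard errors, p-values
```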

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 119
10623 Thermal Method Production of the Hydroxyapatite from Bone By-Products from Meat Industry

Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak

Abstract:

Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One of these sources could be by-products from the meat industry, which contain a prominent quantity of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bones, is a leading material applied in bone surgery and also in stomatology. It is a material that is biocompatible, bioactive and osteoinductive. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp, called bone sludge, which was formed as waste in the deproteinization of bones, in which a protein hydrolysate was the main product. Hydroxyapatite was obtained by calcination in an electrically heated chamber kiln in an air atmosphere in two stages. In the first stage, the material was calcined at 600°C for 3 hours. In the next stage, the homogenized material was calcined at three different temperatures (750°C, 850°C and 950°C), holding the material at the maximum temperature for 3.0 hours. Bone sludge: Bone sludge was formed as waste in the deproteinization of bones, in which a protein hydrolysate was the main product. Pork bones coming from the partition of meat were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at a temperature of 130-135°C under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the solution of protein-fat hydrolysate passed into a decanter, where the bone sludge was separated from it. Results of the study: The phase composition was analyzed by the X-ray diffraction (XRD) method. Hydroxyapatite was the only crystalline phase observed in all the calcination products, and the XRD investigation showed that the degree of crystallinity of hydroxyapatite increased with calcination temperature. Conclusion: The research showed that the phosphorus content is around 12%, whereas the calcium content amounts to 28% on average. The calcination of bone waste at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. X-ray investigations confirmed that hydroxyapatite is the main component of the calcination products and that its degree of crystallinity increased with calcination temperature. The contents of calcium and phosphorus increased distinctly with calcination temperature, whereas the content of acid-soluble phosphorus decreased; this may be connected with the higher degree of crystallinity and more stable structure of the material obtained at higher temperatures. Acknowledgements: The authors would like to thank The National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project.

Keywords: bone by-products, bone sludge, calcination, hydroxyapatite

Procedia PDF Downloads 270
10622 An Efficient Approach for Shear Behavior Definition of Plant Stalk

Authors: M. R. Kamandar, J. Massah

Abstract:

Information on the impact cutting behavior of plant stalks plays an important role in the design and fabrication of plant cutting equipment. It is difficult to establish a theoretical method for defining the cutting properties of plant stalks because the cutting process is complex; thus, it is necessary to set up an experimental approach to determine the cutting parameters for a single stalk. To measure the shear force, shear energy and shear strength of plant stalks, a special impact cutting tester was fabricated. It was similar to an Izod impact tester for metals, but a cutting blade and a data acquisition system were attached to the end of the pendulum's arm. The apparatus included four strain gauges and a digital indicator to show the real-time cutting force of the plant stalk. To measure the shear force and to test the apparatus, the stalks of two plants, buxus and privet, were selected. The samples (buxus and privet stalks) were cut by the apparatus under impact cutting at four loading rates (1, 2, 3 and 4 m.s-1) and at three internodes (fifth, tenth and fifteenth). In the buxus cutting analysis, the minimum cutting energy was obtained at the fifth internode and a loading rate of 4 m.s-1, and the maximum shear energy was obtained at the fifteenth internode and a loading rate of 1 m.s-1. In the privet cutting analysis, the minimum shear energy consumption was obtained at the fifth internode and a loading rate of 4 m.s-1, and the maximum shear energy was obtained at the fifteenth internode and a loading rate of 1 m.s-1. The statistical analysis for both plants showed that increasing the impact cutting speed decreased the shear energy consumption and shear strength. For both plants, the results also showed that the shear force decreased as the cutting speed increased.
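
The abstract does not give the energy relation used by the tester; the sketch below shows the standard Izod-type pendulum calculation such an instrument typically relies on. The pendulum mass, arm length, angles and stalk diameter are made-up values for illustration only.

```python
# Illustrative pendulum-energy calculation (our assumption of the standard relation,
# not the paper's data): energy absorbed by the stalk = potential energy lost by the
# pendulum between the release angle and the follow-through (rise) angle.
import math

def shear_energy(mass_kg, arm_length_m, release_deg, rise_deg, g=9.81):
    """Both angles are measured from the vertical; the height of the pendulum
    centre of mass is L*(1 - cos(angle))."""
    drop = arm_length_m * (1 - math.cos(math.radians(release_deg)))
    rise = arm_length_m * (1 - math.cos(math.radians(rise_deg)))
    return mass_kg * g * (drop - rise)          # joules

def specific_shear_energy(energy_j, stalk_area_m2):
    """Shear energy per unit cross-sectional area of the stalk (J/m^2)."""
    return energy_j / stalk_area_m2

E = shear_energy(mass_kg=2.0, arm_length_m=0.4, release_deg=150, rise_deg=120)
area = math.pi * 0.004 ** 2                     # ~8 mm diameter stalk (assumed)
print(f"absorbed energy: {E:.2f} J")
print(f"specific shear energy: {specific_shear_energy(E, area):.0f} J/m^2")
```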

Keywords: Buxus, Privet, impact cutting, shear energy

Procedia PDF Downloads 107
10621 Performance Evaluation of Polyethyleneimine/Polyethylene Glycol Functionalized Reduced Graphene Oxide Membranes for Water Desalination via Forward Osmosis

Authors: Mohamed Edokali, Robert Menzel, David Harbottle, Ali Hassanpour

Abstract:

The forward osmosis (FO) process has stood out as an energy-efficient technology for water desalination and purification, although the practical application of FO for desalination still relies on RO-based thin film composite (TFC) and cellulose triacetate (CTA) polymeric membranes, which have low performance. Recently, graphene oxide (GO) laminated membranes have been considered an ideal choice for overcoming the bottleneck of FO polymeric membranes owing to their simple fabrication procedures, controllable thickness and pore size, and high water permeability. However, the low stability of GO laminates in wet and harsh environments is still problematic. Recent developments of modified GO and hydrophobic reduced graphene oxide (rGO) membranes for FO desalination represent attempts to overcome the ongoing trade-off between desalination performance and stability, which has yet to be resolved prior to practical implementation. In this study, acid-functionalized GO nanosheets, cooperatively reduced and crosslinked by hyperbranched polyethyleneimine (PEI) and polyethylene glycol (PEG) polymers, respectively, are applied to fabricate an FO membrane with enhanced stability and performance, and are compared with other functionalized rGO FO membranes. The PEI/PEG-doped rGO membrane retained two compacted d-spacings (0.7 and 0.31 nm), compared with 0.82 nm for the acid-functionalized GO membrane alone. Besides increasing the hydrophilicity, the PEG coating layer on the PEI-doped rGO membrane surface enhanced the structural integrity of the membrane both chemically and mechanically. As a result of these synergetic effects, the PEI/PEG-doped rGO membrane exhibited a water flux of 7.7 LMH, a salt rejection of 97.9%, and a reverse solute flux of 0.506 gMH at low flow rates in the FO desalination process.
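
The relations below are the standard definitions behind the reported FO metrics (water flux in LMH, reverse solute flux in gMH, salt rejection); the coupon area, run time, volumes and concentrations are hypothetical numbers chosen only to land near figures of the same order as those reported, not the study's measurements.

```python
# How FO performance metrics are commonly computed from bench data (illustrative values).
def water_flux_lmh(volume_gain_l, area_m2, hours):
    """Water flux Jw = permeate volume gained / (membrane area x time), in L m^-2 h^-1."""
    return volume_gain_l / (area_m2 * hours)

def reverse_solute_flux_gmh(salt_gain_g, area_m2, hours):
    """Reverse solute flux Js = draw-solute mass leaked into the feed / (area x time), in g m^-2 h^-1."""
    return salt_gain_g / (area_m2 * hours)

def salt_rejection_pct(feed_conc, permeate_conc):
    """Observed salt rejection R = (1 - Cp/Cf) x 100 %."""
    return (1 - permeate_conc / feed_conc) * 100

area, t = 0.002, 4.0                                  # 20 cm^2 coupon, 4 h run (assumed)
print(water_flux_lmh(0.062, area, t))                 # ~7.7 LMH
print(reverse_solute_flux_gmh(0.004, area, t))        # ~0.5 gMH
print(salt_rejection_pct(35.0, 0.74))                 # ~97.9 %
```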

Keywords: desalination, forward osmosis, membrane performance, polyethyleneimine, polyethylene glycol, reduced graphene oxide, stability

Procedia PDF Downloads 80
10620 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies are facing many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which can create challenges around data management and governance; integrating data from multiple systems and technologies is a further challenge. Despite these pains, companies are still pursuing digitalization, because by embracing advanced technologies they can improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue that data is stored in data silos with different schemas and structures. The conventional approaches to addressing this issue involve utilizing data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company’s data and utilizes a knowledge graph for data storage and exploration.
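
A minimal sketch of the semantic-lifting step using the rdflib Python library (our illustration, not the paper's AAS toolchain): a record from a data silo is mapped from a local column name to a dictionary-defined semantic property and stored in a graph. The namespaces, IRIs and column names are placeholders, not real ECLASS identifiers.

```python
# Illustrative semantic lifting into a knowledge graph with rdflib (placeholder IRIs).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX     = Namespace("http://example.com/plant/")    # company namespace (assumed)
ECLASS = Namespace("http://example.com/eclass/")   # placeholder for dictionary IRIs

# A record as it might sit in a data silo, with a local, non-standard column name.
silo_row = {"asset_id": "motor_42", "max_rot_speed": 3000}

# Mapping from silo column names to dictionary-defined semantic properties.
semantic_map = {"max_rot_speed": ECLASS["MaxRotationSpeed"]}

g = Graph()
g.bind("ex", EX)
g.bind("eclass", ECLASS)

asset = EX[silo_row["asset_id"]]
g.add((asset, RDF.type, EX.Asset))
for column, value in silo_row.items():
    if column in semantic_map:                     # semantic lifting step
        g.add((asset, semantic_map[column], Literal(value, datatype=XSD.integer)))

print(g.serialize(format="turtle"))                # graph ready for storage and exploration
```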

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 78
10619 Using the Yield-SAFE Model to Assess the Impacts of Climate Change on Yield of Coffee (Coffea arabica L.) Under Agroforestry and Monoculture Systems

Authors: Tesfay Gidey Bezabeh, Tânia Sofia Oliveira, Josep Crous-Duran, João H. N. Palma

Abstract:

Ethiopia's economy depends strongly on Coffea arabica production. Coffee, like many other crops, is sensitive to climate change, so the urgent development and application of strategies against the negative impacts of climate change on coffee production is important. An agroforestry-based system is one strategy that may ensure sustainable coffee production amid the likely future impacts of climate change; it combines coffee with trees that buffer climatic extremes, thereby modifying microclimate conditions. This paper assessed coffee production under 1) coffee monoculture and 2) coffee grown in an agroforestry system, under a) the current climate and b) two different future climate change scenarios. The study focused on two representative coffee-growing regions of Ethiopia with different soil, climate, and elevation conditions. A process-based growth model (Yield-SAFE) was used to simulate coffee production over a time horizon of 40 years. The climate change scenarios considered were representative concentration pathways (RCP) 4.5 and 8.5. The results revealed that in monoculture systems, current coffee yields are between 1200-1250 kg ha⁻¹ yr⁻¹, with an expected decrease of 4-38% and 20-60% under the RCP 4.5 and 8.5 scenarios, respectively. In agroforestry systems, however, current yields are between 1600-2200 kg ha⁻¹ yr⁻¹, and the decrease is lower, ranging between 4-13% and 16-25% under the RCP 4.5 and 8.5 scenarios, respectively. From these results, it can be concluded that coffee production under agroforestry systems is more resilient to future climate change, which reinforces the idea of using this type of management in the near future to adapt to the negative impacts of climate change on coffee production.
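
As a simple arithmetic illustration of the reported ranges (the Yield-SAFE model itself is not reproduced here; only the baselines and percentage declines quoted in the abstract are used), the projected yield envelopes can be tabulated as follows.

```python
# Projected yield envelopes from the abstract's reported baselines and % declines.
baseline = {"monoculture": (1200, 1250), "agroforestry": (1600, 2200)}   # kg/ha/yr
decline = {                                                              # % loss ranges
    ("monoculture", "RCP4.5"): (4, 38),  ("monoculture", "RCP8.5"): (20, 60),
    ("agroforestry", "RCP4.5"): (4, 13), ("agroforestry", "RCP8.5"): (16, 25),
}

for (system, scenario), (lo, hi) in decline.items():
    y_lo, y_hi = baseline[system]
    worst = y_lo * (1 - hi / 100)      # lower baseline combined with the largest loss
    best = y_hi * (1 - lo / 100)       # higher baseline combined with the smallest loss
    print(f"{system:12s} {scenario}: ~{worst:.0f}-{best:.0f} kg/ha/yr")
```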

Keywords: Albizia gummifera, CORDEX, Ethiopia, HADCM3 model, process-based model

Procedia PDF Downloads 89
10618 Amazonian Native Biomass Residue for Sustainable Development of Isolated Communities

Authors: Bruna C. Brasileiro, José Alberto S. Sá, Brigida R. P. Rocha

Abstract:

The development of the Amazon region has been related to large-scale projects associated with economic cycles. These economic cycles originated from policies implemented by successive governments that exploited the region's resources and have not yet been able to improve the local population's quality of life. The development strategies implemented were based on vertical planning centered on the State, which did not know, and showed no interest in knowing, local needs and potential. The future of this region is a challenge that depends on a model of development based on human progress, associated with intelligent, selective and environmentally safe exploitation of natural resources and grounded in renewable, non-polluting energy generation sources – a differentiating factor for attracting new investments in a context of global energy and environmental crisis. In this process, the planning and support of the Brazilian State, local governments, and selective international partnerships are essential. The utilization of residual biomass enables sustainable development by integrating the production chain with the energy generation process, which could improve the employment conditions and income of riverside communities. Therefore, this research discusses how the use of local residual biomass (açaí lumps) could be an important instrument of sustainable development for isolated communities located in the Alcobaça Sustainable Development Reserve (SDR), Tucuruí, Pará State, a region where the most accessible energy sources for those who can pay are fossil fuels, which account for about 54% of final energy consumption. Integrating the açaí production chain with the use of this renewable energy source can also promote lower environmental impact and decrease the use of fossil fuels and carbon dioxide emissions.

Keywords: Amazon, biomass, renewable energy, sustainability

Procedia PDF Downloads 294
10617 Navigating the Assessment Landscape in English Language Teaching: Strategies, Challenges and Best Practices

Authors: Saman Khairani

Abstract:

Assessment is a pivotal component of the teaching and learning process, serving as a critical tool for evaluating student progress, diagnosing learning needs, and informing instructional decisions. In the context of English Language Teaching (ELT), effective assessment practices are essential to promote meaningful learning experiences and foster continuous improvement in language proficiency. This paper delves into various assessment strategies, explores associated challenges, and highlights best practices for assessing student learning in ELT. The paper begins by examining the diverse forms of assessment, including formative assessments that provide timely feedback during the learning process and summative assessments that evaluate overall achievement. Additionally, alternative methods such as portfolios, self-assessment, and peer assessment play a significant role in capturing various aspects of language learning. Aligning assessments with learning objectives is crucial. Educators must ensure that assessment tasks reflect the desired language skills, communicative competence, and cultural awareness. Validity, reliability, and fairness are essential considerations in assessment design. Challenges in assessing language skills—such as speaking, listening, reading, and writing—are discussed, along with practical solutions. Constructive feedback, tailored to individual learners, guides their language development. In conclusion, this paper synthesizes research findings and practical insights, equipping ELT practitioners with the knowledge and tools necessary to design, implement, and evaluate effective assessment practices. By fostering meaningful learning experiences, educators contribute significantly to learners’ language proficiency and overall success.

Keywords: ELT, formative, summative, fairness, validity, reliability

Procedia PDF Downloads 39
10616 A Simple Chemical Approach to Regenerating Strength of Thermally Recycled Glass Fibre

Authors: Sairah Bashir, Liu Yang, John Liggat, James Thomason

Abstract:

Glass fibre is currently used as reinforcement in over 90% of all fibre-reinforced composites produced. The high rigidity and chemical resistance of these composites are required for optimum performance but unfortunately result in poor recyclability; when such materials are no longer fit for purpose, they are frequently deposited in landfill sites. Recycling technologies, for example thermal treatment, can be employed to address this issue; temperatures typically between 450 and 600 °C are required to allow degradation of the rigid polymeric matrix and subsequent extraction of the fibrous reinforcement. However, due to the severe thermal conditions utilised in the recycling procedure, glass fibres become too weak for reprocessing in second-life composite materials. In addition, more stringent legislation is being put in place regarding the disposal of composite waste, and so it is becoming increasingly important to develop long-term recycling solutions for such materials. In particular, the development of a cost-effective method to regenerate the strength of thermally recycled glass fibres will have a positive environmental effect, as a reduced volume of composite material will be destined for landfill. This research study has demonstrated the positive impact of sodium hydroxide (NaOH) and potassium hydroxide (KOH) solutions, prepared at relatively mild temperatures and at concentrations of 1.5 M and above, on the strength of heat-treated glass fibres. As a result, alkaline treatments can potentially be applied to glass fibres recycled from composite waste to allow their reuse in second-life materials. The strength recovery process is being optimised by varying reaction parameters such as the molarity of the alkaline solution and the treatment time. It is believed that deep V-shaped surface flaws exist commonly on severely damaged fibre surfaces and are effectively removed to form smooth, U-shaped structures following alkaline treatment. Although these surface flaws are believed to be present on glass fibres, they had not previously been observed; in this research investigation, however, they have been detected through analytical techniques such as AFM (atomic force microscopy) and SEM (scanning electron microscopy). Reaction conditions such as the molarity of the alkaline solution affect the degree of etching of the glass fibre surface, and therefore the extent to which fibre strength is recovered. A novel method of determining the etching rate of glass fibres after alkaline treatment has been developed, and the data acquired can be correlated with strength. By varying reaction conditions such as the alkaline solution temperature and molarity, the activation energy of the glass etching process and the reaction order, respectively, can be calculated. The promising results obtained from NaOH and KOH treatments have opened an exciting route to strength regeneration of thermally recycled glass fibres, and the optimisation of the alkaline treatment process is continuing in order to produce recycled fibres with properties that match original glass fibre products. The reuse of such glass filaments indicates that closed-loop recycling of glass fibre reinforced composite (GFRC) waste can be achieved; in fact, the development of a closed-loop recycling process for GFRC waste is already underway in this research study.
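
The kinetics analysis mentioned above follows standard relations: an Arrhenius fit of etching rate against temperature gives the activation energy, and a log-log fit of rate against molarity gives the apparent reaction order. The sketch below uses made-up etching-rate data purely for illustration; it is not the study's dataset.

```python
# Illustrative Arrhenius and reaction-order fits for an alkaline etching process.
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical etching rates (arbitrary units, e.g. nm/h) at several temperatures (K).
T = np.array([313.0, 323.0, 333.0, 343.0])
k_T = np.array([12.0, 25.0, 48.0, 90.0])
slope, _ = np.polyfit(1.0 / T, np.log(k_T), 1)     # ln k = ln A - Ea/(R T)
Ea = -slope * R
print(f"activation energy ~ {Ea / 1000:.0f} kJ/mol")

# Hypothetical etching rates at fixed temperature, varying NaOH molarity (>= 1.5 M).
M = np.array([1.5, 2.0, 3.0, 4.5])
k_M = np.array([20.0, 27.0, 41.0, 62.0])
order, _ = np.polyfit(np.log(M), np.log(k_M), 1)   # rate proportional to [NaOH]^n
print(f"apparent reaction order n ~ {order:.2f}")
```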

Keywords: glass fibers, glass strengthening, glass structure and properties, surface reactions and corrosion

Procedia PDF Downloads 233
10615 The Application of Enzymes on Pharmaceutical Products and Process Development

Authors: Reginald Anyanwu

Abstract:

Enzymes are biological molecules that regulate the rate of almost all of the chemical reactions that take place within cells, and they have been widely used for product innovation. They are vital for life and serve a wide range of important functions in the body, such as aiding digestion and metabolism. The present study aimed to find out the extent to which these biological molecules have been utilized by the pharmaceutical, food and beverage, and biofuel industries in commercial and scale-up applications. Taking into account the escalating business opportunities in this vertical, biotech firms have also been penetrating the enzyme industry, especially in food. The aim of the study, therefore, was to find out how biocatalysis can be successfully deployed and how enzyme application can improve industrial processes. To achieve the purpose of the study, the researcher focused on the analytical tools that are critical for the scale-up implementation of enzyme immobilization, in order to ascertain the extent to which product yield can be increased at minimum logistical burden and maximum market profitability for the environment and the user. The researcher collected data from four pharmaceutical companies located in Anambra and Imo states of Nigeria, to which questionnaire items were distributed. The researcher also made personal observations on the applicability of these biological molecules to innovative products, since there is now a shifting trend toward the consumption of healthy, quality food. In conclusion, it was found that enzymes have been widely used for product innovation, although there are variations in their applications. It was also found that pivotal contenders in the enzyme market have lately been making heavy investments in the development of innovative product solutions. It was recommended that the application of enzymes to innovative products should be widely practiced.

Keywords: enzymes, pharmaceuticals, process development, quality food consumption, scale-up applications

Procedia PDF Downloads 128
10614 Effect of Naphtha Addition to a Cyclic Steam Stimulation Process for Reducing Heavy Oil Viscosity Using a Two-Level Factorial Design

Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz

Abstract:

The addition of solvents in cyclic steam stimulation is a technique that has shown an impact on the improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, at which point this ratio increases significantly. The mobility of the upgraded crude oil increases due to structural changes in its components, which are reflected in decreases in density and viscosity. In the present work, the effects of the variables temperature, time, and weight percentage of naphtha were evaluated using a 2³ factorial design of experiments. From the results of the analysis of variance (ANOVA) and the Pareto diagram, it was possible to identify the effect of each factor on viscosity reduction. The experimental representation of the crude-steam-naphtha interaction was carried out in a batch reactor on a Colombian heavy oil of 12.8° API and 3500 cP. The temperature, reaction time, and naphtha percentage ranges were 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density, with values in the range of 0.9542 to 0.9414 g/cm³, while the viscosity reduction was on the order of 55 to 70%. On the other hand, simulated distillation results, according to ASTM D7169, revealed significant conversion of the 315°C+ fraction. From the spectroscopic techniques of nuclear magnetic resonance (NMR), infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, it was determined that the increase in the yield of light fractions in the upgraded crude is due to the breaking of alkyl chains. This methodology for cyclic steam injection with naphtha and laboratory-scale characterization can be considered a practical tool in improved recovery processes.
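
To illustrate how main effects are estimated in a 2³ factorial design such as the one above, the sketch below codes the three factors at -1/+1 levels (temperature 270/300 °C, time 48/66 h, naphtha 3/9 wt%) and contrasts the mean response at the high and low levels of each factor. The viscosity-reduction responses are hypothetical, not the paper's data.

```python
# Main-effect estimation for a 2^3 factorial design (illustrative responses).
import itertools
import numpy as np

factors = {"temperature": (270, 300), "time": (48, 66), "naphtha": (3, 9)}
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 coded runs, standard order

# Hypothetical % viscosity reduction for each run, in the same standard order.
y = np.array([55, 62, 57, 64, 58, 66, 61, 70], dtype=float)

for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.1f} % viscosity reduction")
```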

Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha

Procedia PDF Downloads 153
10613 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work addresses the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor node of the new node is connected to it if and only if the length of the path between the root node and that neighbor node, with this new connection, is less than the current length of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan shorter routes while spending a smaller number of iterations; this strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of G2 quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean Theorem. Its advantage is that the obtained structure allows the curve length to be computed exactly, without the need for quadrature techniques for the resolution of integrals.
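
A minimal 2-D sketch of the rewiring rule described above (our illustration, with made-up coordinates): a neighbour is reconnected through the newly inserted node whenever that shortens its path from the root. Sampling, steering, collision checking and the intelligent sampling near obstacle vertices are omitted, and a full planner would also propagate the updated cost to the descendants of any rewired node.

```python
# Rewiring step of an RRT*-type planner (2-D, obstacle checks omitted).
import math

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

class Node:
    def __init__(self, x, y, parent=None):
        self.x, self.y = x, y
        self.parent = parent
        self.cost = 0.0 if parent is None else parent.cost + dist(parent, self)

def rewire(new_node, neighbours):
    """Reconnect each neighbour through new_node whenever that shortens its
    path from the root (the condition quoted in the abstract)."""
    for nb in neighbours:
        candidate_cost = new_node.cost + dist(new_node, nb)
        if candidate_cost < nb.cost:
            nb.parent = new_node
            nb.cost = candidate_cost

root = Node(0.0, 0.0)
a = Node(4.0, 0.0, parent=root)           # cost 4.0 from the root
b = Node(4.0, 3.0, parent=a)              # cost 9.0 via a
new = Node(2.0, 2.0, parent=root)         # cost ~2.83
rewire(new, [a, b])
print(round(b.cost, 2), b.parent is new)  # b is now routed through the new node
```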

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 154
10612 Effect of Supplementation of Hay with Noug Seed Cake (Guizotia abyssinica), Wheat Bran and Their Mixtures on Feed Utilization, Digestibility and Live Weight Change in Farta Sheep

Authors: Fentie Bishaw Wagayie

Abstract:

This study was carried out with the objective of studying the response of Farta sheep in feed intake and live weight change when fed hay supplemented with noug seed cake (NSC), wheat bran (WB), and their mixtures. A 7-day digestibility trial and a 90-day feeding trial were conducted using 25 intact male Farta sheep with a mean initial live weight of 16.83 ± 0.169 kg. The experimental animals were arranged randomly into five blocks based on initial live weight, and the five treatments were assigned randomly to each animal in a block. The five dietary treatments comprised grass hay fed ad libitum (T1), grass hay ad libitum + 300 g DM WB (T2), grass hay ad libitum + 300 g DM of a 67% WB : 33% NSC mixture (T3), grass hay ad libitum + 300 g DM of a 67% NSC : 33% WB mixture (T4), and 300 g DM/head/day NSC (T5). Common salt and water were offered ad libitum. The supplements were offered twice daily at 0800 and 1600 hours, and the experimental sheep were kept in individual pens. Supplementation with NSC, WB, and their mixtures significantly increased total dry matter (DM) intake (665.84-788 g/head/day; p < 0.01) and crude protein (CP) intake (p < 0.001). Unsupplemented sheep consumed significantly more (p < 0.01) grass hay DM (540.5 g/head/day) than the supplemented treatments (365.8-488 g/head/day), except T2. Among supplemented sheep, T5 had significantly higher (p < 0.001) CP intake (99.98 g/head/day) than the others (85.52-90.2 g/head/day). Supplementation significantly improved (p < 0.001) the digestibility of CP (66.61-78.9%), but there was no significant difference (p > 0.05) in DM, OM, NDF, and ADF digestibility between supplemented and control treatments. The very low CP digestibility (11.55%) of the basal diet (grass hay) used in this study indicated that feeding grass hay alone could not provide nutrients even for the maintenance requirement of growing sheep. Significant final and daily live weight gains (p < 0.001) in the range of 70.11-82.44 g/head/day were observed in supplemented Farta sheep, whereas unsupplemented sheep lost 9.11 g/head/day. Numerically, among the supplemented treatments, sheep supplemented with the higher proportion of NSC in T4 (201 g NSC + 99 g WB) gained more weight than the rest, though the difference was not statistically significant (p > 0.05). The absence of a statistical difference in daily body weight gain among all supplemented sheep indicated that supplementation with NSC, WB, and their mixtures had a similar potential to supply nutrients. Generally, supplementation of the basal grass hay diet with NSC, WB, and their mixtures improved feed conversion ratio, total DM intake, CP intake, and CP digestibility, and it improved growth performance with a similar trend in all supplemented Farta sheep relative to the control group. Therefore, from a biological point of view, to attain the required slaughter body weight within a short growing period, sheep producers can use any of the supplement types depending on local availability, but in the order of priority T4, T5, T3, and T2. However, based on partial budget analysis, supplementation with 300 g DM/head/day NSC (T5) can be recommended as profitable for producers with no capital limitation, whereas T4 supplementation (201 g NSC + 99 g WB DM/day) is recommended when capital is scarce.
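
For illustration only, the sketch below runs the randomized-complete-block analysis implied by the design (five weight-based blocks by five treatments) on simulated daily-gain values; the numbers are placeholders, not the trial data.

```python
# Illustrative randomized complete block analysis: 5 blocks x 5 treatments,
# response = daily gain (g/head/day). Gain values are placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

treatments = ["T1", "T2", "T3", "T4", "T5"]
rows = []
for block in range(1, 6):
    for t in treatments:
        # control loses weight, supplemented groups gain roughly 70-82 g/day
        base = {"T1": -9, "T2": 70, "T3": 74, "T4": 82, "T5": 78}[t]
        rows.append({"block": f"B{block}", "treatment": t, "gain": base + block})

df = pd.DataFrame(rows)
model = smf.ols("gain ~ C(treatment) + C(block)", data=df).fit()
print(anova_lm(model, typ=2))  # tests the treatment effect after removing block variation
```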

Keywords: weight gain, supplement, Farta sheep, hay as basal diet

Procedia PDF Downloads 43
10611 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, the scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformations, such as elimination, simplification, smoothing, exaggeration, displacement, aggregation, and size reduction. As a result of these operations at different levels of the data, the geometry of spatial features, such as length, sinuosity, orientation, perimeter, and area, is altered. This is worst in the preparation of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. When GIS users want to analyze a set of spatial data, they typically retrieve a data set and carry out the analysis without considering very important characteristics such as the scale, the purpose of the map, and the degree of generalization. Furthermore, GIS users use and compare maps with different degrees of generalization. Sometimes, GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that it is not suitable to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at scales of 1:10,000, 1:50,000, and 1:250,000, prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Features common to all three maps were selected, and an overlay analysis was repeated with different combinations of the data. Road, river, and land use data sets were used for the study. A simple model, to find the best place for a wildlife park, was used to identify the effects. The results show remarkable effects at different degrees of generalization: different locations with different geometries were obtained as outputs of the analysis. The study suggests that reasonable methods are needed to overcome this effect; as a solution, it is recommended to bring all the data sets to a common scale before carrying out the analysis.
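
A hedged sketch of the kind of scale-dependent overlay comparison described above is shown below, using GeoPandas; the file names, layer contents, buffer distance, and coordinate system are assumptions for illustration.

```python
# Illustrative overlay comparison of the same analysis run on two source scales.
# File names and layers are hypothetical placeholders.
import geopandas as gpd

# Same thematic layers digitized at two different scales, reprojected to metric units
roads_10k  = gpd.read_file("roads_1_10000.shp").to_crs(epsg=32644)
roads_250k = gpd.read_file("roads_1_250000.shp").to_crs(epsg=32644)
landuse    = gpd.read_file("landuse_1_10000.shp").to_crs(epsg=32644)

def suitable_area(roads, landuse, buffer_m=500):
    # Buffer roads into a corridor and intersect it with land use
    corridor = gpd.GeoDataFrame(geometry=roads.buffer(buffer_m), crs=roads.crs)
    overlay = gpd.overlay(landuse, corridor, how="intersection")
    return overlay.area.sum()

# The difference between these totals quantifies how generalization of the
# source maps propagates into the result of the same spatial analysis
print("area from 1:10,000 roads  :", suitable_area(roads_10k, landuse))
print("area from 1:250,000 roads :", suitable_area(roads_250k, landuse))
```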

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 318
10610 A Web-Based Real Property Updating System for Efficient and Sustainable Urban Development: A Case Study in Ethiopia

Authors: Eyosiyas Aga

Abstract:

The development of information and communication technology has transformed paper-based mapping and land registration processes into computerized and networked systems. The computerization and networking of a real property information system play a vital role in good governance and sustainable development in emerging countries through cost-effective, easy, and accessible service delivery for the customer. An efficient, transparent, and sustainable real property system is becoming basic infrastructure for urban development, improving data management and service delivery in organizations. In Ethiopia, real property administration is paper based; as a result, it is confronted with problems of data management, illegal transactions, corruption, and poor service delivery. In order to solve these problems and to facilitate the real property market, the implementation of a web-based real property updating system is crucial. Web-based real property updating is one of the automation (computerization) methods used to facilitate data sharing and reduce the time and cost of service delivery in a real property administration system. In addition, it is useful for integrating data across different information systems and organizations. The web-based system is designed by combining open source software supported by the Open Geospatial Consortium (OGC). The OGC standards, such as the Web Feature Service and Web Map Service, are the most widely used standards to support and improve web-based real property updating. These standards allow the integration of data from different sources and can be used to maintain the consistency of data throughout transactions. PostgreSQL and GeoServer are used to manage the real property data and connect it to the Flex viewer and user interface. The system is designed for both the internal updating system (municipality), which mainly updates spatial and textual information, and the external system (customer), which focuses on providing information to and interacting with the customer. This research assessed the potential of open source web applications and adopted this technology for a real property updating system in Ethiopia in a simple, cost-effective, and secure way. The system is designed by combining and customizing open source software to enhance its efficiency in a cost-effective way. The existing workflow for real property updating was analyzed to identify the bottlenecks, and a new workflow was designed for the system. The requirements were identified through a questionnaire and a literature review, and the system was prototyped for the study area. The research mainly aimed to integrate human resources with technology in the design of the system to reduce data inconsistency and security problems. In addition, the research reflects on the current situation of real property administration and the contribution of an effective data management system to efficient, transparent, and sustainable urban development in Ethiopia.
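
As a hedged sketch of how a client might retrieve parcel features from such a GeoServer/PostgreSQL setup over the OGC Web Feature Service, the snippet below issues a standard WFS GetFeature request; the endpoint URL, workspace, layer, and attribute names are hypothetical.

```python
# Illustrative WFS GetFeature request against a GeoServer instance.
# URL, workspace, layer and attribute names are hypothetical placeholders.
import requests

WFS_URL = "http://localhost:8080/geoserver/cadastre/ows"  # hypothetical GeoServer endpoint

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "cadastre:parcels",           # hypothetical layer published from PostgreSQL/PostGIS
    "outputFormat": "application/json",
    "CQL_FILTER": "parcel_id = 'AA-01-0042'"  # filter a single parcel by its identifier
}

resp = requests.get(WFS_URL, params=params, timeout=30)
resp.raise_for_status()
for feature in resp.json()["features"]:
    print(feature["properties"].get("owner"), feature["geometry"]["type"])
```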

Keywords: cadaster, real property, sustainable, transparency, web feature service, web map service

Procedia PDF Downloads 249
10609 An In-Depth Experimental Study of Wax Deposition in Pipelines

Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.

Abstract:

Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in the crude oil, flow rates, pipe diameters, and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process to achieve a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 m long, equipped with a solid detector system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel containing no paraffin or additives. Tests were undertaken at different temperatures of the circulating and cooling fluids and at different flow conditions. A solution of paraffin added to the diesel was then considered, and tests varying flow rate and cooling rate were run again. Viscosity, density, WAT (wax appearance temperature) by DSC (differential scanning calorimetry), pour point, and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, the temperature and pressure output signals of the loop were studied and compared with static laboratory WAT methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of the wax deposition process in the loop.
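
One simple way to turn the loop's pressure signal into a deposition estimate, assuming fully developed laminar flow at fixed flow rate and viscosity, is the pressure-drop method sketched below; the numerical values are illustrative, not measurements from the loop.

```python
# Illustrative pressure-drop estimate of wax deposit thickness, assuming
# Hagen-Poiseuille (laminar) flow at constant flow rate and viscosity.
def deposit_thickness(dp_clean, dp_fouled, radius_clean):
    """dP ~ 1/r^4 at fixed flow rate and viscosity, so the effective
    radius of the fouled pipe is r0 * (dP0/dP)^(1/4)."""
    r_eff = radius_clean * (dp_clean / dp_fouled) ** 0.25
    return radius_clean - r_eff

R0  = 0.0254 / 2   # 1-inch pipe, nominal internal radius in m
dp0 = 12_000.0     # Pa, baseline pressure drop over the test section (illustrative)
dp1 = 15_500.0     # Pa, pressure drop after wax build-up (illustrative)

delta = deposit_thickness(dp0, dp1, R0)
print(f"estimated deposit thickness: {delta * 1e3:.2f} mm")
```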

Keywords: paraffin deposition, flow assurance, chemical inhibitors, flow loop

Procedia PDF Downloads 91
10608 Inpatient Glycemic Management Strategies and Their Association with Clinical Outcomes in Hospitalized SARS-CoV-2 Patients

Authors: Thao Nguyen, Maximiliano Hyon, Sany Rajagukguk, Anna Melkonyan

Abstract:

Introduction: Type 2 diabetes is a well-established risk factor for severe SARS-CoV-2 infection. Uncontrolled hyperglycemia in patients with established or newly diagnosed diabetes is associated with poor outcomes, including increased mortality and hospital length of stay. Objectives: Our study aims to compare three different glycemic management strategies and their association with clinical outcomes in patients hospitalized for moderate to severe SARS-CoV-2 infection. Identifying optimal glycemic management strategies will improve the quality of patient care and patient outcomes. Method: This is a retrospective observational study of patients hospitalized at Adventist Health White Memorial with severe SARS-CoV-2 infection from 11/1/2020 to 02/28/2021. The following inclusion criteria were used: positive SARS-CoV-2 PCR test, age >18 years, diabetes or random glucose >200 mg/dL on admission, oxygen requirement >4 L/min, and treatment with glucocorticoids. The exclusion criteria were ICU admission within 24 hours, discharge within five days, death within five days, and pregnancy. The patients were divided into three glycemic management groups: Group 1, managed solely by the primary team; Group 2, managed by pharmacy; and Group 3, managed by an endocrinologist. Primary outcomes were average glucose on Day 5, change in glucose between Days 3 and 5, and average insulin dose on Day 5 among groups. Secondary outcomes were upgrade to ICU, inpatient mortality, and hospital length of stay. For statistics, we used IBM SPSS, version 28, 2022. Results: Most studied patients were Hispanic, older than 60, and obese (BMI >30). The study period was the first COVID-19 surge with the Delta variant in an unvaccinated population. Mortality was markedly high (>40%), with longer LOS (>13 days) and a high ICU transfer rate (18%). Most patients had markedly elevated inflammatory markers (CRP, ferritin, and D-dimer). These, in combination with glucocorticoids, resulted in severe hyperglycemia that was difficult to control. Average glucose on Day 5 was not significantly different between the primary, pharmacy, and endocrinologist groups (220.5 ± 63.4 vs. 240.9 ± 71.1 vs. 208.6 ± 61.7 mg/dL; P = 0.105). Change in glucose from Days 3 to 5 was not significantly different between groups but trended toward favoring the endocrinologist group (-26.6 ± 73.6 vs. 3.8 ± 69.5 vs. -32.2 ± 84.1; P = 0.052). Total daily dose (TDD) of insulin was not significantly different between groups but trended toward a higher TDD in the endocrinologist group (34.6 ± 26.1 vs. 35.2 ± 26.4 vs. 50.5 ± 50.9; P = 0.054). The endocrinologist group used significantly more preprandial insulin than the other groups (91.7% vs. 39.1% vs. 65.9%; P < 0.001). The pharmacy group used more basal insulin than the other groups (95.1% vs. 79.5% vs. 79.2%; P = 0.047). There were no differences among groups in the clinical outcomes: LOS, ICU upgrade, or mortality. Multivariate regression analysis controlling for age, sex, BMI, HbA1c level, renal function, liver function, CRP, D-dimer, and ferritin showed no difference in outcomes among groups. Conclusion: Given the high-risk factors in our population, and despite the efforts of the glycemic management teams, it is unsurprising that no differences in clinical outcomes (mortality and length of stay) were observed.
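
For illustration, the sketch below reproduces the type of three-group comparison reported above using simulated values that roughly match the published means and standard deviations; the group sizes and categorical counts are assumptions, not the study data.

```python
# Illustrative three-group comparison (day-5 glucose across management groups).
# Data are simulated to roughly match the reported means/SDs; counts are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
primary   = rng.normal(220.5, 63.4, 60)   # mg/dL, illustrative group sizes
pharmacy  = rng.normal(240.9, 71.1, 60)
endocrine = rng.normal(208.6, 61.7, 60)

# One-way ANOVA for the overall between-group difference in a continuous outcome
f_stat, p_value = stats.f_oneway(primary, pharmacy, endocrine)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# A chi-square test suits categorical outcomes such as preprandial insulin use
use_counts = np.array([[22, 38],   # used / not used per group (illustrative counts)
                       [27, 14],
                       [44, 4]])
chi2, p, dof, _ = stats.chi2_contingency(use_counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```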

Keywords: glycemic management, strategies, hospitalized, SARS-CoV-2, outcomes

Procedia PDF Downloads 426
10607 Energy Production with Closed Methods

Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani

Abstract:

In Kosovo, the problem with the electricity supply is huge, and supply does not meet the demands of consumers. Older thermal power plants, which are regarded as major environmental polluters, produce most of the energy. Our experiment is based on the production of electricity using a closed method that avoids environmental pollution by using waste, itself considered a pollutant, as fuel. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line was designed to provide electricity and central heating at the same time. The result is the generation of electricity and the release of heat for heating at minimal expense, with no gases released into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows the gas to be released through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by filtration of the gas through waste from wood processing (sawdust). The final product of this process is a gas, which passes through the carburetor; this enables the gas combustion process, drives the internal combustion engine and the generator, and produces electricity without releasing gases into the atmosphere. The results obtained show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. The final results show that coal fuel yielded more electricity and a higher heat release, followed by plastic waste, which also gave good results. The results obtained during these experiments prove that the current problems of lack of electricity and heating can be addressed at a lower cost while maintaining a clean environment and proper waste management.

Keywords: energy, heating, atmosphere, waste, gasification

Procedia PDF Downloads 216
10606 Dialysis Access Surgery for Patients in Renal Failure: A 10-Year Institutional Experience

Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Bookun

Abstract:

Introduction: Dialysis access is a key component of the care of patients with end-stage renal failure. In our institution, a combined service of vascular surgeons and nephrologists is responsible for the creation and maintenance of arteriovenous fistulas (AVFs), Tenckhoff catheters, and Hickman/Permcath lines. This poster investigates the last 10 years of dialysis access surgery conducted at St. Vincent's Hospital Melbourne. Method: A cross-sectional retrospective analysis was conducted of patients of St. Vincent's Hospital Melbourne (Victoria, Australia), using data from the Australasian Vascular Audit (Australian and New Zealand Society for Vascular Surgery). Descriptive demographic analysis was carried out, as well as analysis of operation type, length of hospital stay, postoperative deaths, and need for reoperation. Results: 2085 patients with renal failure were operated on between 2011 and 2020. 1315 were male (63.1%) and 770 were female (36.9%). The mean age was 58 (SD 13.8). 92% of patients scored three or greater on the American Society of Anesthesiologists classification system. Almost half had a history of ischaemic heart disease (48.4%), more than half had a history of diabetes (64%), and a majority had hypertension (88.4%). 1784 patients had a creatinine over 150 µmol/L (85.6%); the rest were on dialysis (14.4%). The most common access procedure was AVF creation, with 474 autologous AVFs and 64 prosthetic AVFs. There were 263 Tenckhoff insertions. We performed 160 cadaveric renal transplants. The most common location for AVF formation was brachiocephalic (43.88%), followed by radiocephalic (36.7%) and brachiobasilic (16.67%). Fistulas that required re-intervention were most commonly treated with angioplasty (n=163), followed by thrombectomy (n=136). There were 107 local fistula repairs. The average length of stay was 7.6 days (SD 12). There were 106 unplanned returns to theatre, most commonly for fistula creation, Tenckhoff insertion, or Permcath removal (71.7%). There were 8 deaths in the immediate postoperative period. Discussion: Access to dialysis is vital for patients with end-stage kidney disease and requires a multidisciplinary approach from nephrologists, vascular surgeons, and allied health practitioners. Our service provides a variety of dialysis access methods, predominantly fistula creation and Tenckhoff insertion. Patients with renal failure are heavily comorbid, and prolonged hospital admission following surgery is a source of significant healthcare expenditure. AVFs require careful monitoring and maintenance for ongoing utility, and our data reflect the multitude of operations required to maintain usable access. The requirement for dialysis is growing worldwide, and our data describe a local experience in access, with preferred methods, common complications, and the associated surgical interventions.

Keywords: dialysis, fistula, nephrology, vascular surgery

Procedia PDF Downloads 89