616 CO₂ Capture by Membrane Applied to Steel Production Process
Authors: Alexandra-Veronica Luca, Letitia Petrescu
Abstract:
Steel production is a major contributor to global warming potential. On average, 1.83 tons of CO₂ are emitted for every ton of steel produced, resulting in over 3.3 Mt of CO₂ emissions each year. The present paper focuses on the investigation and comparison of two O₂ separation methods and two CO₂ capture technologies applicable to the iron and steel industry. The O₂ used in steel production comes either from an Air Separation Unit (ASU) using distillation or from air separation using membranes. The CO₂ capture technologies are a two-stage membrane separation process and gas-liquid absorption using methyl di-ethanol amine (MDEA). Process modelling and simulation tools, as well as environmental tools, are used in the present study. The production capacity of the steel mill is 4,000,000 tonnes/year. In order to compare the two CO₂ capture technologies in terms of efficiency, performance, and sustainability, the following cases have been investigated: Case 1: steel production using O₂ from ASU and no CO₂ capture; Case 2: steel production using O₂ from ASU and gas-liquid absorption for CO₂ capture; Case 3: steel production using O₂ from ASU and membranes for CO₂ capture; Case 4: steel production using O₂ from the membrane separation method and gas-liquid absorption for CO₂ capture; and Case 5: steel production using membranes for both air separation and CO₂ capture. The O₂ separation rate obtained with the distillation technology was about 96%, compared with about 33% for the membrane technology. Similarly, the O₂ purity resulting from the conventional process (i.e., distillation) is higher than the O₂ purity obtained in the membrane unit (99.50% vs. 73.66%). The air flow-rate required for membrane separation is about three times higher than the air flow-rate for cryogenic distillation (549,096.93 kg/h vs. 189,743.82 kg/h). 
A CO₂ capture rate of 93.97% was obtained in the membrane case, while the CO₂ capture rate for gas-liquid absorption was 89.97%. In the membrane process, 6,626.49 kg/h of CO₂ with a purity of 95.45% is separated from a total of 23,352.83 kg/h of flue gas, while the absorption process recovers 6,173.94 kg/h of CO₂ with a purity of 98.79% from 21,902.04 kg/h of flue gas, with 156,041.80 kg/h of MDEA recycled. The simulation results, obtained using the ChemCAD process simulator, lead to the conclusion that membrane-based technology can be a suitable alternative for CO₂ removal in steel production. An environmental evaluation using Life Cycle Assessment (LCA) methodology was also performed. Considering the electricity consumption, the performance, and the environmental indicators, Case 3 can be considered the most effective. The environmental evaluation, performed using GaBi software, shows that membrane technology can lead to lower environmental emissions if membrane production is based on benzene derived from toluene hydrodealkylation and chlorine and sodium hydroxide are produced using mixed technologies.
Keywords: CO₂ capture, gas-liquid absorption, Life Cycle Assessment, membrane separation, steel production
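The membrane-case stream figures reported in the abstract above can be cross-checked with a short calculation. This is only a sketch: the permeate flow, purity and capture rate are taken from the abstract, while the CO₂ content of the flue gas is back-calculated, not reported.

```python
# Sanity check of the reported membrane-capture figures (values from the
# abstract; the feed CO2 flow is back-calculated, not a reported number).
permeate_flow = 6626.49    # kg/h, captured stream
permeate_purity = 0.9545   # CO2 mass fraction of the captured stream
capture_rate = 0.9397      # reported CO2 capture rate

co2_captured = permeate_flow * permeate_purity  # pure CO2 recovered, kg/h (≈ 6325)
co2_in_feed = co2_captured / capture_rate       # implied CO2 in flue gas, kg/h (≈ 6731)

print(co2_captured)
print(co2_in_feed)
```

The implied CO₂ content of the flue gas (roughly 6,731 kg/h out of 23,352.83 kg/h, i.e. about 29 wt%) is plausible for a steel-mill flue gas, which supports the internal consistency of the reported numbers.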
Procedia PDF Downloads 291
615 Properties and Microstructure of Scaled-Up MgO Concrete Blocks Incorporating Fly Ash or Ground Granulated Blast-Furnace Slag
Abstract:
MgO cements have the potential to sequester CO2 in construction products and can partially or completely replace PC in concrete. The construction block is a promising application for reactive MgO cements. The main advantages of blocks are: (i) suitability for sequestering CO2 due to their initially porous structure; (ii) lack of need for in-situ treatment, as carbonation can take place during fabrication; and (iii) high potential for commercialization. Both the strength gain and the carbon sequestration of MgO cements depend on the carbonation process. Fly ash and ground granulated blast-furnace slag (GGBS) are pozzolanic materials and have been shown to improve many of the performance characteristics of concrete, such as strength, workability, permeability, durability and corrosion resistance. A very limited amount of work has been reported on the production of MgO blocks on a large scale so far. A much more extensive study, in which blocks with different mix designs are produced, is needed to verify the feasibility of commercial production. The changes in the performance of the samples were evaluated by compressive strength testing. The properties of the carbonation products were identified by X-ray diffraction (XRD) and scanning electron microscopy (SEM)/field emission scanning electron microscopy (FESEM), and the degree of carbonation was obtained by thermogravimetric analysis (TGA), XRD and energy dispersive X-ray (EDX) analysis. The results of this study enabled an understanding of the relationship between lab-scale samples and scaled-up blocks based on their mechanical performance and microstructure. Results indicate that for both scaled-up and lab-scale samples, MgO samples always had the highest strength, followed by MgO-fly ash samples, while MgO-GGBS samples had the lowest strength. The lower strength of the MgO with fly ash/GGBS samples at an early stage is related to the relatively slow hydration process of pozzolanic materials. 
Lab-scale cubic samples were observed to have higher strength than scaled-up samples. The large size of the scaled-up samples made it more difficult for CO2 to reach the inner part of the samples, so fewer carbonation products formed. XRD, TGA and FESEM/EDX results indicate the existence of brucite and HMCs in the MgO samples, M-S-H and hydrotalcite in the MgO-fly ash samples, and C-S-H and hydrotalcite in the MgO-GGBS samples. The formation of hydration products (M-S-H, C-S-H, hydrotalcite) and carbonation products (hydromagnesite, dypingite) increased with curing duration, which explains the increasing strength. This study verifies the advantage of large-scale MgO blocks over common PC blocks and the feasibility of commercial production of MgO blocks.
Keywords: reactive MgO, fly ash, ground granulated blast-furnace slag, carbonation, CO₂
Procedia PDF Downloads 192
614 An Analysis of Emmanuel Macron's Campaign Discourse
Authors: Robin Turner
Abstract:
In the context of strengthening conservative movements such as “Brexit” and the election of US President Donald Trump, the global political stage was shaken up by the election of Emmanuel Macron to the French presidency over the far-right candidate Marine Le Pen. The election itself was a first for the Fifth Republic in that neither final candidate came from the two traditional major political parties: the left-wing Parti Socialiste (PS) and the right-wing Les Républicains (LR). Macron, who served as Minister of the Economy under his predecessor, founded the centrist liberal political party En Marche! in April 2016 before resigning from his post in August to launch his bid for the presidency. Between the party’s creation and the first round of elections a year later, Emmanuel Macron and En Marche! garnered enough support to reach the run-off election, finishing far ahead of many seasoned national political figures. Now months into his presidency, the youngest President of the Republic shows no sign of losing momentum. His unprecedented success raises many questions with respect to international relations, economics, and the evolving relationship between the French government and its citizens. The effectiveness of Macron’s campaign relies on many factors, one of which is his manner of communicating his platform to French voters. Using data from oral discourse and primary material from Macron and En Marche! in sources such as party publications and Twitter, the study categorizes linguistic instruments – address, lexicon, tone, register, and syntax – to identify prevailing patterns of speech and communication. The linguistic analysis in this project is two-fold. In addition to these findings’ stand-alone value, these discourse patterns are contextualized against comparable discourse of other 2017 presidential candidates, with particular emphasis on that of Marine Le Pen. 
Secondly, to provide an alternative approach, the study contextualizes Macron’s discourse using those of two immediate predecessors representing the traditional stronghold political parties, François Hollande (PS) and Nicolas Sarkozy (LR). These comparative methods produce an analysis that not only gives insight into a contributing factor to Macron’s successful 2017 campaign but also shows how Macron’s platform presents itself differently from previous presidential platforms. Furthermore, this study extends the analysis to supply data that contributes to a wider analysis of the defeat of “traditional” French political parties by the “start-up” movement En Marche!.
Keywords: Emmanuel Macron, French, discourse analysis, political discourse
Procedia PDF Downloads 261
613 Modeling the Impact of Aquaculture in Wetland Ecosystems Using an Integrated Ecosystem Approach: Case Study of Setiu Wetlands, Malaysia
Authors: Roseliza Mat Alipiah, David Raffaelli, J. C. R. Smart
Abstract:
This research takes a new approach in that it integrates information from both the environmental and social sciences to inform effective management of wetlands. A three-stage research framework was developed for modelling the drivers and pressures imposed on the wetlands and their impacts on the ecosystem and the local communities. Firstly, a Bayesian Belief Network (BBN) was used to predict the probability of anthropogenic activities affecting the delivery of different key wetland ecosystem services under different management scenarios. Secondly, Choice Experiments (CEs) were used to quantify the relative preferences that a key wetland stakeholder group (aquaculturists) held for the delivery of different levels of these key ecosystem services. Thirdly, a Multi-Criteria Decision Analysis (MCDA) was applied to produce an ordinal ranking of the alternative management scenarios, accounting for their impacts upon ecosystem service delivery as perceived through the preferences of the aquaculturists. This integrated ecosystem management approach was applied to a wetland ecosystem in Setiu, Terengganu, Malaysia, which currently supports a significant level of aquaculture activity. This research has produced clear guidelines to inform policy makers considering alternative wetland management scenarios: Intensive Aquaculture, Conservation or Ecotourism, in addition to the Status Quo. The findings are as follows. Firstly, the BBN revealed that current aquaculture activity is likely to have significant impacts on water column nutrient enrichment, but trivial impacts on caged fish biomass, especially under the Intensive Aquaculture scenario. Secondly, the best-fitting CE models identified several stakeholder sub-groups among the aquaculturists, each with distinct sets of preferences for the delivery of key ecosystem services. Thirdly, the MCDA identified Conservation as the most desirable scenario overall, based on its ordinal ranking in the eyes of most of the stakeholder sub-groups. 
Ecotourism and Status Quo scenarios were the next most preferred, and Intensive Aquaculture was the least desirable scenario. The methodologies developed through this research provide an opportunity for improving planning and decision-making processes that aim to deliver sustainable management of wetland ecosystems in Malaysia.
Keywords: Bayesian belief network (BBN), choice experiments (CE), multi-criteria decision analysis (MCDA), aquaculture
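The MCDA step described above ranks scenarios by aggregating their scores across weighted criteria. A minimal weighted-sum sketch is shown below; the criteria, weights and scores are hypothetical stand-ins, not the study's elicited data, and real MCDA typically aggregates per stakeholder sub-group.

```python
# Illustrative MCDA ranking by weighted sum (all scores/weights hypothetical).
scenarios = {
    "Conservation":          {"water_quality": 0.9, "fish_biomass": 0.6, "income": 0.4},
    "Ecotourism":            {"water_quality": 0.7, "fish_biomass": 0.5, "income": 0.6},
    "Status Quo":            {"water_quality": 0.5, "fish_biomass": 0.5, "income": 0.5},
    "Intensive Aquaculture": {"water_quality": 0.2, "fish_biomass": 0.8, "income": 0.7},
}
weights = {"water_quality": 0.5, "fish_biomass": 0.2, "income": 0.3}

def score(criteria):
    # Weighted sum of criterion scores for one scenario.
    return sum(weights[c] * v for c, v in criteria.items())

# Ordinal ranking, best scenario first.
ranking = sorted(scenarios, key=lambda s: score(scenarios[s]), reverse=True)
print(ranking)
```

With these illustrative numbers the ranking reproduces the ordering reported in the abstract (Conservation first, Intensive Aquaculture last), which is how an ordinal MCDA result is typically read.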
Procedia PDF Downloads 294
612 Variable Mapping: From Bibliometrics to Implications
Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek
Abstract:
Literature review is indispensable in research, and one of its key techniques is bibliometric analysis, of which science mapping is one method. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations. This approach is also used in reviews of the marketing literature. With the development of technology, researchers and practitioners now use commercially available software for this purpose. Science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) are used in recent publications to implement literature reviews and are useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing them remains a painstaking task, especially if authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of five articles on customer ideation was chosen. Next, keyword mapping was performed in the VOSviewer science mapping software, and the results were compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both approaches for formulating the research problem and for content/thematic analysis. The results show the advantage of variable mapping in the formulation of the research problem and in thematic/content analysis. 
First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a story that indicates the directions of the relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its review. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, the development of software enabling the automation of the variable mapping process on large data sets may be a breakthrough in the field of literature research.
Keywords: bibliometrics, literature review, science mapping, variable mapping
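The classic keyword-mapping baseline that the study compares against rests on counting keyword co-occurrences across articles, the step tools such as VOSviewer automate. A minimal sketch of that counting step follows; the keyword lists are invented for illustration and are not the five customer-ideation articles analysed in the study.

```python
# Keyword co-occurrence counting, the core of classic science mapping
# (hypothetical keyword lists, not the study's corpus).
from itertools import combinations
from collections import Counter

article_keywords = [
    ["customer ideation", "co-creation", "innovation"],
    ["customer ideation", "crowdsourcing"],
    ["co-creation", "innovation"],
]

cooccurrence = Counter()
for keywords in article_keywords:
    # Sort so each unordered pair is counted under one canonical key.
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

print(cooccurrence.most_common(3))
```

A variable map, by contrast, would record directed relationships between named variables rather than symmetric keyword co-occurrence counts, which is exactly the distinction the abstract draws.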
Procedia PDF Downloads 120
611 Assessing the Social Impacts of a Circular Economy in the Global South
Authors: Dolores Sucozhañay, Gustavo Pacheco, Paul Vanegas
Abstract:
In the context of sustainable development and the transition towards a sustainable circular economy (CE), evaluating the social dimension remains a challenge, so developing a suitable methodology is highly important. First, the change of economic model may cause significant social effects, which today remain unaddressed. Second, given the current level of globalization, CE implementation requires targeting global material cycles and causes social impacts on potentially vulnerable social groups. A promising methodology is the Social Life Cycle Assessment (SLCA), which embraces the philosophy of life cycle thinking and provides information complementary to environmental and economic assessments. In this context, the present work uses the updated SLCA Guidelines 2020 to assess the social performance of the recycling system of Cuenca, Ecuador, as an example of a social assessment method. Like many other developing countries, Ecuador depends heavily on the work of informal waste pickers (recyclers), who, even while contributing to a CE, face harsh socio-economic circumstances, including inappropriate working conditions, social exclusion, and exploitation. Under a Reference Scale approach (Type 1), 12 impact subcategories were assessed through 73 site-specific inventory indicators, using an ascending reference scale ranging from -2 to +2. Findings reveal a social performance below compliance levels with local and international laws, basic societal expectations, and practices in the recycling sector; only eight and five indicators present a positive score. In addition, a social hotspot analysis depicts collection as the most time-consuming lifecycle stage and the one with the most hotspots, mainly related to working hours and health and safety aspects. 
This study provides an integrated view of the recyclers’ contributions, challenges, and opportunities within the recycling system while highlighting the relevance of assessing the social dimension of CE practices. It also fosters an understanding of the social impact of CE operations in developing countries, highlights the need for a close north-south relationship in CE, and enables the connection among the environmental, economic, and social dimensions.
Keywords: SLCA, circular economy, recycling, social impact assessment
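The Reference Scale (Type 1) approach described above scores each inventory indicator on an ordinal scale from -2 to +2 and flags negatively scored indicators as potential hotspots. The sketch below illustrates that aggregation logic only; the indicator names and scores are hypothetical, not the study's 73 site-specific indicators.

```python
# Type 1 (reference scale) SLCA aggregation sketch for one impact
# subcategory (hypothetical indicators scored on the -2..+2 scale).
indicator_scores = {
    "working_hours": -2,
    "health_and_safety": -1,
    "fair_salary": -1,
    "social_benefits": 0,
    "community_engagement": 1,
}

# A simple subcategory score: the mean of its indicator scores.
subcategory_score = sum(indicator_scores.values()) / len(indicator_scores)

# Negatively scored indicators are candidate social hotspots.
hotspots = [name for name, s in indicator_scores.items() if s < 0]

print(subcategory_score)
print(hotspots)
```

In practice the SLCA Guidelines leave the aggregation rule (mean, worst case, weighted) to the practitioner; the mean is used here purely for illustration.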
Procedia PDF Downloads 151
610 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India
Authors: Disha Bhanot, Vinish Kathuria
Abstract:
This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer’s perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually under conditions unfavorable to the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; and with the increase in input costs of farming, the high variability in harvest price severely affects the profit margin of farmers, thereby affecting their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected using a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, the time gap between harvesting and selling, and the role of middlemen in selling, besides other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household and institutional characteristics. 
A Heckman two-stage model would be applied to find the probability of a farmer resorting to distress sale, as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings of the study would recommend suitable interventions and the promotion of strategies that would help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty.
Keywords: distress sale, horticulture, income loss, India, price uncertainty
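The Heckman two-stage procedure mentioned above corrects the second-stage (extent of distress sale) regression for selection into distress sale by adding the inverse Mills ratio computed from a first-stage probit. The sketch below shows only that key quantity; the probit index value is hypothetical and the full two-stage estimation is not reproduced.

```python
# Key quantity of the Heckman correction: the inverse Mills ratio
# lambda(z) = phi(z) / Phi(z), evaluated at the first-stage probit index.
from scipy.stats import norm

def inverse_mills_ratio(probit_index):
    # phi = standard normal density, Phi = standard normal CDF.
    return norm.pdf(probit_index) / norm.cdf(probit_index)

# Hypothetical probit index of 0 for a farmer on the selection margin.
print(inverse_mills_ratio(0.0))  # phi(0)/Phi(0) = 0.3989.../0.5
```

In the second stage, this ratio enters the outcome regression as an extra regressor; a significant coefficient on it indicates that ignoring selection would have biased the estimates.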
Procedia PDF Downloads 243
609 Unraveling Language Dynamics: A Case Study of Language in Education in Pakistan
Authors: Naseer Ahmad
Abstract:
This research investigates the intricate dynamics of language policy, ideology, and the choice of educational language as a medium of instruction in rural Pakistan. Focused on addressing the complexities of language practices in underexplored educational contexts, the study employed a case study approach, analyzing interviews with education authorities, teachers, and students, alongside classroom observations in English-medium and Urdu-medium rural schools. The research underscores the significance of understanding linguistic diversity within rural communities. The analysis of interviews and classroom observations revealed that language policies in rural schools are influenced by multiple factors, including historical legacies, societal language ideologies, and government directives. The dominance of Urdu and English as the preferred languages of instruction reflected a broader language hierarchy in which regional languages are often marginalized. This language ideology perpetuates a sense of linguistic inferiority among students who primarily speak regional languages. The impact of language choices on students' learning experiences and outcomes is a central focus of the research. It became evident that while policies advocate for specific language practices, the implementation often diverges due to multifarious socio-cultural, economic, and institutional factors. This disparity significantly impacts the effectiveness of educational processes, influencing pedagogical approaches, student engagement, academic outcomes, social mobility, and language choices. Based on the findings, the study concluded that, due to the gap between policy and practice, rural people have complex perceptions and language choices. They perceived Urdu as a national, lingua franca, cultural, easy, or low-status language. They perceived English as an international, lingua franca, modern, difficult, or high-status language. 
They perceived other languages as mother tongues, local, religious, or irrelevant languages. This research provided insights that are crucial for theory, policy, and practice, addressing educational inequities and inclusive language policies. It sets the stage for further research and advocacy efforts in the realm of language policies in diverse educational settings.
Keywords: language-in-education policy, language ideology, educational language choice, Pakistan
Procedia PDF Downloads 71
608 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection
Authors: Ali Hamza
Abstract:
Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. 
In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation provides a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network
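Two of the overlap metrics named in the abstract, the Dice coefficient and the Jaccard index, have simple definitions on binary masks. The sketch below computes them on tiny illustrative arrays, not study data.

```python
# Dice coefficient and Jaccard index on binary segmentation masks
# (illustrative arrays, not the study's ultrasound data).
import numpy as np

def dice(pred, truth):
    # Dice = 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    # Jaccard = |A ∩ B| / |A ∪ B|
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, truth))     # 2*2/(3+3)
print(jaccard(pred, truth))  # 2/4
```

Note the two metrics are monotonically related (Dice = 2J/(1+J)), which is why segmentation papers typically report both only for comparability with prior work.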
Procedia PDF Downloads 84
607 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study
Authors: Ndibarefinia Tobin
Abstract:
The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase but, more importantly, during the full life of the building. Whole life costing is an economic analysis tool that takes into account the total cost of investment in, ownership and operation of, and subsequent disposal of a product or system to which the whole life costing method is applied. In spite of its importance, the practice is still crippled by the lack of tangible evidence, ‘know-how’ skills and knowledge, i.e., the lack of professionals with the knowledge of and training in the use of the practice in construction projects. This situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms, and so on. The aforementioned problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques, so as to produce economical buildings which offer value for money during the construction stage as well as over the whole life of the building/asset. The management of knowledge in whole life costing is considered one of these project enhancement initiatives and is becoming imperative to the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique is heavily reliant on the knowledge, experience, ideas and skills of workers, which come from many sources, including other individuals, electronic media and documents. Because the knowledge, capabilities and skills of employees vary across an organisation, it is important that they are directed and coordinated efficiently so as to capture, retrieve and share knowledge in order to improve the performance of the organisation. 
The implementation of the knowledge management concept reaches a different level in each organisation. Measuring the maturity level of knowledge management in whole life costing practice will paint a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify knowledge management maturity in UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method, distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results obtained in the study show that 34 contractors are at the practised level, 26 contractors at the managed level and 12 contractors at the continuously improved level.
Keywords: knowledge management, whole life costing, construction industry, knowledge
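Whole life costing, as defined in the abstract above, sums the investment, ownership/operation and disposal costs of an asset, with future costs discounted to present value. A minimal sketch follows; all figures and the flat annual operating cost are hypothetical simplifications.

```python
# Whole life cost as a discounted sum of capital, annual operating and
# end-of-life disposal costs (all figures hypothetical).
def whole_life_cost(capital, annual_cost, disposal, years, rate):
    # Present value of a flat annual operating cost over the asset life.
    operating = sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))
    # Disposal cost discounted from the final year.
    return capital + operating + disposal / (1 + rate) ** years

wlc = whole_life_cost(capital=1_000_000, annual_cost=50_000,
                      disposal=100_000, years=30, rate=0.05)
print(wlc)
```

Even in this toy example, discounted operating costs approach the initial capital cost, which is precisely why the abstract argues that value for money must be judged over the full life of the building rather than the construction phase alone.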
Procedia PDF Downloads 244
606 Doing Durable Organisational Identity Work in the Transforming World of Work: Meeting the Challenge of Different Workplace Strategies
Authors: Theo Heyns Veldsman, Dieter Veldsman
Abstract:
Organisational Identity (OI) refers to who and what the organisation is, what it stands for and does, and what it aspires to become. OI explores the perspectives of how we see ourselves, are seen by others and aspire to be seen. It provides the rationale, the ‘why’, for the organisation’s continued existence. The most widely accepted differentiating features of OI are encapsulated in the organisation’s core, distinctive, differentiating, and enduring attributes. OI finds its concrete expression in the organisation’s Purpose, Vision, Strategy, Core Ideology, and Legacy. In the emerging new order infused by hyper-turbulence and hyper-fluidity, the VICCAS world, OI provides a secure anchor and steady reference point for the organisation, particularly in the growing widespread focus on Purpose, which is indicative of the organisation’s sense of social citizenship. However, the transforming world of work (TWOW), particularly the potent mix of ongoing disruptive innovation, the 4th Industrial Revolution, and the gig economy with the totally unpredicted COVID-19 pandemic, has resulted in the adoption of different workplace strategies by organisations in terms of how, where, and when work takes place. Different employment relations (transient to permanent); work locations (on-site to remote); work time arrangements (full-time at work to flexible work schedules); and technology enablement (face-to-face to virtual) now form the basis of the employer/employee relationship. The different workplace strategies, fueled by the demands of the TWOW, pose a substantive challenge to organisations in doing durable OI work able to fulfill OI’s critical attributes of core, distinctive, differentiating, and enduring. OI work is contained in the ongoing, reciprocally interdependent stages of sense-breaking, sense-giving, internalisation, enactment, and affirmation. 
The objective of our paper is to explore how to do durable OI work relative to different workplace strategies in the TWOW. Using a conceptual-theoretical approach from a practice-based orientation, the paper: distinguishes different workplace strategies based upon a time/place continuum; explicates, stage by stage, the differential organisational content and process consequences of these strategies for durable OI work; indicates the critical success factors of durable OI work under these differential conditions; recommends guidelines for OI work relative to the TWOW; and points out the ethical implications of all of the above.
Keywords: organisational identity, workplace strategies, new world of work, durable organisational identity work
Procedia PDF Downloads 200
605 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. Against this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, covering morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance of and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. 
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While the tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
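The index-based lemma assignment described above can be illustrated with a toy sketch. Everything below (the word list, endings, prefixes, and index entries) is an invented stand-in; the real Norna lemmatiser searches the Dictionary of Old English Corpus index and concordance, which are not reproduced here:

```python
# Toy illustration of dictionary-based lemma assignment (all data hypothetical).
LEMMA_INDEX = {"luf": "lufian", "lufian": "lufian", "cyning": "cyning"}
INFLECTIONAL_ENDINGS = ["odon", "ode", "as", "es", "an", "e"]
PREFIXES = ["ge", "be", "for"]

def assign_lemma(form):
    """Strip known prefixes and inflectional endings, then look up the stem."""
    candidates = [form]
    for pre in PREFIXES:
        if form.startswith(pre):
            candidates.append(form[len(pre):])
    stems = []
    for cand in candidates:
        stems.append(cand)
        for end in INFLECTIONAL_ENDINGS:
            if cand.endswith(end) and len(cand) > len(end) + 1:
                stems.append(cand[:-len(end)])
    for stem in stems:
        if stem in LEMMA_INDEX:
            return LEMMA_INDEX[stem]
    return None  # left for manual lemmatisation

lemma = assign_lemma("gelufode")  # strips "ge-" and "-ode", finds stem "luf"
```

Forms for which no stem is found in the index would correspond to the roughly 20% of textual forms that still require manual lemmatisation.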
Procedia PDF Downloads 212
604 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. So far, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like Scrum on a project management level. However, the question of which development scopes are preferably realized with either empirical-adaptive or deterministic-normative approaches has remained almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company's technology ability, the prototype manufacturability and the potential solution space as well as external factors like the market accuracy, relevance and volatility are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First, each internal and external factor is rated in terms of its importance for the overall development task. Secondly, each requirement is evaluated for every single internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator.
Thus, the Agile-Indicator constitutes a company-specific and application-related criterion, on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a final step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements whose market acceptance is uncertain into empirical-adaptive or deterministic-normative development scopes.
Keywords: agile, highly iterative development, agile-indicator, product development
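As a rough sketch of the clustering step, a minimal fuzzy c-means implementation can soft-partition requirement scores; the data and parameters below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Hypothetical requirement scores: two invented groups of requirements,
# rated on two factor axes (none of this data comes from the paper).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.05, (10, 2)),
               rng.normal(0.8, 0.05, (10, 2))])

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100):
    """Minimal FCM: returns cluster centres and the membership matrix u."""
    u = rng.random((X.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per row
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1))              # inverse-distance weights
        u = w / w.sum(axis=1, keepdims=True)
    return centres, u

centres, u = fuzzy_c_means(X)
labels = u.argmax(axis=1)                      # hard assignment for inspection
```

Unlike hard k-means, the membership matrix u lets each requirement belong partially to both the empirical-adaptive and the deterministic-normative cluster, which matches the gradual nature of the Agile-Indicator.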
Procedia PDF Downloads 246
603 Determining the Effective Substance of Cottonseed Extract on the Treatment of Leishmaniasis
Authors: Mehrosadat Mirmohammadi, Sara Taghdisi, Ali Padash, Mohammad Hossein Pazandeh
Abstract:
Gossypol, a yellowish anti-nutritional compound found in cotton plants, exists in various plant parts, including seeds, husks, leaves, and stems. Chemically, gossypol is a potent polyphenolic aldehyde with antioxidant and therapeutic properties. However, its free form can be toxic, posing risks to both humans and animals. Initially, we extracted gossypol from cotton seeds using n-hexane as a solvent (yield: 84.0 ± 4.0%). We also obtained cotton seed and cotton boll extracts via Soxhlet extraction (25:75 hydroalcoholic ratio). These extracts, combined with cornstarch, formed four herbal medicinal formulations. Ethical approval allowed us to investigate their effects on Leishmania-caused skin wounds, comparing them to glucantime (local ampoule). Herbal formulas outperformed the control group (ethanol only) in wound treatment (p-value < 0.05). The average wound diameter after two months did not significantly differ between plant extract ointments and topical glucantime. Notably, cotton boll extract with 1% extra gossypol crystal showed the best therapeutic effect. We extracted gossypol from cotton seeds using n-hexane via Soxhlet extraction. Saponification, acidification, and recrystallization steps followed. FTIR, UV-Vis, and HPLC analyses confirmed the product's identity. Herbal medicines from cotton seeds effectively treated chronic wounds compared to the ethanol-only control group. Wound diameter differed significantly between extract ointments and glucantime injections. It seems that, due to the presence of large amounts of fat in the oil, the extraction of gossypol from it faces many obstacles. Extraction of this compound with our technique showed that extraction from the oil has a higher efficiency, perhaps because, when the oil is prepared by the cold pressing method, the chance of losing this compound is much lower than when extraction is done with Soxhlet.
On the other hand, the gossypol in the oil is mostly bound to the protein, which protects the gossypol until the last stage of the extraction process. Since this compound is very sensitive to light and heat, it was extracted as a derivative with acetic acid. In the treatment section, it was also found that the ointment prepared with the extract is more effective and that gossypol is one of its effective ingredients. Therefore, gossypol can be extracted from the oil and added to the extract from which gossypol has been removed, to make an effective medicine with a defined dose.
Keywords: cottonseed, glucantime, gossypol, leishmaniasis
Procedia PDF Downloads 61
602 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis
Authors: Coriolano Salvini, Ambra Giovannelli
Abstract:
The use of renewable energy sources for electric power production leads to reduced CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems in fulfilling the load demand over time safely and cost-efficiently. Significant benefits in terms of "grid system applications", "end-use applications" and "renewable applications" can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small-medium size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest is constituted by an intercooled (and, if needed, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large diameter steel pipe sections. A specific methodology for the system preliminary sizing and off-design modeling has been developed. Since during the charging phase the absorbed electric power has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously during the filling of the reservoir, the compressor has to work at a variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid.
The developed tool gives useful information to appropriately size the compression system and to manage it in the most effective way. Various cases characterized by different system requirements are analysed. Results are presented and discussed in detail.
Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management
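The continuously rising pressure ratio during the filling of the reservoir can be sketched with an ideal-gas estimate. Every figure below (reservoir volume, temperature, mass flow, pressures) is an assumed placeholder for illustration, not a value from the study:

```python
# Ideal-gas sketch of charging a pipe-section reservoir (all figures assumed).
V = 50.0        # m^3, reservoir volume
T = 293.15      # K, stored-air temperature after intercooling
R = 287.0       # J/(kg K), specific gas constant of air
p0 = 1.0e5      # Pa, initial (ambient) pressure
p_max = 7.0e6   # Pa, maximum storage pressure
m_dot = 0.5     # kg/s, compressor mass flow, taken constant for simplicity

m0 = p0 * V / (R * T)          # initial air mass in the reservoir
m_max = p_max * V / (R * T)    # air mass at full charge
t_fill = (m_max - m0) / m_dot  # charging time in seconds

def pressure_ratio(m):
    """Compressor discharge pressure ratio as a function of stored mass m (kg)."""
    return (m * R * T / V) / p0
```

Because pressure_ratio(m) grows linearly with the stored mass, the compressor faces a continuously increasing discharge pressure during charging, which is why the choice of capacity control device matters.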
Procedia PDF Downloads 228
601 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This report summarizes a simulation-based approach to estimate the current interruption behavior of a fuse element utilized in a DC network protecting battery banks under different stresses. Due to the internal resistance of the batteries, the short circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials results in melting and evaporation of the filler and housing before the aluminum strip is evaporated and the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation is related to the arc simulation and its interaction with the external circuitry. Because of the strong ablation of the filler material and venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To be able to precisely model this arc, one more step related to the derivation of the transport coefficients of the plasma in ablated urethane was necessary. The results indicate that an arc current interruption, in this case, will not be achieved within the first tens of milliseconds. In a further study, considering two series elements, the arc was interrupted within a few milliseconds.
A very important aspect in this context is the potential impact of many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. As two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will be carried by one single arc again. This process may be repeated several times if the generated voltage is very large. The ultimate result would be that the current interruption may be delayed.
Keywords: DC network, high current / low SC fuses, FEM simulation, parallel fuses
Procedia PDF Downloads 66
600 Teaching Behaviours of Effective Secondary Mathematics Teachers: A Study in Dhaka, Bangladesh
Authors: Asadullah Sheikh, Kerry Barnett, Paul Ayres
Abstract:
Despite significant progress in access, equity and public examination success, poor student performance in mathematics in secondary schools has become a major concern in Bangladesh. A substantial body of research has emphasised the important contribution of teaching practices to student achievement. However, this has not been investigated in Bangladesh. Therefore, the study sought to investigate the effectiveness of mathematics teaching practices as a means of improving secondary school mathematics in the Dhaka Municipality City (DMC) area, Bangladesh. The purpose of this study was twofold: first, to identify the 20 highest performing secondary schools in mathematics in DMC, and second, to investigate the teaching practices of mathematics teachers in these schools. A two-phase mixed method approach was adopted. In the first phase, secondary source data were obtained from the Board of Intermediate and Secondary Education (BISE), Dhaka, and value-added measures were used to identify the 20 highest performing secondary schools in mathematics. In the second phase, a concurrent mixed method design, where qualitative methods were embedded within a dominant quantitative approach, was utilised. A purposive sampling strategy was used to select fifteen teachers from the 20 highest performing secondary schools. The main sources of data were classroom teaching observations and teacher interviews. The data from teacher observations were analysed with descriptive and nonparametric statistics. The interview data were analysed qualitatively. The main findings showed teachers adopt a direct teaching approach which incorporates orientation, structuring, modelling, practice, questioning and teacher-student interaction that creates an individualistic learning environment. The variation in developmental levels of teaching skill indicates that teachers do not necessarily use the qualitative (i.e., focus, stage, quality and differentiation) aspects of teaching behaviours effectively.
This is the first study to investigate the teaching behaviours of effective secondary mathematics teachers within Dhaka, Bangladesh. It adds an international dimension to the field of educational effectiveness and raises questions about existing constructivist approaches. Further, it contributes important insights about teaching behaviours that can be used to inform the development of evidence-based policy and practice on quality teaching in Bangladesh.
Keywords: effective teaching, mathematics, secondary schools, student achievement, value-added measures
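Value-added measures of the kind used in the first phase can be illustrated with a simple residual-based sketch: regress school outcomes on prior attainment and rank schools by the residual. The data below are synthetic, and the single-predictor model is a deliberate simplification of real value-added modelling:

```python
import numpy as np

# Synthetic school-level data (100 hypothetical schools).
rng = np.random.default_rng(1)
prior = rng.normal(50.0, 10.0, 100)      # mean prior attainment per school
effect = rng.normal(0.0, 3.0, 100)       # unobserved "true" school effect
outcome = 10.0 + 0.8 * prior + effect    # mean examination outcome

# Ordinary least squares of outcome on prior attainment (with intercept).
A = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(A, outcome, rcond=None)
value_added = outcome - A @ beta         # residual = value-added estimate
top20 = np.argsort(value_added)[::-1][:20]   # 20 highest value-added schools
```

The residual isolates how much a school's outcomes exceed what its intake would predict, so high-residual schools are "highest performing" after adjusting for prior attainment rather than simply highest scoring.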
Procedia PDF Downloads 238
599 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are the two U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models like the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the models' performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as evaluation metrics. The goal is to select the optimization strategy with the fewest errors, the lowest cost, the greatest productivity, or the maximum potential results. Optimization is widely employed in a variety of industries, including engineering, science, management, mathematics, finance, and medicine.
An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows an important improvement in the models' predictive accuracy. The predictive models with the Huber Regressor as the foundation perform best for both optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
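A minimal sketch of the model-comparison stage, using scikit-learn implementations of the named regressors and the three error metrics on synthetic data (the real study uses NOAA/CDC environmental features, which are not reproduced here):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for weekly environmental features vs. dengue case counts.
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Huber": HuberRegressor(max_iter=1000),
    "SVR": SVR(),
    "GBR": GradientBoostingRegressor(random_state=0),
}
scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    scores[name] = {"MSE": mse,
                    "MAE": mean_absolute_error(y_te, pred),
                    "RMSE": float(np.sqrt(mse))}
```

In the full pipeline, a feature-selection wrapper (e.g. a genetic algorithm choosing a feature subset) would sit around the fit/score loop, and the subset minimizing the held-out error would be retained.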
Procedia PDF Downloads 65
598 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes
Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi
Abstract:
Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out in order to investigate the lateral response of this typology of devices, and a reasonably high level of knowledge has been reached. If radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software is nowadays available for the adjustment of natural accelerograms, which leads to a higher quality of spectrum-compatibility and to a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve has been computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements have been adopted and different direction angles of lateral forces have been studied. Thanks to these results, a linear elastic Finite Element Model has been defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses have been performed on the base-isolated structures, by applying seven bidirectional seismic events.
The spectrum-compatibility of bidirectional earthquakes has been studied by considering different combinations of single components and adjusting single records: thanks to the proposed procedure, results have shown a small dispersion and good agreement with the assumed design values.
Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation
Procedia PDF Downloads 292
597 Performance of HVOF Sprayed Ni-20CR and Cr3C2-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal Fired Boiler
Authors: Tejinder Singh Sidhu
Abstract:
Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes the material at an unpredictably rapid rate. Consequently, the load-carrying ability of the components reduces quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing a suitable industrial alloy, and depositing protective coatings. However, the protection system to be selected for a particular application must be practical, reliable, and economically viable. Due to the continuously rising cost of materials as well as increased material requirements, coating techniques have been given much more importance in recent times. Coatings can add value to products of up to 10 times the cost of the coating. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components to allow them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performances of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process have been studied. The coatings were developed on a Fe-based superalloy, and experiments were performed in an actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C. The study was conducted for 10 cycles, each cycle consisting of 100 hours of heating followed by 1 hour of cooling at ambient temperature.
Both coatings deposited on the Fe-based superalloy imparted better hot corrosion resistance than the uncoated alloy. The Ni-20Cr coated superalloy performed better than the Cr₃C₂-NiCr coated one in the actual working conditions of the coal-fired boiler. It is found that the formation of chromium oxide at the boundaries of the Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.
Keywords: hot corrosion, coating, HVOF, oxidation
Procedia PDF Downloads 83
596 Development of Oral Biphasic Drug Delivery System Using a Natural Resourced Polymer, Terminalia catappa
Authors: Venkata Srikanth Meka, Nur Arthirah Binti Ahmad Tarmizi Tan, Muhammad Syahmi Bin Md Nazir, Adinarayana Gorajana, Senthil Rajan Dharmalingam
Abstract:
Biphasic drug delivery systems are designed to release a drug at two different rates, either fast/prolonged or prolonged/fast. A fast/prolonged release system provides a burst drug release at the initial stage followed by a slow release over a prolonged period of time; in a prolonged/fast release system, the release pattern is reversed. Terminalia catappa gum (TCG) is a natural polymer that has been successfully proven as a novel pharmaceutical excipient. The main objective of the present research is to investigate the applicability of the natural polymer Terminalia catappa gum in the design of an oral biphasic drug delivery system in the form of mini tablets, using buspirone HCl as a model drug. This investigation aims to produce a biphasic release drug delivery system of buspirone by combining immediate release and prolonged release mini tablets into a capsule. For the immediate release mini tablets, a dose of 4.5 mg buspirone was prepared by varying the concentration of the superdisintegrant, crospovidone. On the other hand, prolonged release mini tablets were produced using different concentrations of the natural polymer TCG with a buspirone dose of 3 mg. All mini tablets were characterized by weight variation, hardness, friability, disintegration, content uniformity and dissolution studies. The optimized formulations of immediate and prolonged release mini tablets were finally combined in a capsule, which was evaluated for release studies. FTIR and DSC studies were conducted to study drug-polymer interaction. All formulations of immediate release and prolonged release mini tablets passed all in-process quality control tests according to the US Pharmacopoeia. The disintegration time of the immediate release mini tablets varied from 2 to 6 min across formulations, and maximum drug release was achieved in less than 60 min, whereas the prolonged release mini tablets made with TCG showed good drug retarding properties.
Release was prolonged for about 4 to 10 hours with varying concentrations of TCG. As the concentration of TCG increased, the drug release retarding property also increased. The optimised mini tablets were packed in capsules and evaluated for the release mechanism. The capsule dosage form clearly exhibited the biphasic release of buspirone, indicating that TCG is a suitable natural polymer for this study. FTIR and DSC studies proved that there was no interaction between the drug and the polymer. Based on the above positive results, it can be concluded that TCG is a suitable polymer for biphasic drug delivery systems.
Keywords: Terminalia catappa gum, biphasic release, mini tablets, tablet in capsule, natural polymers
Procedia PDF Downloads 393
595 Characterization of Volatiles Botrytis cinerea in Blueberry Using Solid Phase Micro Extraction, Gas Chromatography Mass Spectrometry
Authors: Ahmed Auda, Manjree Agarwal, Giles Hardy, Yonglin Ren
Abstract:
Botrytis cinerea is a major pathogen of many plants. It can attack a wide range of plant parts, including buds, flowers, leaves, stems, and fruit. However, B. cinerea can be confused with other diseases that cause the same damage. There are many species of Botrytis, each with more than one strain. Botrytis might infect the foliage of nursery stock stored through winter in damp conditions. There are no known resistant plants. Botrytis must have nutrients or a food source before it infests the plant. Nutrients leaking from wounded plant parts or dying tissue, like old flower petals, provide the required nutrients. From this food, the fungus becomes more aggressive and invades healthy tissue. Dark to light brown rot forms in the diseased tissue. High humidity conditions support the growth of this fungus. However, we suppose that selection pressure can act on the morphological and neurophysiological filter properties of the receiver and on both the biochemical and the physiological regulation of the signal. Communication is implied when signal and receiver evolve toward more and more specific matching. On the other hand, receivers respond to portions of a body odor bouquet which is released to the environment not as an (intentional) signal but as an unavoidable consequence of metabolic activity or tissue damage. Each year Botrytis species can cause considerable economic losses to plant crops. Even with the application of strict quarantine and control measures, these fungi can still find their way into crops and cause the imposition of onerous restrictions on exports. Blueberry fruit mould caused by a fungal infection usually results in major losses during post-harvest storage. Therefore, the management of infection in the early stages of disease development is necessary to minimize losses. The overall purpose of this study is to develop sensitive, cheap, quick and robust diagnostic techniques for the detection of B. cinerea in blueberry.
The specific aim was to investigate the performance of volatile organic compounds (VOCs) in the detection and discrimination of blueberry fruits infected by fungal pathogens, with an emphasis on Botrytis in the early post-harvest storage stage.
Keywords: Botrytis cinerea, blueberry, GC/MS, VOCs
Procedia PDF Downloads 241
594 Risk Assessment on New Bio-Composite Materials Made from Water Resource Recovery
Authors: Arianna Nativio, Zoran Kapelan, Jan Peter van der Hoek
Abstract:
Bio-composite materials are becoming increasingly popular in various applications, such as the automotive industry. Usually, bio-composite materials are made from natural resources recovered from plants; now, a new type of bio-composite material has begun to be produced in the Netherlands. This material is made from resources recovered from drinking water treatment (calcite), wastewater treatment (cellulose), and surface water management (aquatic plants). Surface water, raw drinking water, and wastewater can be contaminated with pathogens and chemical compounds. Therefore, it would be valuable to develop a framework to assess, monitor, and control the potential risks. Indeed, the goal is to define the major risks in terms of human health, quality of materials, and environment associated with the production and application of these new materials. This study describes the general risk assessment framework, starting with a qualitative risk assessment. The qualitative risk analysis was carried out using the HAZOP methodology for the hazard identification phase. The HAZOP methodology is logical and structured and able to identify hazards in the first stage of design, when hazards and associated risks are not well known. The identified hazards were analyzed to define the potential associated risks, and these were then evaluated using qualitative Event Tree Analysis. ETA is a logical methodology used to define the consequences of a specific hazardous incident, evaluating the failure modes of safety barriers and the dangerous intermediate events that lead to the final scenario (risk). This paper shows the effectiveness of combining the HAZOP and qualitative ETA methodologies for hazard identification and risk mapping. Key risks were then identified, and a quantitative framework was developed based on the types of risks identified, such as QMRA and QCRA.
These two models were applied to assess human health risks due to the presence of pathogens and chemical compounds, such as heavy metals, in the bio-composite materials. Due to these contaminations, the bio-composite product might, during its application, release toxic substances into the environment, leading to a negative environmental impact. Therefore, leaching tests are planned to simulate the application of these materials in the environment and evaluate the potential leaching of inorganic substances, assessing the environmental risk.
Keywords: bio-composite, risk assessment, water reuse, resource recovery
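The quantitative side of an event tree reduces to multiplying branch probabilities along each path from the initiating event to a final scenario. The initiating-event frequency and barrier probabilities below are invented placeholders, not values from the study:

```python
# Two hypothetical safety barriers downstream of an initiating event.
P_INITIATING = 1e-3            # contamination event frequency (assumed)
P_BARRIER_1 = 0.9              # probability barrier 1 works (assumed)
P_BARRIER_2 = 0.8              # probability barrier 2 works (assumed)

# Enumerate every branch combination and multiply probabilities along the path.
scenarios = {}
for b1_ok in (True, False):
    for b2_ok in (True, False):
        p = P_INITIATING
        p *= P_BARRIER_1 if b1_ok else (1 - P_BARRIER_1)
        p *= P_BARRIER_2 if b2_ok else (1 - P_BARRIER_2)
        scenarios[(b1_ok, b2_ok)] = p

worst_case = scenarios[(False, False)]   # both barriers fail
```

The scenario probabilities sum back to the initiating-event frequency, which is a useful sanity check when building larger trees for the QMRA/QCRA stage.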
Procedia PDF Downloads 109
593 Reconceptualizing “Best Practices” in Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is still remains a black box that is yet to be investigated, given the trend of continuous changes in public sector performance, as well as the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, like benchmarkers, during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practices when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. Firstly, we observed the benchmarkers' management process of best practices in a public organization, so as to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with "best practice" process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work.
Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol that is theoretically informed in its first part, to identify causal mechanisms suggested by previous research studies, and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization's best practice management.
Keywords: benchmarking, action research, critical realism, best practices, public sector
Treatment of Papillary Thyroid Carcinoma Metastasis to the Sternum: A Case Report
Authors: Geliashvili T. M., Tyulyandina A. S., Valiev A. K., Kononets P. V., Kharatishvili T. K., Salkov A. G., Pronin A. I., Gadzhieva E. H., Parnas A. V., Ilyakov V. S.
Abstract:
Aim/Introduction: Metastasis (Mts) to the sternum, while extremely rare in differentiated thyroid cancer (DTC) (1), requires a personalized, multidisciplinary treatment approach. In aggressively growing Mts to the sternum, which rapidly become unresectable, a comprehensive therapeutic and diagnostic approach is particularly important. Materials and methods: We present a clinical case of solitary Mts to the sternum as the first manifestation of a papillary thyroid microcarcinoma in a 55-year-old man. Results: 18F-FDG PET/CT after thyroidectomy confirmed the solitary Mts to the sternum with extremely high FDG uptake (SUVmax=71.1), which predicted its radioiodine-refractory (RIR) status. Due to close attachment to the mediastinum and rapid growth, the Mts was considered unresectable. During the next three months, the patient received targeted therapy with the tyrosine kinase inhibitor (TKI) lenvatinib, 24 mg per day. A first course of radioiodine therapy (RIT) at 6 GBq was also performed, the results of which confirmed the RIR nature of the tumor process. As a result of systemic therapy (targeted therapy combined with RIT and suppressive hormone therapy with L-thyroxine), there was a significant biochemical response (a decrease of the serum thyroglobulin level from 50,000 ng/ml to 550 ng/ml) and a partial response, with a decrease in tumor size (from 80x69x123 mm to 65x50x112 mm) and a decrease in FDG accumulation (SUVmax from 71.1 to 63). All of this made it possible to perform surgical treatment of the Mts: sternal extirpation with replacement by an individual titanium implant. At the control examination, the stimulated thyroglobulin level was only 134 ng/ml, and PET/CT revealed postoperative areas of 18F-FDG metabolism at the site of the removed sternal Mts. In addition, 18F-FDG PET/CT at the early (metabolic) stage revealed two new bone Mts (in the area of L3, SUVmax=17.32, and in the right iliac bone, SUVmax=13.73), which, like the removed sternal Mts, proved to be RIR at the second course of RIT at 6 GBq.
Subsequently, in February 2022, external beam radiation therapy (EBRT) was performed on the newly identified oligometastatic bone foci. At present, the patient is under dynamic monitoring and continues suppressive hormone therapy with L-thyroxine. Conclusion: Only due to the early prescription of targeted TKI therapy was it possible to perform surgical resection of the Mts to the sternum, thereby improving the patient's quality of life and preserving the possibility of radical treatment in case of oligometastatic disease progression.
Keywords: differentiated thyroid cancer, metastasis to the sternum, radioiodine therapy, radioiodine-refractory cancer, targeted therapy, lenvatinib
Review of the Safety of Discharge on the First Postoperative Day Following Carotid Surgery: A Retrospective Analysis
Authors: John Yahng, Hansraj Riteesh Bookun
Abstract:
Objective: This was a retrospective cross-sectional study evaluating the safety of discharge on the first postoperative day following carotid surgery, principally carotid endarterectomy. Methods: Between January 2010 and October 2017, 252 patients with a mean age of 72 years underwent carotid surgery by seven surgeons. Their medical records were consulted, and their operative and complication timelines were entered into a database. Descriptive statistics were used to analyse pooled responses and our indicator variables. The statistical package used was STATA 13. Results: There were 183 males (73%), and the comorbid burden was as follows: ischaemic heart disease (54%), diabetes (38%), hypertension (92%), stage 4 kidney impairment (5%) and current or ex-smoking (77%). The main indications were transient ischaemic attacks (42%), stroke (31%), asymptomatic carotid disease (16%) and amaurosis fugax (8%). 247 carotid endarterectomies (109 with patch arterioplasty, 88 with eversion and transection technique, 50 with endarterectomy only) were performed. 2 carotid bypasses, 1 embolectomy, 1 thrombectomy with patch arterioplasty and 1 excision of a carotid body tumour were also performed. 92% of the cases were performed under general anaesthesia. A shunt was used in 29% of cases. The mean length of stay was 5.1 ± 3.7 days, with a range of 2 to 22 days. No patient was discharged on day 1. The mean time from admission to surgery was 1.4 ± 2.8 days, with a range of 0 to 19 days. The mean time from surgery to discharge was 2.7 ± 2.0 days, with a range of 0 to 14 days. 36 complications were encountered over this period: 12 failed repairs (5 major strokes, 2 minor strokes, 3 transient ischaemic attacks, 1 cerebral bleed, 1 occluded graft), 11 bleeding episodes requiring a return to the operating theatre, 5 adverse cardiac events, 3 cranial nerve injuries, 2 respiratory complications, 2 wound complications and 1 acute kidney injury. There were no deaths.
17 complications occurred on postoperative day 0, 11 on postoperative day 1, 6 on postoperative day 2 and 2 on postoperative day 3. 78% of all complications happened before the second postoperative day. Of the complications which occurred on the second or third postoperative day, 4 (1.6%) were bleeding episodes, 1 (0.4%) was a failed repair, 1 (0.4%) was a respiratory complication and 1 (0.4%) was a wound complication. Conclusion: Although it has been common practice to discharge patients on the second postoperative day following carotid endarterectomy, we find here that discharge on the first postoperative day is safe. The overall complication rate is low, and most complications are captured before the second postoperative day. We suggest that patients having an uneventful first 24 hours post-surgery be discharged on the first day. This should reduce hospital length of stay and the health economic burden.
Keywords: carotid, complication, discharge, surgery
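The complication timing reported in this abstract can be tallied with a short sketch (counts taken directly from the text above; the variable names are ours):

```python
# Complications by postoperative day, as reported in the abstract
by_day = {0: 17, 1: 11, 2: 6, 3: 2}

total = sum(by_day.values())               # 36 complications in all
before_day_2 = by_day[0] + by_day[1]       # 28 occurred on days 0-1
share = round(100 * before_day_2 / total)  # rounds to 78%

print(total, before_day_2, share)
```

This reproduces the abstract's figure that 78% of all complications happened before the second postoperative day.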
Policy Implications of Cashless Banking on Nigeria's Economy
Authors: Oluwabiyi Adeola Ayodele
Abstract:
This study analysed the policy and general issues that have arisen over time in Nigeria's cashless banking environment as a result of the lack of a legal framework for electronic banking in Nigeria. It undertook an in-depth study of the cashless banking system: it discussed the evolution, growth and development of cashless banking in Nigeria; revealed the expected benefits of the cashless banking system; appraised regulatory issues and other prevalent problems of cashless banking in Nigeria; and made appropriate recommendations where necessary. The study relied on primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria, statutes, conventions and judicial decisions, while the secondary sources included books, journal articles, newspapers and Internet materials. The study revealed that cashless banking has been adopted in Nigeria but is still at a developing stage. It revealed that there is no law for the regulation of cashless banking in Nigeria; what Nigeria relies on for regulation is the Central Bank of Nigeria's Cashless Policy, 2014. The Banks and Other Financial Institutions Act, Chapter B3, LFN, 2004 of Nigeria lacks provisions to accommodate issues of Internet banking. However, under the general principles of legality in criminal law, and by the provisions of the Nigerian Constitution, a person can only be punished for conduct that has been defined as criminal by written law, with the penalties specifically stated in the law. Although Nigeria has potent laws for the regulation of paper banking, these laws cannot simply be extended to paperless transactions, because the issues involved in the two kinds of transactions differ. The study also revealed that the absence of law in the cashless banking environment in Nigeria will subject consumers to endless risks.
This study revealed that the creation of banking markets via the Internet relies on both available technologies and appropriate laws and regulations. It revealed, however, that the laws of some of the other countries considered have taken care of most of the legal issues and other problems prevalent in the cashless banking environment. The study also revealed some other problems prevalent in the Nigerian cashless banking environment. The study concluded that for Nigeria to find solutions to the legal issues raised in its cashless banking environment, and to the other problems of cashless banking, it should have a viable legal framework for Internet banking. It also concluded that the Central Bank of Nigeria's policy on cashless banking is not potent enough to tackle the challenges posed to cashless banking in Nigeria, because policies have only a persuasive effect and not a binding effect. There is, therefore, a need for appropriate laws for the regulation of cashless banking in Nigeria. Finally, the study concluded that there is a need to create more awareness of the system among Nigerians and to solve infrastructural problems such as the prevalent power outages, which often create internet network problems.
Keywords: cashless-banking, Nigeria, policies, laws
Study on Post-Traumatic Stress Disorder and Its Psycho-Social-Genetic Risk Factors among Tibetan Adolescents in a Heavily-Hit Area Three Years after the Yushu Earthquake in Qinghai Province, China
Authors: Xiaolian Jiang, Dongling Liu, Kun Liu
Abstract:
Aims: To examine the prevalence of post-traumatic stress disorder (PTSD) symptoms among Tibetan adolescents in a heavily-hit disaster area three years after the Yushu earthquake, and to explore the interactions of psycho-social-genetic risk factors. Methods: This was a three-stage study. Firstly, demographic variables, the PTSD Checklist-Civilian Version (PCL-C), the Internality, Powerful Others, and Chance Scale (IPC), the Coping Style Scale (CSS), and the Social Support Appraisal (SSA) were used to explore the psychosocial factors of PTSD symptoms among adolescent survivors. The PCL-C was used to screen for PTSD symptoms among 4,072 Tibetan adolescents, and the Structured Clinical Interview for DSM-IV Disorders (SCID) was used by psychiatrists to make the diagnosis precisely. Secondly, a case-control design was used to explore the relationship between PTSD and gene polymorphisms: 287 adolescents diagnosed with PTSD were recruited into the study group, and 280 adolescents without PTSD into the control group. Polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) technology was used to test gene polymorphisms. Thirdly, SPSS 22.0 was used to explore the interactions of the psycho-social-genetic risk factors of PTSD on the basis of the above results. Results and conclusions: 1. The prevalence of PTSD was 9.70%. 2. The predictive psychosocial factors of PTSD included earthquake exposure, support from others, imagination, abreaction, tolerance, powerful others, and family support. 3. Synergistic interactions were found between the A1 allele of DRD2 TaqIA and the external locus of control, negative coping style, and severe earthquake exposure. An antagonistic interaction was found between the A1 allele of DRD2 TaqIA and poor social support. Synergistic interactions were found between the A1/A1 genotype and the external locus of control and negative coping style. Synergistic interactions were found between the 12 allele of 5-HTTVNTR and the external locus of control, negative coping style, and severe earthquake exposure.
Synergistic interactions between the 12/12 genotype and the external locus of control, negative coping style, and severe earthquake exposure were also found.
Keywords: adolescents, earthquake, PTSD, risk factors
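The basic measure behind a case-control association test of the kind described above is the odds ratio of a 2x2 table. As a minimal sketch, with entirely hypothetical counts (NOT the study's data), it can be computed as:

```python
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table:
    (a*d) / (b*c), where a, b index cases and c, d index controls."""
    return (exposed_cases * unexposed_controls) / \
           (unexposed_cases * exposed_controls)

# Hypothetical risk-allele carrier counts among the study's
# 287 cases and 280 controls (illustration only):
print(round(odds_ratio(120, 167, 90, 190), 2))
```

An odds ratio above 1 would indicate that carrying the allele is associated with higher odds of PTSD in this sample; the study's actual interaction analyses were performed in SPSS 22.0.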
Switching of Series-Parallel Connected Modules in an Array for Partially Shaded Conditions in a Pollution Intensive Area Using High Powered MOSFETs
Authors: Osamede Asowata, Christo Pienaar, Johan Bekker
Abstract:
Photovoltaic (PV) modules may become a trend for future PV systems because of their greater flexibility in distributed system expansion, easier installation, and higher system-level energy harvesting under shaded or PV manufacturing mismatch conditions, as compared to single- or multi-string inverters. Residential-scale PV arrays are commonly connected to the grid either by a single DC-AC inverter fed by a series, parallel or series-parallel string of PV panels, or by many small DC-AC inverters which connect one or two panels directly to the AC grid. With increasing worldwide interest in sustainable energy production and use, there is renewed focus on the power electronic converter interface for DC energy sources. Three specific examples of such DC energy sources that will have a role in distributed generation and sustainable energy systems are the photovoltaic (PV) panel, the fuel cell stack, and batteries of various chemistries. A high-efficiency inverter using metal oxide semiconductor field-effect transistors (MOSFETs) for all active switches is presented for non-isolated photovoltaic and AC-module applications. The proposed configuration features high efficiency over a wide load range, low ground leakage current and low output AC-current distortion, with no need for split capacitors. The detailed power stage operating principles, pulse width modulation scheme, multilevel bootstrap power supply, and integrated gate drivers for the proposed inverter are described. Experimental results from a hardware prototype show not only that the MOSFETs are efficient in the system, but also that the ground leakage current issues are alleviated in the proposed inverter and that a maximum driver circuit efficiency of 98% is achieved. This, in turn, motivates a possible photovoltaic panel switching technique.
This will help to reduce the effect of cloud movements as well as improve the overall efficiency of the system.
Keywords: grid connected photovoltaic (PV), Matlab efficiency simulation, maximum power point tracking (MPPT), module integrated converters (MICs), multilevel converter, series connected converter
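The abstract mentions a pulse width modulation scheme without detailing it. A generic sinusoidal PWM reference, common in grid-tied inverters, can be sketched as follows; this is an illustrative assumption, not the authors' actual scheme, and the 50 Hz grid frequency and 0.9 modulation index are example values:

```python
import math

def spwm_duty(t, f_grid=50.0, m=0.9):
    """Duty cycle of a sinusoidal PWM reference at time t (seconds),
    for grid frequency f_grid (Hz) and modulation index m (0 < m <= 1).
    The switch duty swings around 50% in proportion to the sine wave."""
    return 0.5 * (1.0 + m * math.sin(2.0 * math.pi * f_grid * t))

# At the zero crossing the duty cycle is 50%; at the sine peak
# (a quarter period later) it is 0.5 * (1 + m) = 95% for m = 0.9.
print(spwm_duty(0.0), spwm_duty(1.0 / (4.0 * 50.0)))
```

Comparing this reference against a high-frequency triangular carrier yields the gate signals for the MOSFET bridge.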
Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in a South Indian Population
Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty
Abstract:
The association between human physique and the morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades. Multiple studies have also examined the correlation between the grade of atherosclerosis, coronary artery disease and anthropometric measurements; however, far fewer of these are autopsy-based. It has been suggested that while, in living subjects, it would be expensive, difficult, and even harmful to use imaging modalities such as CT scans and procedures involving contrast media to study mild atherosclerosis, no such harm is encountered in the study of autopsy cases. This autopsy-based study aimed to correlate anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI) and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA) and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both genders, aged between 18 and 75 years. The grading of atherosclerosis was done according to the criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronary arteries affected by atherosclerosis in an individual. All the anthropometric measurements and the derived indices were found to be significantly correlated with each other in both genders, except for age, which was found to have a significant correlation only with the WHR. In both genders, a severe degree of atherosclerosis was most commonly observed in the LADA, followed by the LCA and RCA. The grade of atherosclerosis in the RCA is significantly related to the WHR in males, and the grade of atherosclerosis in the LCA and LADA is significantly related to the WHR in females.
Significant relations were observed between the grade of atherosclerosis in the RCA and WC and WHR, and between the grade of atherosclerosis in the LADA and HC, in both males and females. Anthropometric measurements and indices of obesity can thus be an effective means of identifying high-risk cases of atherosclerosis at an early stage, which can help reduce the associated cardiac morbidity and mortality. A person with anthropometric measurements suggestive of mild atherosclerosis can be advised to modify their lifestyle and decrease their exposure to the other risk factors. Those with measurements suggestive of a higher degree of atherosclerosis can be subjected to confirmatory procedures so that effective treatment can be started.
Keywords: atherosclerosis, coronary artery disease, indices, obesity
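The two derived indices used in this study follow standard definitions: BMI is weight in kilograms divided by the square of height in metres, and WHR is waist circumference divided by hip circumference. A minimal sketch (the example values are hypothetical, not taken from the study's subjects):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def waist_hip_ratio(waist_cm, hip_cm):
    """Waist circumference divided by hip circumference (same units)."""
    return waist_cm / hip_cm

# Hypothetical subject: 70 kg, 1.75 m, waist 85 cm, hip 100 cm
print(round(bmi(70.0, 1.75), 1))            # 22.9 kg/m^2
print(round(waist_hip_ratio(85.0, 100.0), 2))  # 0.85
```

Because WHR is a unitless ratio, it can be computed from autopsy measurements without the height and weight corrections that BMI requires.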