Search results for: dynamic capability approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17860

970 Family Carers' Experiences in Striving for Medical Care and Finding Their Solutions for Family Members with Mental Illnesses

Authors: Yu-Yu Wang, Shih-Hua Hsieh, Ru-Shian Hsieh

Abstract:

Wishes and choices being respected, and the right to be supported rather than coerced, have been internationally recognized as the human rights of persons with mental illness. In Taiwan, 'coerced hospitalization' has become difficult since the revision of the mental health legislation in 2007. Despite the trend towards human rights, the real problem families face when their family members are in mental health crisis is the lack of alternative services. This study aims to explore: 1) When is hospitalization seen as the only solution by family members? 2) What are the barriers to arranging hospitalization, and how are they managed? 3) What have family carers learned in their experiences of caring for their family members with mental illness? To answer these questions, a qualitative approach was adopted, and focus group interviews were conducted to collect data. This study includes 24 family carers. The main findings of this research are as follows. First, the hospital is the last resort for carers in helplessness. Family carers tend to do everything they can to provide care at home for their family members with mental illness. Carers seek hospitalization only when a patient's behavior is too violent, strange, and/or abnormal, and beyond their ability to manage. Hospitalization, nevertheless, is never an easy choice. Obstacles emanate from the attitudes of the medical doctors, the restricted areas of ambulance service, and insufficient information on the carers' part. On the other hand, with some professionals' proactive assistance, access to medical care while in crisis becomes possible. Some family carers obtained help from medical doctors, nurses, therapists, and social workers. Some experienced good help from policemen, taxi drivers, and security guards at the hospital. The difficulty in accessing medical care prompts carers to work harder on assisting their family members with mental illness to stay in stable states.
Carers found different ways of helping the 'person' get along with the 'illness' and have a better quality of life. Taking back 'the right to control' in utilizing medication, moving from passiveness to negotiating with medical doctors and seeking alternative therapies, is seen in many carers' efforts. Besides, trying to maintain regular activities in daily life and to play normal family roles is also experienced as important. Furthermore, talking with the patient as a person is also important. The authors conclude that in order to protect the human rights of persons with mental illness, it is crucial to make the medical care system more flexible and the services more humane: sufficient information should be provided and communicated, and efforts should be made to maintain the person's social roles and to support the family.

Keywords: family carers, independent living, mental health crisis, persons with mental illness

Procedia PDF Downloads 306
969 Spatial Conceptualization in French and Italian Speakers: A Contrastive Approach in the Context of the Linguistic Relativity Theory

Authors: Camilla Simoncelli

Abstract:

The connection between language and cognition has been one of the main interests of linguistics for many years. According to the Sapir-Whorf linguistic relativity theory, the way we perceive reality depends on the language we speak, which in turn has a central role in human cognition. This paper is in line with this research work, with the aim of analyzing how language structures reflect on our cognitive abilities even in the description of space, which is generally considered a natural and universal human domain. The main objective is to identify the differences in the encoding of spatial inclusion relationships between French and Italian speakers, to show that significant variation exists at various levels even between two similar systems. Starting from the constitution of a corpus, the first step of the study was to establish the relevant complex prepositions marking an inclusion relation in French and Italian: au centre de, au cœur de, au milieu de, au sein de, à l'intérieur de and the opposition entre/parmi in French; al centro di, al cuore di, nel mezzo di, in seno a, all'interno di and the fra/tra contrast in Italian. These prepositions were classified on the basis of the type of noun following them (e.g., mass nouns, concrete nouns, abstract nouns, body-part nouns, etc.), following the collostructional analysis of lexemes, with the purpose of analyzing the preferred construction of each preposition and comparing the relations construed. Comparing the Italian and the French results, it was possible to define the degree of representativeness of each target noun for the chosen preposition. Lexicostatistics and statistical association measures yielded the values of attraction or repulsion between lexemes and a given preposition, highlighting which words are over-represented or under-represented in a specific context compared to the expected results.
For instance, a noun such as Dibattiti has a negative value for the Italian al cuore di (-1.91), but a strongly positive representativeness for the corresponding French au cœur de (+677.76). The value, positive or negative, is the result of a hypergeometric distribution law which reflects the current use of some relevant nouns in relations of spatial inclusion by French and Italian speakers. Differences in the kind of location conceptualization denote syntactic and semantic constraints based on spatial features as well as on linguistic peculiarities. The aim of this paper is to demonstrate that the domain of spatial relations, while basic to human experience and linked to universally shared perceptual mechanisms, creates mental representations that depend on language use. Therefore, linguistic coding strongly correlates with the way spatial distinctions are conceptualized for non-verbal tasks, even in close language systems like Italian and French.
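The attraction/repulsion values described above come from a hypergeometric association measure, as used in collostructional analysis. A minimal sketch of how such a score can be computed from raw co-occurrence counts (the function name and the -log10 score convention are illustrative assumptions, not the authors' exact procedure):

```python
from math import comb, log10

def attraction(word_in_cxn, cxn_total, word_total, corpus_total):
    """-log10 of the hypergeometric tail probability P(X >= observed):
    the chance of seeing at least this many co-occurrences of the word
    with the construction (here, a preposition) if words were
    distributed at random across the corpus.  Larger values mean
    stronger over-representation (attraction); 0 means no evidence."""
    # Sum the hypergeometric pmf from the observed count upward.
    p = sum(
        comb(word_total, k) * comb(corpus_total - word_total, cxn_total - k)
        for k in range(word_in_cxn, min(word_total, cxn_total) + 1)
    ) / comb(corpus_total, cxn_total)
    return -log10(p)
```

For example, a noun occurring 10 times in a 1000-token corpus but 8 times inside a 100-token construction sample is far above its expected frequency of 1, and scores much higher than a noun observed only once there.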

Keywords: cognitive semantics, cross-linguistic variations, locational terms, non-verbal spatial representations

Procedia PDF Downloads 113
968 Density Functional Theory Study of the Surface Interactions between Sodium Carbonate Aerosols and Fission Products

Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez

Abstract:

The interaction of fission products (FP) with sodium carbonate (Na₂CO₃) aerosols is of high safety concern because of their potential role in radiological source term mitigation by FP trapping. In a sodium-cooled fast nuclear reactor (SFR) experiencing a severe accident, sodium (Na) aerosols can be formed after the ejection of the liquid Na coolant inside the containment. The surface interactions between these aerosols and different FP species have been investigated using ab-initio, density functional theory (DFT) calculations with the Vienna ab-initio simulation package (VASP). In addition, an improved thermodynamic model has been proposed to treat the DFT-VASP calculated energies and extrapolate them to the temperatures and pressures of interest in our study. A combined experimental and theoretical chemistry study has been carried out to gain both an atomistic and a macroscopic understanding of the chemical processes; the theoretical chemistry part of this approach is presented in this paper. The Perdew, Burke, and Ernzerhof functional was applied in combination with Grimme's van der Waals correction to compute the exchange-correlation energy at 0 K. Seven different surface cleavages of the γ-Na₂CO₃ phase (stable at 603.15 K) were studied; it was found that for defect-free surfaces, the (001) facet is the most stable. Furthermore, calculations were performed to study surface defects and reconstructions on the ideal surface. All the studied surface defects were found to be less stable than the ideal surface. More than one adsorbate-ligand configuration was found to be stable, confirming that FP vapors could be trapped on various adsorption sites. The calculated adsorption energies (Eads, eV) for the three most stable adsorption sites for I₂ are -1.33, -1.088, and -1.085. Moreover, the adsorption of the first molecule of I₂ changes the surface in a way that favors stronger adsorption of a second molecule of I₂ (Eads, eV = -1.261).
For HI adsorption, the most favored reactions have the following Eads (eV): -1.982, -1.790, and -1.683, implying that HI would be more reactive than I₂. In addition to FP species, the adsorption of H₂O was also studied, as the hydrated surface can have a different reactivity than the bare surface. One thermodynamically favored site for H₂O adsorption was found, with an Eads of -0.754 eV. Finally, the calculations on hydrated surfaces of Na₂CO₃ show that a layer of water adsorbed on the surface significantly reduces its affinity for iodine (Eads, eV = -1.066). According to the thermodynamic model built, the partial pressure required at 373 K to adsorb the first layer of iodine is 4.57×10⁻⁴ bar. The second layer will be adsorbed at partial pressures higher than 8.56×10⁻⁶ bar; a layer of water on the surface will increase these pressures almost tenfold, to 3.71×10⁻³ bar. The surface interacts with elemental Cs with an Eads (eV) of -1.60, while it interacts even more strongly with CsI, with an Eads (eV) of -2.39. More results on the interactions between Na₂CO₃ (001) and cesium-based FP will also be presented in this paper.
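The thermodynamic model itself is not spelled out in the abstract, but mapping a 0 K adsorption energy to a threshold partial pressure typically goes through an ideal-gas equilibrium condition, where adsorption becomes favorable once the gas-phase chemical potential reaches that of the adsorbed state. A hypothetical sketch only; the entropy term, sign conventions, and function name are my assumptions, and the paper's actual model likely differs in detail:

```python
from math import exp

KB_EV = 8.617333262e-5   # Boltzmann constant, eV/K

def threshold_pressure(e_ads_ev, delta_s_ev_per_k, temperature_k, p_ref=1.0):
    """Hypothetical sketch: partial pressure (in units of p_ref, e.g. bar)
    at which adsorption becomes thermodynamically favorable, from the
    equilibrium condition mu_gas(T, p) = mu_ads(T).  Assumes
    delta_G = e_ads + T * delta_s, with e_ads < 0 for binding and
    delta_s > 0 the entropy the gas molecule loses on adsorption, so
    p_threshold = p_ref * exp(delta_G / (kB * T))."""
    delta_g = e_ads_ev + temperature_k * delta_s_ev_per_k
    return p_ref * exp(delta_g / (KB_EV * temperature_k))
```

Under this form, a stronger (more negative) adsorption energy lowers the threshold pressure, and raising the temperature raises it, which is the qualitative behavior the abstract's numbers reflect.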

Keywords: iodine uptake, sodium carbonate surface, sodium-cooled fast nuclear reactor, DFT calculations, fission products

Procedia PDF Downloads 151
967 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done with numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness in predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model was modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air-change-per-hour (ACH) conditions. The well-mixed condition and chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0, and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36)×10⁻² m/s and (8.88 ± 0.38)×10⁻² m/s, respectively.
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared over the emission and decay periods. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data, but the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainties related to the measurement devices and particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
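The balance the modified model solves can be sketched as the standard well-mixed room equation extended with a first-order particle deposition loss rate; a minimal sketch of its analytic solution (variable names and the contaminant-free-supply-air assumption are mine):

```python
from math import exp

def wmr_concentration(t, gen_rate, volume, ach, beta, c0=0.0):
    """Analytic solution of the well-mixed room mass balance
        dC/dt = G/V - (lambda + beta) * C,
    where lambda is the air change rate (ACH, 1/h) and beta is a
    first-order particle deposition loss rate (1/h), as in the
    modified model.  Supply air is assumed contaminant-free.
    t in hours, G in particles/h, V in m^3, returns particles/m^3."""
    loss = ach + beta                      # total removal rate, 1/h
    c_ss = gen_rate / (volume * loss)      # steady-state concentration
    return c_ss + (c0 - c_ss) * exp(-loss * t)
```

Setting `gen_rate=0` with `c0` equal to the steady-state value reproduces the decay period; the classic WMR model is recovered with `beta=0`, which is why neglecting deposition overestimates decay-period concentrations.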

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 103
966 Construction and Analysis of Tamazight (Berber) Text Corpus

Authors: Zayd Khayi

Abstract:

This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, even if only in a small way, we constructed a diachronic corpus of the Tamazight language and developed a program tool. In addition, this work is devoted to building that tool to analyze the different aspects of Tamazight, with its different dialects used in North Africa, specifically in Morocco. It focuses on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin script was a good choice because of the many sources available in it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts that cover a long historical period. It is free, and it will be useful for further investigations. The texts were transformed into XML format with standardization as the goal. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, the original user interface and software prototype were developed by combining web design technologies and Python. The corpus tool provides users with the ability to distinguish easily between feminine/masculine nouns and verbs. The interface is available in three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this work was done manually. Within corpus linguistics, there is currently no commonly accepted approach to the classification of texts. Texts are distinguished into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. The search function can retrieve the types of words searched for, such as feminine/masculine nouns and verbs. Nouns are divided into two parts.
Gender in the corpus has two forms. The neutral form of the word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i': Asklu (tree), udi (cheese), ighef (head). Verbs in the corpus are given for the first person singular and plural, which take the suffixes 'agh', 'ex', and 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following features of this corpus: listing all tokens; listing unique words; computing lexical diversity; and executing different grammatical queries. To conclude, this corpus has only focused on a small group of parts of speech in the Tamazight language: verbs and nouns. Work is still ongoing on adjectives, pronouns, adverbs, and others.
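As a rough illustration of the kind of features such a tool exposes, here is a minimal sketch in Python (the tool's actual implementation is not described; the function names are mine, and the gender heuristic is a direct, simplified reading of the affix rules above, which will misclassify exceptions):

```python
import re
from collections import Counter

def corpus_stats(text):
    """Token list, sorted unique-word list, and lexical diversity
    (type/token ratio) -- three of the features the tool provides."""
    tokens = re.findall(r"\w+", text.lower())
    types = sorted(set(tokens))
    diversity = len(types) / len(tokens) if tokens else 0.0
    return tokens, types, diversity

def is_feminine(noun):
    """Heuristic from the gender rules above: feminine nouns carry
    the t-...-t circumfix (e.g. tarbat, taxamt) or the t-...-a
    pattern (e.g. tasa, tawja); masculine nouns typically begin
    with 'a', 'u', or 'i' (e.g. asklu, udi, ighef)."""
    n = noun.lower()
    return n.startswith("t") and (n.endswith("t") or n.endswith("a"))
```

A query layer over the TEI-encoded texts could combine `is_feminine` with `Counter(tokens)` to return, say, the most frequent feminine nouns in a given dialect or period.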

Keywords: Tamazight (Berber) language, corpus linguistic, grammar rules, statistical methods

Procedia PDF Downloads 66
965 From Design, Experience and Play Framework to Common Design Thinking Tools: Using Serious Modern Board Games

Authors: Micael Sousa

Abstract:

Board games (BGs) are thriving as new designs emerge from the hobby community and reach greater audiences all around the world. Although digital games gather most of the attention in the game studies and serious games research fields, the post-digital movement helps to explain why, in a world dominated by digital technologies, analog experiences are still unique and irreplaceable to users, allowing innovation in new hybrid environments. The new BG designs are part of these post-digital and hybrid movements because they result from the use of powerful digital tools that enable production and knowledge sharing about BGs and their unique face-to-face social experiences. These new BGs, defined as modern by many authors, provide innovative designs and unique game mechanics that are not yet fully explored by the main serious games (SG) approaches. Even the most established SG frameworks, which treat SGs as fun games implemented to achieve predefined goals, need more development, especially when considering modern BGs. Despite many anecdotal perceptions, researchers are only now starting to rediscover BGs and demonstrate their potential. They are proving that BGs are easy to adapt and to grasp by non-expert players in experimental approaches, with the possibility of straightforward adaptation to players' profiles and serious objectives even during gameplay. Although there are many design thinking (DT) models and practices, their relations with SG frameworks are also underdeveloped, mostly because this is a new research field lacking theoretical development and the systematization of experimental practices. Using BGs as case studies promises to help develop these frameworks.
Departing from the Design, Experience, and Play (DPE) framework and considering the Common Design Thinking Tools (CDST), this paper proposes a new experimental framework for the adaptation and development of modern BG design for DT: the Design, Experience, and Play for Think (DPET) experimental framework. This is done through the systematization of the DPE and CDST approaches applied in two case studies, where two different sequences of adapted BGs were employed to establish a collaborative DT process. The two sessions occurred with different participants, in different contexts, and with different sequences of games for the same DT approach. The first session took place at the Faculty of Economics of the University of Coimbra, in a training session on serious games for project development. The second session took place at Casa do Impacto during The Great Village Design Jam light. Both sessions had the same duration and were designed to progressively achieve DT goals, using BGs as SGs in a collaborative process. The results from the sessions show that a sequence of BGs, when properly adapted to the DPET framework, can generate a viable and innovative process of collaborative DT that is productive, fun, and engaging. The proposed DPET framework intends to help establish how new SG solutions could be defined for new goals through flexible DT. Applications in other areas of research and development can also benefit from these findings.

Keywords: board games, design thinking, methodology, serious games

Procedia PDF Downloads 112
964 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil

Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee

Abstract:

Graphene is an extraordinary 2D material that shows superior electrical, optical, and mechanical properties for applications such as transparent contacts. Furthermore, the chemical vapor deposition (CVD) technique facilitates the synthesis of large-area, transferable graphene. This abstract describes the use of Cu foil with high surface free energy (SFE) and a high density of nanoscale surface kinks (a rough surface) for CVD graphene growth. This is the opposite of the modern use of smooth catalytic surfaces for high-quality graphene growth, but the controllable rough morphology opens a new route to the fast synthesis of graphene as a continuous film (growth in less than 50 s with a short annealing process, compared with the conventional, longer 30 min growth). The experiments showed that a high-SFE condition and surface kinks on the Cu(100) crystal plane of the catalytic Cu surface facilitated the synthesis of graphene with a highly monolayer and continuous nature, because they promote the adsorption of C species at high concentration, leading to faster nucleation and growth of graphene. The fast nucleation and growth lower the diffusion of C atoms to the Cu-graphene interface, resulting in no, or negligible, formation of bilayer patches. High-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form rough Cu foil with high SFE (54.92 mJ m⁻²). This surface was used to grow graphene by CVD at 1000 °C for 50 s. The kink-like high-SFE sites on the Cu(100) crystal plane facilitated faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared to other, smoother, low-SFE Cu surfaces, such as a smoother surface prepared by the redeposition of evaporated Cu atoms during annealing (RRMS of 13.3 nm).
Although the high-SFE condition was favorable for synthesizing graphene with a monolayer and continuous nature, it failed to maintain a clean (the surface contains amorphous C clusters) and defect-free condition (ID/IG of 0.46) because of the high SFE of the Cu foil at the graphene growth stage. A post-annealing process was used to heal the film and overcome the aforementioned problems. Different annealing atmospheres, such as CH₄ and H₂, were used. A negligible change in graphene nature (number of layers and continuity) was observed, but there was a significant difference in graphene quality, as the ID/IG ratio of the graphene was reduced to 0.21 after post-annealing in H₂ gas. In addition to the change in graphene defectiveness, FE-SEM images show a reduction of C cluster contamination on the surface. High-SFE conditions are favorable for forming graphene as a monolayer, continuous film, but they fail to provide defect-free graphene. Furthermore, a plasma-modified high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can be used to reduce the defectiveness.

Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy

Procedia PDF Downloads 244
963 Determination of Genetic Markers, Microsatellites Type, Liked to Milk Production Traits in Goats

Authors: Mohamed Fawzy Elzarei, Yousef Mohammed Al-Dakheel, Ali Mohamed Alseaf

Abstract:

Modern molecular techniques, such as single-marker analysis of traits linked to markers, can provide rapid and accurate genetic results. In the last two decades of the last century, the applications of molecular techniques reached an advanced stage in cattle, sheep, and pigs. In goats, especially in our region, the application of molecular techniques still lags far behind other species. As reported by many researchers, the microsatellite marker is one of the suitable markers for such studies. Single-marker analysis of traits of interest is a technique that allows us to select animals early without the need to map the entire genome. The simplicity, applicability, and low cost of this technique have given it a wide range of applications in many areas of genetics and molecular biology. This technique also provides a useful approach for evaluating genetic differentiation, particularly in populations that are genetically poorly known. The expected breeding value (EBV) and yield deviation (YD) are considered the parameters most used for studying the linkage between quantitative characteristics and molecular markers, since these values are raw data corrected for non-genetic factors. A total of 17 microsatellite markers (from chromosomes 6, 14, 18, 20, and 23) were used in this study to search for regions that could be responsible for genetic variability in some milk traits and for chromosomal regions that explain part of the phenotypic variance. Results of single-marker analyses were used to identify the linkage between microsatellite markers and variation in the EBVs of the following traits: milk yield, protein percentage, fat percentage, litter size and weight at birth, and litter size and weight at weaning. In the estimates of the parameters from forward and backward solutions using a stepwise regression procedure on the milk yield trait, only two markers, OARCP9 and AGLA29, showed a highly significant effect (p≤0.01) in both backward and forward solutions.
The forward solution for the different equations indicated that the R² of these equations depended mainly on the two partial regression coefficients (βi) for these markers. For the milk protein trait, four markers showed significant effects: BMS2361 and CSSM66 (p≤0.01), and BMS2626 and OARCP9 (p≤0.05). Similarly, four markers (MCM147, BM1225, INRA006, and INRA133) showed a highly significant effect (p≤0.01) in both backward and forward solutions in association with the milk fat trait. For the litter size at birth and at weaning traits, only one marker each (BM143 (p≤0.01) and RJH1 (p≤0.05), respectively) showed a significant effect in the backward and forward solutions. In the estimates of the parameters from forward and backward solutions using the stepwise regression procedure on the litter weight at birth (LWB) trait, only one marker (MCM147) showed a highly significant effect (p≤0.01), and two markers (ILSTS011, CSSM66) showed a significant effect (p≤0.05) in the backward and forward solutions.
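The marker-ranking step of such a stepwise procedure can be sketched as follows. This is a toy, marginal-R² version of forward selection (the paper's procedure refits partial regression coefficients and tests significance at each step, which this sketch omits), with hypothetical marker names and data:

```python
def r_squared(x, y):
    """R^2 of a simple linear regression of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0 or syy == 0:
        return 0.0
    return (sxy * sxy) / (sxx * syy)

def forward_select(markers, ebv, k=2):
    """Greedy ranking: score each marker genotype vector by the EBV
    variance it explains on its own, and keep the top k candidates
    for the full stepwise fit."""
    scored = sorted(markers.items(),
                    key=lambda kv: r_squared(kv[1], ebv),
                    reverse=True)
    return [name for name, _ in scored[:k]]
```

Using EBVs as the response, as in the study, means the non-genetic effects have already been removed before the markers are screened.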

Keywords: microsatellites marker, estimated breeding value, stepwise regression, milk traits

Procedia PDF Downloads 93
962 Evaluation of River Meander Geometry Using Uniform Excess Energy Theory and Effects of Climate Change on River Meandering

Authors: Youssef I. Hafez

Abstract:

Since ancient times, rivers have been the favorite and fostering places for people and civilizations to live along river banks. However, due to floods and droughts, and especially the severe conditions brought by global warming and climate change, river channels are continuously evolving and moving in the lateral direction, changing their planform either by straightening curved reaches (meander cut-off) or by increasing meander curvature. The lateral shift or shrinkage of a river channel severely affects the river banks and the floodplain, with a tremendous impact on the surrounding environment. Therefore, understanding the formation and the ongoing processes of river channel meandering is of paramount importance. So far, in spite of the huge number of publications on river meandering, there has not been a satisfactory theory or approach that provides a clear explanation of the formation of river meanders and the mechanics of their associated geometries. In particular, two parameters are often needed to describe meander geometry. The first is a scale parameter, such as the meander arc length. The second is a shape parameter, such as the maximum angle a meander path makes with the channel's mean down-path direction. These two parameters, if known, determine the meander path and geometry, as, for example, when they are incorporated in the well-known sine-generated curve. In this study, a uniform excess energy theory is used to explain the origin and mechanics of the formation of river meandering. This theory holds that the longitudinal imbalance between the valley and channel slopes (the former being greater than the latter) leads to the formation of a curved meander channel in order to reduce the excess energy through its expenditure as transverse energy loss.
Two relations are developed based on this theory: one for the determination of the river channel radius of curvature at the bend apex (shape parameter), and the other for the determination of river channel sinuosity. The sinuosity equation performed very well when applied to existing available field data. In addition, existing model data were used to develop a relation between the meander arc length and the Darcy-Weisbach friction factor. The meander wavelength was then determined from the equations for the arc length and the sinuosity. The developed equation compared well with available field data. Effects of the transverse bed slope and grain size on river channel sinuosity are addressed. In addition, the concept of maximum channel sinuosity is introduced in order to explain the changes in river channel planform due to changes in flow discharges and sediment loads induced by global warming and climate change.
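The sine-generated curve mentioned above ties the two meander parameters together: the channel direction angle varies as theta(s) = omega·sin(2πs/L), where omega is the shape parameter (maximum deflection angle) and L the arc length (scale parameter). A short numerical sketch tracing one meander wavelength and computing the resulting sinuosity (function and variable names are illustrative):

```python
from math import sin, cos, pi, radians

def sine_generated_path(omega_deg, arc_length, n=1000):
    """Trace a meander path from the sine-generated curve
        theta(s) = omega * sin(2*pi*s / arc_length)
    by midpoint integration of dx = cos(theta) ds, dy = sin(theta) ds.
    Returns the (x, y) coordinates and the sinuosity, i.e. channel
    length divided by straight-line (down-valley) length."""
    omega = radians(omega_deg)
    ds = arc_length / n
    x = y = 0.0
    xs, ys = [0.0], [0.0]
    for i in range(n):
        s = (i + 0.5) * ds                       # midpoint of segment i
        theta = omega * sin(2 * pi * s / arc_length)
        x += ds * cos(theta)
        y += ds * sin(theta)
        xs.append(x)
        ys.append(y)
    return xs, ys, arc_length / x
```

A larger maximum deflection angle gives a more sinuous channel, so an equation predicting sinuosity (such as the one developed here) implicitly fixes the shape parameter once the arc length is known.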

Keywords: river channel meandering, sinuosity, radius of curvature, meander arc length, uniform excess energy theory, transverse energy loss, transverse bed slope, flow discharges, sediment loads, grain size, climate change, global warming

Procedia PDF Downloads 223
961 Evaluating the Energy Transition of a Complex of Buildings in a Historic Site of Rome toward Zero Emissions for a Sustainable Future

Authors: Silvia Di Turi, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Domenico Palladino

Abstract:

Recent European policies have set ambitious targets aimed at significantly reducing CO₂ emissions by 2030, with a long-term vision of transforming existing buildings into Zero-Emission Buildings (ZEmB) by 2050. This vision represents a key point for the energy transition, as the building stock currently accounts for 36% of total energy consumption across Europe, mainly due to its poor energy performance. The challenge of Zero-Emission Buildings is particularly felt in Italy, where a significant number of buildings have historical significance or are situated within protected/constrained areas. Furthermore, an estimated 70% of the national building stock was built before 1976, indicating a widespread issue of poor energy performance. Addressing the energy inefficiency of these buildings is crucial to refining a comprehensive energy renovation approach aimed at facilitating their energy transition. In this framework, the current study focuses on analysing a challenging complex of buildings to be entirely restored through significant energy renovation interventions. The goal is to recover these disused buildings, situated in a significant archaeological zone of Rome, contributing to the restoration and reintegration of this historically valuable site, while also offering insights useful for achieving zero-emission requirements for buildings within such contexts. In pursuit of meeting the stringent zero-emission requirements, a comprehensive study was carried out to assess the complex of buildings, envisioning substantial renovation measures for the building envelope and plant systems and incorporating renewable energy system solutions, while always respecting and preserving the historic site. An energy audit of the complex of buildings was performed to determine the actual energy consumption for each energy service, adopting hourly calculation methods.
Subsequently, significant energy renovation interventions on both the building envelope and mechanical systems were examined, respecting the historical value and preservation of the site. These retrofit strategies were investigated with threefold aims: 1) to recover the existing buildings while ensuring the energy efficiency of the whole complex, 2) to explore which solutions allow achieving and facilitating the ZEmB status, and 3) to balance the energy transition requirements with sustainability aspects in order to preserve the historic value of the buildings and site. This study has pointed out the potential and the technical challenges associated with implementing renovation solutions for such buildings, representing one of the first attempts towards realizing this ambitious target for this type of building.

Keywords: energy conservation and transition, complex of buildings in historic site, zero-emission buildings, energy efficiency recovery

Procedia PDF Downloads 76
960 Tourism Management of the Heritage and Archaeological Sites in Egypt

Authors: Sabry A. El Azazy

Abstract:

The archaeological heritage sites are among the most important tourist attractions worldwide. Egypt has various archaeological sites and historical locations that are classified within the list of the archaeological heritage destinations in the world, such as Cairo, Luxor, Aswan, Alexandria, and Sinai. This study focuses on how to manage the archaeological sites and provide them with all services according to travelers' needs. Tourism management depends on strategic planning for supporting the national economy and sustainable development. Additionally, tourism management has to apply highly effective standards of security, promotion, advertisement, sales, and marketing while taking into consideration the preservation of monuments. In Egypt, the archaeological heritage sites must be well managed and protected, which would assist tourism management, especially in times of crisis. Recently, the monumental places and archaeological heritage sites were affected by unstable conditions and were threatened. It is essential to focus on preserving our heritage. Moreover, more effort and cooperation between the tourism organizations and the ministry of archaeology are needed in order to protect the archaeology and promote the tourism industry. Methodology: Qualitative methods have been used as the overall approach to this study. Interviews and observations have provided the researcher with the required in-depth insight into the research subject. The researcher was a lecturer in tourist guidance, a role that allowed visits to all historical sites in Egypt. Additionally, the researcher had the privilege to communicate with tourism specialists and attend meetings, conferences, and events that focused on the research subject. 
Objectives: The main purpose of the research was to gain information in order to develop theoretical research on how to effectively benefit from those historical sites both economically and culturally, and to pursue further research and scientific studies well suited to the tourism and hospitality sector. The researcher aims to present further studies in a field related to tourism and archaeological heritage using previous experience. Pursuing this course of study enables the researcher to acquire the necessary abilities and competencies to achieve the set goal successfully. Results: Professional tourism management focuses on making Egypt one of the most important destinations in the world and on providing the heritage and archaeological sites with all the services that will place those locations on the international tourism map. Tourists' interest in visiting Egypt and a flourishing tourism sector support and strengthen Egypt's national economy and the local community, taking into consideration the preservation of our heritage and archaeology. Conclusions: Egypt has many tourism attractions represented in its heritage, archaeological sites, and touristic places. These places need more attention and effort to be included in tourism programs and opened to visitors from all over the world. These efforts will encourage both local and international tourism to see our great civilization and provide different touristic activities.

Keywords: archaeology, archaeological sites, heritage, ministry of archaeology, national economy, touristic attractions, tourism management, tourism organizations

Procedia PDF Downloads 144
959 Understanding Project Failures in Construction: The Critical Impact of Financial Capacity

Authors: Nnadi Ezekiel Oluwaseun Ejiofor

Abstract:

This research investigates the effects of poor cost estimation, material cost variations, and payment punctuality on the financial health and execution of construction projects in Nigeria. To achieve the objectives of the study, a quantitative research approach was employed, and data were gathered through an online survey of 74 construction industry professionals consisting of quantity surveyors, contractors, and other professionals. The survey gathered input on cost estimation errors, price fluctuations, and payment delays, among other factors. Responses were analyzed using a five-point Likert scale and the Relative Importance Index (RII). The findings demonstrated that errors in cost estimating in the Bill of Quantities (BOQ) have a strongly negative impact on the reputation and image of project participants. The greatest effect was on contractors' likelihood of obtaining future work (mean value = 3.42), followed by quantity surveyors' likelihood of obtaining new commissions (mean value = 3.40). Cost underestimation also exposed participants to serious risks, most notably shortages of funds severe enough to raise fears of bankruptcy (mean value = 3.78). There was considerable financial damage as a result of cost underestimation, with contractors suffering the worst loss in profit (mean value = 3.88). Every expense carries its own risk and uncertainty, and pressure on the cost of materials and every other expense attributed to the construction and completion of a structure adds risk to a project's performance figures. The greatest weight (mean importance score = 4.92) was attributed to market inflation in building material prices, followed by increased transportation charges (mean importance score = 4.76). 
On the other hand, payment delays arising from client-side issues such as poor availability of funds (RII = 0.71) and from contracting issues such as disagreements on the valuation of work done (RII = 0.72) were also found to lead to project delays and additional costs. The results affirm the importance of proper cost estimation for organizational financial health, project risk management, and completion within set time limits. The study recommends improving costing methods, fostering better communication with stakeholders, and managing delays through contractual and financial controls. This study enhances the existing literature on construction project management by suggesting ways to deal with adverse cost inaccuracies, material price fluctuations, and payment delays which, if addressed, would greatly improve the economic performance of the construction business.
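The RII ranking described above can be reproduced in a few lines. This is a minimal sketch of the standard formula RII = ΣW / (A × N), where W are the Likert ratings, A is the top of the scale (5 here), and N is the number of respondents; the ratings below are hypothetical, not the study's data.

```python
def relative_importance_index(ratings, scale_max=5):
    """RII = sum(W) / (A * N): W are the Likert ratings, A the top of the
    scale, N the number of respondents. Result lies in (0, 1]."""
    return sum(ratings) / (scale_max * len(ratings))

# Hypothetical responses from 10 respondents on a 5-point Likert scale.
ratings = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
rii = relative_importance_index(ratings)  # 41 / 50 = 0.82
```

Factors are then ranked by their RII, with values near 1 (such as those reported for client fund availability and valuation disagreements) marking the most important delay causes.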

Keywords: cost estimation, construction project management, material price fluctuations, payment delays, financial impact

Procedia PDF Downloads 8
958 Temporal and Spatio-Temporal Stability Analyses in Mixed Convection of a Viscoelastic Fluid in a Porous Medium

Authors: P. Naderi, M. N. Ouarzazi, S. C. Hirata, H. Ben Hamed, H. Beji

Abstract:

The stability of mixed convection in a Newtonian fluid medium heated from below and cooled from above, also known as the Poiseuille-Rayleigh-Bénard problem, has been extensively investigated in the past decades. To our knowledge, mixed convection in porous media has received much less attention in the published literature. The present paper extends the mixed convection problem in porous media to the case of a viscoelastic fluid flow, owing to its numerous environmental and industrial applications such as the extrusion of polymer fluids, solidification of liquid crystals, suspension solutions, and petroleum activities. Without a superimposed through-flow, the natural convection problem of a viscoelastic fluid in a saturated porous medium has already been treated, including the effects of the viscoelastic properties of the fluid on the linear and nonlinear dynamics of the thermoconvective instabilities. In that configuration, the elasticity of the fluid can lead either to a Hopf bifurcation, giving rise to oscillatory structures in the strongly elastic regime, or to a stationary bifurcation in the weakly elastic regime. The objective of this work is to examine the influence of the main horizontal flow on the linear characteristics of these two types of instabilities. Under the Boussinesq approximation and Darcy's law extended to a viscoelastic fluid, a temporal stability approach shows that the conditions for the appearance of longitudinal rolls are identical to those found in the absence of through-flow. For general three-dimensional (3D) perturbations, a Squire transformation allows the deduction of the complex frequencies associated with the 3D problem from those obtained by solving the two-dimensional one. 
The numerical resolution of the eigenvalue problem shows that the through-flow has a destabilizing effect and selects a convective configuration organized in purely transverse rolls which oscillate in time and propagate in the direction of the main flow. In addition, by using the mathematical formalism of absolute and convective instabilities, we study the nature of unstable three-dimensional disturbances. It is shown that for a non-vanishing through-flow, general three-dimensional instabilities are convectively unstable, which means that in the absence of a continuous noise source these instabilities are advected out of the porous medium and no long-term pattern is observed. In contrast, purely transverse rolls may undergo a transition to the absolute instability regime and therefore affect the porous medium everywhere, even in the absence of a noise source. The absolute instability threshold, the frequency, and the wave number associated with purely transverse rolls are determined as functions of the Péclet number and the viscoelastic parameters. Results are discussed and compared to those obtained from laboratory experiments in the case of Newtonian fluids.
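The "Darcy's law extended to a viscoelastic fluid" invoked above is commonly written in a modified-Darcy (Oldroyd-B type) form. The sketch below uses our own notation (relaxation time λ₁, retardation time λ₂), which may differ from the authors':

```latex
% Modified Darcy law for an Oldroyd-B-type viscoelastic fluid in a porous
% medium, with Boussinesq buoyancy in the density:
\left(1+\lambda_1\,\frac{\partial}{\partial t}\right)
\left(\nabla P - \rho\,\mathbf{g}\right)
= -\,\frac{\mu}{K}\left(1+\lambda_2\,\frac{\partial}{\partial t}\right)\mathbf{V},
\qquad
\rho = \rho_0\left[1-\beta\,(T-T_0)\right]
```

Setting λ₁ = λ₂ = 0 recovers the classical Darcy law; broadly speaking, it is the relative size of the two times that decides between the stationary and the oscillatory (Hopf) bifurcation discussed above.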

Keywords: instability, mixed convection, porous media, viscoelastic fluid

Procedia PDF Downloads 341
957 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective

Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan

Abstract:

Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including limited availability of a computer, this perspective does not consider the important role paper, such as the nurses' handover sheet, plays in practice. The purpose of this study is to confirm the hypothesis that paper use post-EHR adoption continues because paper provides both a cognitive tool (that assists with workflow) and a compensation tool (to circumvent usability challenges). Distinguishing the different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward and specialist day-care) that had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses. Data were transcribed, themes were extracted using an inductive bottom-up coding approach, and a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While two distinct motivations for paper use post-EHR adoption were confirmed by the data - paper as a cognitive tool and paper as a compensation tool - a further finding was that there was an overlap between the two uses. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid due to its nature (easy to access and annotate), or vice versa. Rather than presenting paper persistence as having two distinct motivations, it is more useful to describe it as lying on a continuum with the compensation tool and the cognitive tool at either pole. Paper as a cognitive tool referred to pages such as the nurses' handover sheet. These did not form part of the patient's record, although information could be transcribed from one to the other. 
Findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information and so continued to be incorporated into the nurses' work. By comparison, paper used as a compensation tool, such as pre-printed care plans stored in the patient's record, appears to have been instigated in reaction to usability challenges. In these instances, it is expected that paper use could reduce or cease when the underlying problem is addressed. There is a danger that, as paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, either as a cognitive tool, a compensation tool, or a combination of both. By fully understanding its utility and nuances, organisations can avoid evaluating all incidences of paper use (post-EHR adoption) as arising from usability challenges. Instead, suitable remedies for paper persistence can be targeted at the root cause.

Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence

Procedia PDF Downloads 442
956 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field

Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed

Abstract:

Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a fibrous protein (gelatin) and an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications. The functional properties of this anionic polysaccharide can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. The complex synthesis of collagen results in an extracellular assembly organized at several levels: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, and each level corresponds to an organization recognized by the cellular and metabolic system. Gelatin gel formation involves the triple-helical refolding of denatured collagen chains; this gel has been the subject of numerous studies, and it is now known that its properties depend only on the proportion of triple helices forming the network. Chemical modification of this system is fairly well controlled. 
Observing the dynamics of the triple helix may therefore be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Since gelatin is central to many industrial processes, understanding and analyzing the molecular dynamics induced by the triple helix during gelatin transitions can have great economic importance in all fields, especially food. The goal is to understand the possible mechanisms involved depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the helix are strongly influenced by the nature of the medium. Our goal is to minimize, as far as possible, changes in the helical structure in order to keep gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the rate of helicity of gelatin.
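Polarimetric estimates of helix content typically assume that the measured specific rotation is a linear mix of the all-coil and all-helix limits. The sketch below illustrates that calculation; the numerical rotation values are hypothetical placeholders, since the actual limiting values depend on wavelength, temperature and solvent.

```python
def helix_fraction(alpha_obs, alpha_coil, alpha_helix):
    """Triple-helix fraction estimated from specific optical rotation,
    assuming linear mixing:
    alpha_obs = chi * alpha_helix + (1 - chi) * alpha_coil."""
    return (alpha_obs - alpha_coil) / (alpha_helix - alpha_coil)

# Hypothetical specific rotations (degrees); real limiting values must be
# calibrated for the wavelength and temperature of the polarimeter used.
chi = helix_fraction(alpha_obs=-250.0, alpha_coil=-140.0, alpha_helix=-360.0)
# chi = 0.5: the sample is halfway between the coil and helix limits
```

Tracking chi over time or across NaCMC concentrations is one way to quantify the protective effect on the helical structure discussed above.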

Keywords: gelatin, sodium carboxymethylcellulose, gelatin-NaCMC interaction, rate of helicity, polarimetry

Procedia PDF Downloads 312
955 Understanding the Diversity of Antimicrobial Resistance among Wild Animals, Livestock and Associated Environment in a Rural Ecosystem in Sri Lanka

Authors: B. M. Y. I. Basnayake, G. G. T. Nisansala, P. I. J. B. Wijewickrama, U. S. Weerathunga, K. W. M. Y. D. Gunasekara, N. K. Jayasekera, A. W. Kalupahana, R. S. Kalupahana, A. Silva- Fletcher, K. S. A. Kottawatta

Abstract:

Antimicrobial resistance (AMR) has attracted significant attention worldwide as an emerging threat to public health. Understanding the role of livestock and wildlife, together with the shared environment, in the maintenance and transmission of AMR is of utmost importance, given their interactions with humans, for combating the issue in a One Health approach. This study aims to investigate the extent of AMR distribution among wild animals, livestock, and the environment cohabiting in a rural ecosystem in Sri Lanka: Hambegamuwa. A one-square-kilometre area at Hambegamuwa was mapped using GPS as the sampling area. The study was conducted for a period of five months from November 2020. Voided fecal samples were collected from 130 wild animals and 123 livestock (buffalo, cattle, chicken, and turkey), together with 36 soil and 30 water samples associated with livestock and wildlife. From the samples, Escherichia coli (E. coli) was isolated, and AMR profiles were investigated for 12 antimicrobials using the disk diffusion method following the CLSI standard. Seventy percent (91/130) of wild animals, 93% (115/123) of livestock, 89% (32/36) of soil, and 63% (19/30) of water samples were positive for E. coli. A maximum of two E. coli isolates from each sample, totaling 467, were tested for sensitivity, of which 157, 208, 62, and 40 were from wild animals, livestock, soil, and water, respectively. The highest resistance in E. coli from livestock (13.9%) and wild animals (13.3%) was against ampicillin, followed by streptomycin. Apart from that, E. coli from livestock and wild animals revealed resistance mainly against tetracycline, cefotaxime, trimethoprim/sulfamethoxazole, and nalidixic acid at levels below 10%. Ten cefotaxime-resistant E. coli were reported from wild animals, including four elephants, two land monitors, a pigeon, a spotted dove, and a monkey, which was a significant finding. E. coli from soil samples reflected resistance primarily against ampicillin, streptomycin, and tetracycline at levels lower than in livestock/wildlife. Two water samples had cefotaxime-resistant E. coli as the only resistant isolates out of the 30 water samples tested. Of the total E. coli isolates, 6.4% (30/467) were multi-drug resistant (MDR), comprising 18, 9, and 3 isolates from livestock, wild animals, and soil, respectively. Among the 18 livestock MDRs, the highest number (13/18) was from poultry. The nine wild animal MDRs were from spotted dove, pigeon, land monitor, and elephant. Based on CLSI criteria, 60 E. coli isolates, of which 40, 16, and 4 were from livestock, wild animals, and the environment, respectively, were screened for Extended Spectrum β-Lactamase (ESBL) production. Despite being a rural ecosystem, AMR and MDR are prevalent, even at low levels. E. coli from livestock, wild animals, and the environment reflected a similar spectrum of AMR, with ampicillin, streptomycin, tetracycline, and cefotaxime being the predominant antimicrobials of resistance. Wild animals may have acquired AMR via direct contact with livestock or via the environment, as antimicrobials are rarely used in wild animals. A source attribution study including the effects of the natural environment can be proposed, as the presence of AMR even in this less contaminated rural ecosystem is alarming.
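MDR tallies of the kind reported above commonly follow the convention that an isolate is multi-drug resistant when it resists agents from at least three antimicrobial classes. A minimal sketch of that bookkeeping follows; the drug-to-class mapping is illustrative and may not match the study's exact 12-drug panel.

```python
# Illustrative drug-to-class mapping (not the study's exact panel).
DRUG_CLASS = {
    "ampicillin": "penicillins",
    "cefotaxime": "cephalosporins",
    "streptomycin": "aminoglycosides",
    "tetracycline": "tetracyclines",
    "nalidixic acid": "quinolones",
    "trimethoprim/sulfamethoxazole": "folate pathway inhibitors",
}

def is_mdr(resistant_drugs):
    """MDR = resistant to >= 1 agent in >= 3 antimicrobial classes."""
    classes = {DRUG_CLASS[d] for d in resistant_drugs}
    return len(classes) >= 3

# Hypothetical resistance profiles for three isolates.
isolates = [
    ["ampicillin", "streptomycin", "tetracycline"],  # 3 classes -> MDR
    ["ampicillin", "cefotaxime"],                    # 2 classes -> not MDR
    [],                                              # fully susceptible
]
mdr_count = sum(is_mdr(i) for i in isolates)  # 1
```

The same counting over all 467 isolates would yield the 6.4% MDR proportion reported in the abstract.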

Keywords: AMR, Escherichia coli, livestock, wildlife

Procedia PDF Downloads 216
954 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography

Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner

Abstract:

Nowadays, three-dimensional Cone Beam CT (CBCT) has become a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as daily pretreatment patient alignment for radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the amount of radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study we investigate source-detector trajectories with optimal arbitrary orientations, chosen to maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom consisting of several small polytetrafluoroethylene target spheres placed at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the Full-Width at Half-Maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates better spatial resolution of the reconstruction results at the target area. One important feature of interventional radiology is that the imaging targets are very well known, as prior knowledge of patient anatomy (e.g. a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and treat it as the digital phantom in our simulations to find the optimal trajectory for a specific target. 
Based on the simulation phase, we obtain the optimal trajectory, which can then be applied on the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and real data acquisition. Our experimental results, based on both simulation and real data, show that the proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. The proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory while using just one-third the number of projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
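The FWHM figure of merit described above can be computed from a sampled PSF profile. Below is a minimal 1D sketch (the study measures local 3D PSFs; applying this per axis is our simplification), using linear interpolation at the half-maximum crossings.

```python
import numpy as np

def fwhm_1d(profile, spacing=1.0):
    """Full-width at half-maximum of a sampled 1D PSF profile, with linear
    interpolation between samples at the two half-max crossings."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate the left crossing between samples (left-1, left).
    if left > 0:
        x_l = left - (y[left] - half) / (y[left] - y[left - 1])
    else:
        x_l = float(left)
    # Interpolate the right crossing between samples (right, right+1).
    if right < len(y) - 1:
        x_r = right + (y[right] - half) / (y[right] - y[right + 1])
    else:
        x_r = float(right)
    return (x_r - x_l) * spacing

# Gaussian PSF with sigma = 2 samples: the analytic FWHM is
# 2*sqrt(2*ln 2)*sigma, roughly 4.71 sample widths.
x = np.arange(-10, 11)
psf = np.exp(-x**2 / (2 * 2.0**2))
width = fwhm_1d(psf)
```

A trajectory that shrinks this width at a target sphere improves local spatial resolution, which is how the candidate trajectories are scored.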

Keywords: CBCT, C-arm, reconstruction, trajectory optimization

Procedia PDF Downloads 132
953 The Community Stakeholders’ Perspectives on Sexual Health Education for Young Adolescents in Western New York, USA: A Qualitative Descriptive Study

Authors: Sadandaula Rose Muheriwa Matemba, Alexander Glazier, Natalie M. LeBlanc

Abstract:

In the United States, up to 10% of girls and 22% of boys aged 10-14 years have had sex, 5% of them had their first sex before age 11, and the age of first sexual encounter is reported to be as low as 8 years. Over 4,000 adolescent girls aged 10-14 become pregnant every year, and 2.6% of the abortions in 2019 were among adolescents below 15 years. Despite these negative outcomes, little research has been conducted to understand the sexual health education offered to young adolescents ages 10-14. Early sexual health education is one of the most effective strategies to help lower the rates of early pregnancies, HIV infections, and other sexually transmitted infections. Such knowledge is necessary to inform best practices for supporting the healthy sexual development of young adolescents and preventing adverse outcomes. This qualitative descriptive study was conducted to explore community stakeholders' experiences in sexual health education for young adolescents ages 10-14 and to ascertain young adolescents' sexual health support needs. Maximum variation purposive sampling was used to recruit a total sample of 13 community stakeholders, including health education teachers, members of youth-based organizations, and Adolescent Clinic providers in Rochester, New York State, in the United States of America, from April to June 2022. Data were collected through semi-structured individual in-depth interviews and were analyzed using MAXQDA following a conventional content analysis approach. Triangulation, team analysis, and respondent validation were also employed to enhance study rigor. The participants were predominantly female (92.3%) and comprised Caucasians (53.8%), Black/African Americans (38.5%), and Indian-Americans (7.7%), with ages ranging from 23 to 59. 
Four themes emerged: the perceived need for early sexual health education, preferred timing to initiate sexual health conversations, perceived age-appropriate content for young adolescents, and initiating sexual health conversations with young adolescents. The participants described both encouraging and concerning experiences. Most participants were concerned that young adolescents are living in a sexually driven environment and are not given the sexual health education they need, even though they are open to learning sexual health materials. There was consensus on the need to initiate sexual health conversations early, at 4 years of age or younger, to standardize sexual health education in schools, and to make age-appropriate sexual health education progressive. These results show that early sexual health education is essential if young adolescents are to delay sexual debut and prevent early pregnancies, and if the goal of ending the HIV epidemic is to be achieved. However, research is needed on a larger scale to understand how best to implement sexual health education among young adolescents and to inform interventions for implementing contextually relevant sexuality education for this population. These findings call for increased multidisciplinary efforts in promoting early sexual health education for young adolescents.

Keywords: community stakeholders’ perspectives, sexual development, sexual health education, young adolescents

Procedia PDF Downloads 78
952 The Influence of Argumentation Strategy on Student’s Web-Based Argumentation in Different Scientific Concepts

Authors: Xinyue Jiao, Yu-Ren Lin

Abstract:

Argumentation is an essential aspect of scientific thinking that has received wide attention in recent science education reform. The purpose of the present study was to explore the influence of two variables, 'the argumentation strategy' and 'the kind of science concept', on students' web-based argumentation. The first variable was divided into either monological (which refers to an individual's internal discourse and inner chain reasoning) or dialectical (which refers to dialogue interaction between/among people). The second was divided into either descriptive (i.e., macro-level concepts, such as phenomena that can be observed and tested directly) or theoretical (i.e., micro-level concepts that are abstract and cannot be tested directly in nature). The present study applied a quasi-experimental design in which 138 7th-grade students were invited and then assigned randomly to either the monological group (N=70) or the dialectical group (N=68). An argumentation learning program called 'the PWAL' was developed to improve their scientific argumentation abilities, such as arguing from multiple perspectives and based on scientific evidence. Two versions of the PWAL were created. In the individual version, students could propose arguments only through knowledge recall and a self-reflecting process. In the collaborative version, by contrast, students were allowed to construct arguments through peer communication. The PWAL involved three descriptive concept-based topics (units 1, 3 and 5) and three theoretical concept-based topics (units 2, 4 and 6). Three kinds of scaffolding were embedded in the PWAL: a) an argument template, used for constructing evidence-based arguments; b) a model of Toulmin's TAP, which shows the structure and elements of a sound argument; c) a discussion block, which enabled the students to review what had been proposed during the argumentation. Both quantitative and qualitative data were collected and analyzed. 
An analytical framework for coding the arguments students proposed in the PWAL was constructed. The results showed that the argumentation approach had a significant effect on argumentation only in theoretical topics (F(1, 136) = 48.2, p < .001, η2 = 2.62). The post-hoc analysis showed that the students in the collaborative group performed significantly better than the students in the individual group (mean difference = 2.27). However, there was no significant difference between the two groups regarding their argumentation in descriptive topics. Secondly, the students made significant progress in the PWAL from the earlier descriptive or theoretical topics to the later ones. The results enabled us to conclude that the PWAL was effective for students' argumentation, and that peer interaction was essential for students to argue scientifically, especially for the theoretical topics. The follow-up qualitative analysis showed students tended to generate arguments through critical dialogue interactions in the theoretical topics, which prompted them to use more critiques and to evaluate and co-construct each other's arguments. Further explanations regarding the students' web-based argumentation and suggestions for the development of web-based science learning are offered in our discussion.
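As a sanity check on reported effect sizes, partial eta squared can be recovered from an F statistic and its degrees of freedom. With the values reported above, the standard conversion gives roughly 0.26; since η2 cannot exceed 1, the printed 2.62 may reflect a misplaced decimal point, though we cannot confirm this from the abstract alone.

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta squared recovered from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Values reported in the abstract: F(1, 136) = 48.2
eta_sq = partial_eta_squared(48.2, 1, 136)  # approximately 0.26
```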

Keywords: argumentation, collaborative learning, scientific concepts, web-based learning

Procedia PDF Downloads 104
951 Recognition of Spelling Problems during the Text in Progress: A Case Study on the Comments Made by Portuguese Students Newly Literate

Authors: E. Calil, L. A. Pereira

Abstract:

The acquisition of orthography is a complex process involving both lexical and grammatical questions. This learning occurs simultaneously with the mastery of multiple textual aspects (e.g., graphs, punctuation, etc.). However, most research on orthographic acquisition focuses on this acquisition from an autonomous point of view, separated from the process of textual production. This means that its object of analysis is the production of words selected by the researcher, or of requested sentences, in an experimental and controlled setting. In addition, the Spelling Problems (SPs) are identified by the researcher on the sheet of paper. Adopting the perspective of Textual Genetics, from an enunciative approach, this study discusses the SPs recognized by dyads of newly literate students while they are writing a text collaboratively. Six textual production proposals, requested by a 2nd-year teacher of a Portuguese primary school between January and March 2015, were recorded. In our case study we discuss the SPs recognized by the dyad B and L (7 years old). We adopted the Ramos System audiovisual record as a methodological tool. This system allows real-time capture of the text in progress and of the face-to-face dialogue between the students and their teacher, and also captures the body movements and facial expressions of the participants during textual production proposals in the classroom. Under these ecological conditions of multimodal recording of collaborative writing, we could identify the emergence of SPs in two dimensions: i. in the product (finished text): identification of SPs without recursive graphic marks (without erasures) and identification of SPs with erasures, the latter indicating recognition of the SP by the student; ii. in the process (text in progress): identification of comments made by students about recognized SPs. Given this, we analyzed the comments on identified SPs during the text in progress. 
These comments characterize a type of reformulation referred to as Commented Oral Erasure (COE). The COE has two enunciative forms: the Simple Comment (SC), such as ' 'X' is written with 'Y' '; and the Unfolded Comment (UC), such as ' 'X' is written with 'Y' because...'. The spelling COE may occur before or during the writing of the SP (Early Spelling Recognition, ESR) or after the SP has been written (Later Spelling Recognition, LSR). There were 631 words written in the 6 stories produced by the B-L dyad, 145 of them containing some type of SP. While the texts were in progress, the students orally recognized 174 SPs, 46 of which were identified in advance (ESRs) and 128 later (LSRs). If we consider that the 88 erased SPs in the product indicate some form of SP recognition, we can observe that twice as many SPs were recognized orally. The ESR was characterized by SCs, when students asked a colleague or the teacher how to spell a given word. The LSR predominantly presented UCs, verbalizing meta-orthographic arguments, mostly made by L. These results indicate that writing in dyads is an important didactic strategy for promoting metalinguistic reflection, favoring the learning of spelling.

Keywords: collaborative writing, erasure, learning, metalinguistic awareness, spelling, text production

Procedia PDF Downloads 163
950 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin Benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of its slow absorption in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as Spray Drying (SD) and Ionic Gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a widely used polymer in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). It is also important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Microencapsulation quality responses and in vitro gastric and intestinal digestions, at pH 3.35 and 7.8 respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a larger release during intestinal digestion. Two approaches were used to determine this.
The desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM) were applied. Both microencapsulation techniques maintained the integrity of EB at acid pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, under these conditions it is possible to reduce microparticle costs, thanks to a 60% reduction in EB relative to the optimal amount proposed by the DA. For EB-IG, the two optimization techniques (DA and MOO) yielded solutions with different advantages and limitations. Applying the DA, costs can be reduced by 21%, although Y, GD, and ID were 9.5%, 84.8%, and 2.6% lower than in the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
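The desirability approach mentioned above can be sketched as follows. This is a minimal illustration of a Derringer-Suich overall desirability, with hypothetical response ranges and candidate formulations (none of the numbers come from the study): gastric release (GD%) is to be minimized, while intestinal release (ID%), yield (Y%), and encapsulation efficiency (EE%) are to be maximized.

```python
import numpy as np

def d_max(y, lo, hi, s=1.0):
    """Desirability for a response to be maximized (Derringer-Suich)."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s)

def d_min(y, lo, hi, s=1.0):
    """Desirability for a response to be minimized."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s)

def overall_desirability(gd, id_, y, ee):
    # Hypothetical acceptable ranges for each response (illustrative only).
    d = [
        d_min(gd, 0.0, 50.0),    # minimize gastric release (%)
        d_max(id_, 20.0, 100.0), # maximize intestinal release (%)
        d_max(y, 40.0, 95.0),    # maximize yield (%)
        d_max(ee, 40.0, 95.0),   # maximize encapsulation efficiency (%)
    ]
    return float(np.prod(d) ** (1.0 / len(d)))  # geometric mean

# Candidate formulations (GD%, ID%, Y%, EE%) -- illustrative values only.
candidates = {
    "A": (10.0, 80.0, 70.0, 85.0),
    "B": (35.0, 90.0, 60.0, 70.0),
}
best = max(candidates, key=lambda k: overall_desirability(*candidates[k]))
print(best, round(overall_desirability(*candidates["A"]), 3))
```

The geometric mean makes the overall desirability zero if any single response is unacceptable, which is the usual rationale for this aggregation.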

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 131
949 Large-Scale Production of High-Performance Fiber-Metal-Laminates by Prepreg-Press-Technology

Authors: Christian Lauter, Corin Reuter, Shuang Wu, Thomas Troester

Abstract:

Lightweight construction has become increasingly important over the last decades in several applications, e.g. in the automotive or aircraft sector. This is the result of economic and ecological constraints on the one hand and increasing safety and comfort requirements on the other. In the field of lightweight design, different approaches are used due to the specific requirements of the technical systems. The use of endless carbon fiber reinforced plastics (CFRP) offers the largest weight-saving potential, sometimes more than 50% compared to conventional metal constructions. However, industrial applications are very limited because of the cost-intensive manufacturing of the fibers and the production technologies. Other disadvantages of pure CFRP structures concern quality control and damage resistance. One approach to meet these challenges is hybrid materials: CFRP and sheet metal are combined at the material level. This opens up new opportunities for innovative process routes. Hybrid lightweight design results in lower costs due to optimized material utilization and the possibility of integrating the structures into already existing production processes of automobile manufacturers. Recent and current research has pointed out the advantages of two-layered hybrid materials, i.e. the possibility to realize structures with tailored mechanical properties or to divide the curing cycle of the epoxy resin into two steps. Current research work at the Chair for Automotive Lightweight Design (LiA) at Paderborn University focuses on production processes for fiber-metal-laminates. The aim of this work is the development and qualification of a large-scale production process for high-performance fiber-metal-laminates (FML) for industrial applications in the automotive or aircraft sector.
To this end, the prepreg-press-technology is used, in which pre-impregnated carbon fibers and sheet metals are formed and cured in a closed, heated mold. The investigations focus, for example, on the realization of short process chains and cycle times, the reduction of time-consuming manual process steps, and the reduction of material costs. This paper first gives an overview of the principal steps of the production process. Afterwards, experimental results are discussed, concentrating on the influence of different process parameters on the mechanical properties and laminate quality, and on the identification of process limits. Finally, the advantages of this technology compared to conventional FML production processes and other lightweight design approaches are outlined.

Keywords: composite material, fiber-metal-laminate, lightweight construction, prepreg-press-technology, large-series production

Procedia PDF Downloads 240
948 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous phenomena-based process alternatives and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless.
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options across the different functions in the process leads to a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then selected subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
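The combine-and-screen step described above can be sketched as follows. The phenomena list and the single screening rule (phase change requires co-present energy transfer, as in the text) are illustrative stand-ins for the paper's knowledge base; the binary encoding mirrors the (1, 0) vectors passed to the model generator.

```python
from itertools import combinations

# Illustrative phenomena list (names assumed, not from the paper).
PHENOMENA = ["mixing", "reaction", "phase_change",
             "energy_transfer", "vl_equilibrium"]

def feasible(combo):
    """Screening rule from the text: phase change needs energy transfer."""
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False
    return True

def generate_options(min_size=2, max_size=4):
    """Enumerate phenomena combinations, screen them, and encode as binaries."""
    options = []
    for k in range(min_size, max_size + 1):
        for combo in combinations(PHENOMENA, k):
            if feasible(combo):
                # 1 = phenomenon active in this process option.
                options.append(tuple(int(p in combo) for p in PHENOMENA))
    return options

options = generate_options()
print(len(options))
```

In the full methodology each surviving binary vector would be assigned to the functions it can execute and then passed to model generation; here the sketch stops at the screened superstructure.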

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 232
947 Land, History and Housing: Colonial Legacies and Land Tenure in Kuala Lumpur

Authors: Nur Fareza Mustapha

Abstract:

Solutions to policy problems need to be curated to the local context, taking into account the trajectory of the local development path to ensure their efficacy. In Kuala Lumpur, rapid urbanization and migration into the city over the past few decades have increased the demand for housing to accommodate a growing urban population. As a critical factor affecting housing affordability, land supply constraints have been attributed to intensifying market pressures, which grew in tandem with the demands of urban development, along with existing institutional constraints in the governance of land. While demand-side pressures are inevitable given the fixed supply of land, supply-side regulatory constraints distort markets and, if addressed inappropriately, may lead to mistargeted policy interventions. Given Malaysia's historical development, regulatory barriers for land may originate from the British colonial period, when many aspects of the current laws governing tenure were introduced and formalized, and henceforth became ingrained in the system. This research undertakes a postcolonial institutional analysis to uncover the causal mechanism driving the evolution of land tenure systems in post-colonial Kuala Lumpur. It seeks to determine the sources of these shifts, focusing on the incentives and bargaining positions of actors during periods of institutional flux and change. It aims to construct a conceptual framework to further this understanding and to elucidate how this historical trajectory affects current access to urban land markets for housing. Archival analysis is used to outline and analyse the evolution of land tenure systems in Kuala Lumpur, while stakeholder interviews are used to analyse its impact on the current urban land market, with a particular focus on the provision of and access to affordable housing in the city.
Preliminary findings indicate that many aspects of the laws governing tenure that were introduced and formalized during the British colonial period have endured to the present day. Customary rules of tenure were displaced by rules following a European tradition, which found legitimacy through a misguided interpretation of local laws regarding the ownership of land. Colonial notions of race, and their binary view of natives vs. non-natives, have also persisted in the construction and implementation of current legislation regarding land tenure. More concrete findings from this study will generate a more nuanced understanding of the regulatory land supply constraints in Kuala Lumpur, taking into account both the long- and short-term spatial and temporal processes that affect how these rules are created, implemented, and enforced.

Keywords: colonial discourse, historical institutionalism, housing, land policy, post-colonial city

Procedia PDF Downloads 128
946 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has attracted growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be as effective as possible and to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is the research gap studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed towards the two capacity constrained resources, (x) time between failures improvement directed towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid.
Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to the academy is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different allocation strategies and the different positions of the capacity constrained resources. Finally, both the hybrid time to repair and the hybrid time between failures strategies delivered better results than the respective distributed strategies. The main limitation of this study concerns the specific flow shop analyzed; future work can investigate different flow shop configurations, such as a varying number of workstations, different numbers of products, or different positions of the two capacity constrained resources.
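The role of the two failure parameters can be illustrated with the basic Factory Physics relations: availability A = mf / (mf + mr), where mf is the mean time between failures and mr the mean time to repair, and effective capacity = natural rate x A. The sketch below uses hypothetical numbers and a purely static capacity view, so it only shows why allocation across two CCRs matters; the study's conclusion that focused strategies win rests on the full System Dynamics-Factory Physics simulation of lead time, not on this arithmetic.

```python
def availability(mtbf, mttr):
    """Fraction of time a workstation is up (Factory Physics)."""
    return mtbf / (mtbf + mttr)

def effective_rate(base_rate, mtbf, mttr):
    """Effective capacity = natural rate scaled by availability."""
    return base_rate * availability(mtbf, mttr)

# Two identical capacity constrained resources; illustrative units/h and h.
ccr = {"CCR1": {"rate": 20.0, "mtbf": 40.0, "mttr": 8.0},
       "CCR2": {"rate": 20.0, "mtbf": 40.0, "mttr": 8.0}}

def line_capacity(stations):
    """A serial line moves at the pace of its slowest effective station."""
    return min(effective_rate(s["rate"], s["mtbf"], s["mttr"])
               for s in stations.values())

# Focused allocation: spend the whole budget halving MTTR at one CCR.
focused = {k: dict(v) for k, v in ccr.items()}
focused["CCR1"]["mttr"] = 4.0

# Distributed allocation: cut MTTR by 25% at both CCRs instead.
distributed = {k: dict(v) for k, v in ccr.items()}
for s in distributed.values():
    s["mttr"] = 6.0

print(line_capacity(ccr), line_capacity(focused), line_capacity(distributed))
```

With two equally constrained stations, improving only one leaves line capacity pinned at the other, which is exactly the interaction that makes the allocation question non-trivial and motivates simulating the full dynamics.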

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 122
945 The Lighthouse Project: Recent Initiatives to Navigate Australian Families Safely Through Parental Separation

Authors: Kathryn McMillan

Abstract:

A recent study of 8,500 adult Australians aged 16 and over revealed that 62% had experienced childhood maltreatment. In response to multiple recommendations from bodies such as the Australian Law Reform Commission, parliamentary reports, and stakeholder input, a number of key initiatives have been developed to grapple with the difficulties of a federal-state system and to screen and triage high-risk families navigating their way through the court system. The Lighthouse Project (LHP) is a world-first initiative of the Federal Circuit and Family Court of Australia (FCFCOA) to screen family law litigants for major risk factors, including family violence, child abuse, alcohol or substance abuse, and mental ill-health, at the point of filing in all applications that seek parenting orders. It commenced on 7 December 2020 on a pilot basis but has now been expanded to 15 registries across the country. A specialist risk screen, Family DOORS Triage, has been developed, focused on improving the safety and wellbeing of families involved in the family law system through safety planning and service referral, and on differentiated case management based on risk level, with the Evatt List specifically designed to manage the highest-risk cases. Early signs are that this approach is meeting the needs of families with multiple risks moving through the court system. Before the LHP, there was no data available about the prevalence of risk factors experienced by litigants entering the family courts, and it was often assumed that it was the litigation process that was fueling family violence and other risks such as suicidality.
Data from the 2022 FCFCOA annual report indicated that, in parenting proceedings, 70% of matters alleged that a child had been abused or was at risk of abuse, 80% alleged that a party had experienced family violence, 74% alleged that children had been exposed to family violence, 53% alleged that substance misuse by a party had caused, or placed a child at risk of, harm, and 58% alleged that a party's mental health issues had caused, or placed a child at risk of, harm. Those figures reveal the significant overlap between child protection and family violence, both of which are the responsibility of state and territory governments. Since 2020, a further key initiative has been the co-location of child protection and police officials in a number of registries of the FCFCOA. The ability to access, in a time-effective way, details of family violence or child protection orders, weapons licenses, and criminal convictions or proceedings is key to managing issues across the state-federal divide. It ensures a more cohesive and effective response across the family law, family violence, and child protection systems.

Keywords: child protection, family violence, parenting, risk screening, triage

Procedia PDF Downloads 77
944 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel, S. P. Dave, M. V. Shah

Abstract:

Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, various construction activities are taking place, making soil testing a crucial requirement before construction can commence. To achieve this, construction companies and contractors need to periodically conduct soil testing. This study focuses on the process of creating a spatial database that is digitally formatted and integrated with geotechnical data and a Geographic Information System (GIS). Building a comprehensive geotechnical Geo-database involves three essential steps. Firstly, borehole data is collected from reputable sources. Secondly, the accuracy and redundancy of the data are verified. Finally, the geotechnical information is standardized and organized for integration into the database. Once the Geo-database is complete, it is integrated with GIS. This integration allows users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-Values, Soil Classification, Φ-Values, and Bearing Capacity (T/m2). Various interpolation techniques were cross-validated to ensure information accuracy. The GIS map generated by this study enables the calculation of SPT N-Values, Φ-Values, and bearing capacities for different footing widths and various depths. This approach highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to achieve through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers. 
The information generated by this study can be utilized by engineers to make informed decisions during construction activities. For instance, they can use the data to optimize foundation designs and improve site selection. In conclusion, the rapid growth experienced by Ahmedabad requires extensive construction activities, necessitating soil testing. This study focused on the process of creating a comprehensive geotechnical database integrated with GIS. The database was developed by collecting borehole data from reputable sources, verifying its accuracy and redundancy, and organizing the information for integration. The GIS map generated by this study is an efficient solution that offers greater accuracy and generates valuable information that can be used as input for correlation analysis. It also serves as a decision support tool for geotechnical engineers, allowing them to make informed decisions during construction activities.
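Although the study uses a Topographic to Raster tool in GIS, the underlying idea of estimating geotechnical values at unsampled locations from borehole data can be sketched with a simple inverse-distance-weighted (IDW) interpolation; the borehole coordinates and SPT N-values below are hypothetical, and IDW is used here as an illustrative alternative, not the study's method.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = []
    for q in np.asarray(xy_query, float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:              # query coincides with a borehole
            out.append(values[d.argmin()])
            continue
        w = 1.0 / d ** power           # closer boreholes weigh more
        out.append(float(w @ values / w.sum()))
    return np.array(out)

# Hypothetical borehole locations (m) and measured SPT N-values.
boreholes = [(0, 0), (100, 0), (0, 100), (100, 100)]
n_values = [12, 18, 20, 30]
estimate = idw(boreholes, n_values, [(50, 50)])
print(estimate)  # equidistant from all four boreholes -> their mean
```

Cross-validation, as mentioned in the abstract, would repeat this with each borehole held out in turn and compare the predicted N-value against the measured one.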

Keywords: ArcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity

Procedia PDF Downloads 68
943 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work concerns design that facilitates the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, the user's nonverbal communication (such as facial expressions, gestures, and paralinguistics), scene context, and object recognition. Although developed for manual dexterity, the approach can also be applied to other tasks, such as walking, framing prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can improve the naturalness and smoothness of prosthetic limb movements, making them more comfortable and easier to use. Second, it can improve the accuracy and precision of prosthetic limb movements, which is particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback.
This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary first to collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate the control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Because hand gestures and body language shape communication and social interaction, the approach offers users a way to maximize their quality of life, social interaction, and gestural communication.
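The train-then-control loop described above can be sketched with a deliberately simple stand-in for the learned context-to-motion mapping: a ridge-regression map from sensory observations to recorded control inputs. All data below are synthetic, and nothing here reproduces the paper's actual model; it only illustrates "train on (observation, control) pairs, then predict controls in real time".

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: sensory observations (e.g. joint angles, contact
# forces, encoded scene context) paired with recorded control inputs.
n_samples, n_sensors, n_motors = 200, 6, 3
X = rng.normal(size=(n_samples, n_sensors))
true_map = rng.normal(size=(n_sensors, n_motors))
Y = X @ true_map + 0.01 * rng.normal(size=(n_samples, n_motors))  # noisy demos

# Ridge regression: closed-form fit of the observation-to-control mapping.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_sensors), X.T @ Y)

def control_signal(observation):
    """Predict motor commands from a sensory observation (real-time step)."""
    return observation @ W

# Relative reconstruction error on the training demonstrations.
err = np.linalg.norm(control_signal(X) - Y) / np.linalg.norm(Y)
print(round(err, 4))
```

In a deployed system the `control_signal` step would run in the sensing-actuation loop, with the regularization (and, in the paper's framing, the diffusion model) providing robustness to noisy sensory feedback.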

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 101
942 If the Architecture Is in Harmony With Its Surrounding, It Reconnects People With Nature

Authors: Aboubakr Mashali

Abstract:

Context: The paper focuses on the relationship between architecture and nature, emphasizing the importance of incorporating natural elements in design to reconnect individuals with the natural environment. It highlights the positive impact of a harmonious architecture on people's well-being and the environment, as well as the concept of sustainable architecture. Research aim: The aim of this research is to showcase how nature can be integrated into architectural designs, ultimately reestablishing a connection between humans and the natural world. Methodology: The research employs an in-depth approach, delving into the subject matter through extensive research and the analysis of case studies. These case studies provide practical examples and insights into successful architectural designs that have effectively incorporated nature. Findings: The findings suggest that when architecture and nature coexist harmoniously, it creates a positive atmosphere and enhances people's well-being. The use of materials obtained from nature in their raw or minimally refined form, such as wood, clay, stone, and bamboo, contributes to a natural atmosphere within the built environment. Additionally, a color palette inspired by nature, consisting of earthy tones, green, brown, and rusty shades, further enhances the harmonious relationship between individuals and their surroundings. The paper also discusses the concept of sustainable architecture, in which the materials used are renewable and energy consumption is minimal. It acknowledges the efforts of organizations such as the US Green Building Council in promoting sustainable design practices. Theoretical importance: This research contributes to the understanding of the relationship between architecture and nature and highlights the importance of incorporating natural elements into design. It emphasizes the potential of nature-friendly architecture to create greener, more resilient, and sustainable cities.
Data collection and analysis procedures: The researcher gathered data through comprehensive research, examining existing literature and studying relevant case studies. The analysis involved studying the successful implementation of nature in architectural design and its impact on individuals and the environment. Question addressed: The research addresses the question of how nature can be incorporated into architectural designs to reconnect humans with nature. Conclusion: This research highlights the significance of architecture being in harmony with its surroundings, which in turn should be in harmony with nature. By incorporating nature in architectural designs, individuals can rediscover their connection with nature and experience its positive impact on their well-being. The use of natural materials and a color palette inspired by nature further enhances this relationship. Additionally, embracing sustainable design practices contributes to the creation of greener and more resilient cities. This research underscores the importance of nature-friendly architecture in fostering a healthier and more sustainable future.

Keywords: nature, architecture, reconnecting, green cities, sustainable, open spaces, landscape

Procedia PDF Downloads 73
941 Illegal Anthropogenic Activity Drives Large Mammal Population Declines in an African Protected Area

Authors: Oluseun A. Akinsorotan, Louise K. Gentle, Md. Mofakkarul Islam, Richard W. Yarnell

Abstract:

High levels of anthropogenic activity such as habitat destruction, poaching, and encroachment into natural habitat have resulted in significant global wildlife declines. In order to protect wildlife, many protected areas such as national parks have been created. However, it is argued that many protected areas are protected in name only and are often exposed to continued, and often illegal, anthropogenic pressure. In West African protected areas, declines of large mammals were documented between 1962 and 2008. This study aimed to produce occupancy estimates of the remaining large mammal fauna in Old Oyo, the third-largest national park in Nigeria, to compare these with historic estimates, and to quantify levels of illegal anthropogenic activity using a multi-disciplinary approach. Large mammal populations and levels of illegal anthropogenic activity were assessed using empirical field data (camera trapping and transect surveys) in combination with data from questionnaires completed by local villagers and park rangers. Four of the historically recorded species in the park, lion (Panthera leo), hunting dog (Lycaon pictus), elephant (Loxodonta africana), and buffalo (Syncerus caffer), were not detected during field studies, nor were they reported by respondents. In addition, occupancy estimates of hunters and illegal grazers were higher than those of the majority of large mammal species inside the park. This finding was reinforced by responses from the villagers and rangers, whose perception was that large mammal densities in the park were declining and that a large proportion of the local people were entering the park to hunt wild animals and graze their domestic livestock. Our findings also suggest that widespread poverty and a lack of alternative livelihood opportunities, a culture of consuming bushmeat, a lack of education and awareness of the value of protected areas, and weak law enforcement are some of the reasons for the illegal activity.
Law enforcement authorities were often constrained by insufficient on-site personnel and a lack of modern equipment and infrastructure to deter illegal activities. We conclude that there is a need to address the issue of illegal hunting and livestock grazing, via provision of alternative livelihoods, in combination with community outreach programmes that aim to improve conservation education and awareness and develop the capacity of the conservation authorities in order to achieve conservation goals. Our findings have implications for the conservation management of all protected areas that are available for exploitation by local communities.
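Occupancy estimates of the kind reported above are conventionally obtained from detection/non-detection histories, correcting the naive proportion of sites with detections for imperfect detection. The sketch below fits a single-season occupancy model in the spirit of MacKenzie et al. by a coarse grid-search maximum likelihood; the camera-trap data are hypothetical, and this is an illustration of the estimator, not the study's analysis.

```python
import numpy as np

def occupancy_loglik(psi, p, detections, surveys):
    """Log-likelihood of a single-season occupancy model.
    psi = occupancy probability, p = per-survey detection probability,
    detections[i] = surveys with a detection at site i.
    Binomial coefficients are omitted (constant in psi and p)."""
    ll = 0.0
    for d in detections:
        if d > 0:  # site certainly occupied
            ll += np.log(psi) + d * np.log(p) + (surveys - d) * np.log(1 - p)
        else:      # either occupied but never detected, or unoccupied
            ll += np.log(psi * (1 - p) ** surveys + (1 - psi))
    return ll

# Hypothetical camera-trap histories: detections per site over 5 occasions.
detections = [3, 0, 1, 0, 2, 0, 0, 1, 0, 0]
surveys = 5

# Coarse grid-search MLE for occupancy (psi) and detection probability (p).
grid = np.linspace(0.02, 0.98, 49)
psi_hat, p_hat = max(
    ((psi, p) for psi in grid for p in grid),
    key=lambda t: occupancy_loglik(t[0], t[1], detections, surveys),
)
naive = sum(d > 0 for d in detections) / len(detections)
print(naive, round(psi_hat, 2), round(p_hat, 2))
```

Because some occupied sites are missed, the corrected occupancy estimate is at least as large as the naive proportion, which is why detection-corrected estimates are preferred when comparing against historic records.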

Keywords: camera trapping, conservation, extirpation, illegal grazing, large mammals, national park, occupancy estimates, poaching

Procedia PDF Downloads 295