Search results for: total load approach
936 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous work has performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised ways. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are further pretrained on the masked language modeling (MLM) task in an unsupervised manner and then fine-tuned for text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremists of varying degrees. In the domain adaptation process, Twitter data is used as the source domain and Gab data as the target domain. The performance of domain adaptation also depends on cross-domain similarity. Different distance measures such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL have been used to estimate domain similarity; in-domain distances are expected to be small and between-domain distances large. Previous findings show that a masked language model further pretrained on a mixture of source- and target-domain posts gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received considerable attention because it is applicable when labeled data are scarce. A few works have already applied self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance because it exploits extracted context words during training. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework first classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on a combination of the labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and with optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
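The abstract lists several distance measures (L2, cosine, MMD, FLD, CORAL) for estimating cross-domain similarity. As a hedged illustration only, and not the authors' implementation, the sketch below computes a cosine distance and a linear-kernel Maximum Mean Discrepancy between two sets of document embeddings; the embedding arrays are random placeholders standing in for mean-pooled BERT vectors of Twitter and Gab posts.

```python
import numpy as np

def cosine_distance(x_mean, y_mean):
    """Cosine distance between the mean embeddings of two domains."""
    sim = x_mean @ y_mean / (np.linalg.norm(x_mean) * np.linalg.norm(y_mean))
    return 1.0 - sim

def linear_mmd(X, Y):
    """Maximum Mean Discrepancy with a linear kernel: ||mean(X) - mean(Y)||^2."""
    delta = X.mean(axis=0) - Y.mean(axis=0)
    return float(delta @ delta)

# Hypothetical document embeddings (placeholders for mean-pooled BERT vectors)
rng = np.random.default_rng(0)
twitter_emb = rng.normal(size=(1000, 768))          # source domain
gab_emb = rng.normal(loc=0.1, size=(800, 768))      # target domain

print("cosine distance:", cosine_distance(twitter_emb.mean(axis=0), gab_emb.mean(axis=0)))
print("linear-kernel MMD:", linear_mmd(twitter_emb, gab_emb))
```

A smaller distance would suggest closer domains and, per the abstract, better out-of-domain transfer of the hate classifier.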
Procedia PDF Downloads 106
935 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students
Authors: Stephanie Jo Marchese
Abstract:
This oral presentation is a reflective piece on current social work teaching methods that value and devalue the lived experiences of survivor students. It grounds the term ‘survivor’ in feminist frameworks. A survivor-defined approach to feminist advocacy assumes an individual’s agency, considers each case and its needs independently of generalizations, and provides resources and support to empower victims. Feminist ideologies are ripe arenas for updating and influencing the rapport that schools of social work build with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experiences to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks open barriers many survivor students, this author included, have faced in institutional settings. The profession of social work is at an important crux of change, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering this field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, and queer theory, among other post-structuralist ideologies. Traditional models of sociological and psychological study are actively being challenged. The profession of social work was not founded on the diagnosis of disorders but rather on grassroots-level activism that heralded and demanded resources for oppressed communities. Institutional and classroom acceptance and celebration of survivor narratives can catapult the resurgence of these values in the profession’s service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely former victims and clients. The ideas of this presentation proposal are supported through qualitative interviews as well as reviews of ‘best practices’ in education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered.
Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods
Procedia PDF Downloads 289
934 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach
Authors: Sina Kazemi, Farshid Torabi, Todd Peterson
Abstract:
Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipes or the hole packing off when the cutting removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well’s integrity and result in a lower rate of penetration (ROP). This study presents collective results of field studies and theoretical analysis conducted on data from South Pars and North Dome in an Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed by systematically analyzing and applying modeling techniques to selected field mud-logging data. Field measurements during actual drilling operations indicate that in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, hole-cleaning performance and ROP progressively change when higher string rotations are initiated. Likewise, it was observed that this effect minimized rotational torque and improved well integrity in the subsequent casing running. Given similar geologic conditions and drilling operations in reservoirs targeting shale as the producing zone, such as the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill-bench dynamic modeling simulation was used to simulate borehole-cleaning efficiency and mud optimization. The results obtained by altering RPM (string revolutions per minute) at the same pump rate and optimized mud properties exhibit a positive correlation with field measurements. The field investigation and developed model show that increasing the speed of string revolution, as far as geomechanics and drill bit conditions permit, can minimize the risk of mechanically stuck pipes while reaching a higher than expected ROP in shale formations. Based on the modeling and field data analysis, optimized drilling parameters and hole-cleaning procedures are suggested for minimizing the risk of the hole packing off and enhancing well integrity in shale reservoirs. While optimization of ROP at a lower pump rate maintains wellbore stability, it also saves time for the operator and reduces carbon emissions and fatigue of mud motors and power-supply engines.
Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity
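The abstract reports a positive correlation between string RPM and ROP in the field data. The snippet below is a minimal, illustrative sketch only, assuming hypothetical mud-logging records rather than the study's drill-bench simulator, of how such a correlation and a simple linear fit could be quantified.

```python
import numpy as np

# Hypothetical mud-logging records: string speed (RPM) and rate of penetration (m/h)
rpm = np.array([80, 100, 120, 140, 160, 180])
rop = np.array([6.1, 7.0, 8.4, 9.1, 10.3, 10.9])

# Pearson correlation between RPM and ROP
r = np.corrcoef(rpm, rop)[0, 1]

# Simple linear fit: ROP ~ a * RPM + b
a, b = np.polyfit(rpm, rop, deg=1)

print(f"Pearson r = {r:.2f}")
print(f"ROP ~ {a:.3f} * RPM + {b:.2f}")
```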
Procedia PDF Downloads 86
933 Collaboration between Dietician and Occupational Therapist Promotes Independent Functional Eating in the Tube Weaning Process of Mechanically Ventilated Patients
Authors: Inbal Zuriely, Yonit Weiss, Hilla Zaharoni, Hadas Lewkowicz, Tatiana Vander, Tarif Bader
Abstract:
Early active movement, along with optimal nutritional adjustment, prevents aggravation of muscle degeneracy and functional decline. Eating is a basic activity of daily life that reflects the patient's independence. When eating and feeding are experienced successfully, they lead to a sense of pleasure and satisfaction; when they are experienced as a difficulty, they might evoke feelings of helplessness and frustration. This underscores the essential process of gradual weaning off the enteral feeding tube. This work describes the collaboration of a dietician, who determines the nutritional needs of patients undergoing enteral tube weaning as part of the rehabilitation process, with the tailored treatment of an occupational therapist. Occupational therapy intervention regarding eating capabilities focuses on improving the required motor and cognitive components, along with environmental adjustments and aids, imparting eating strategies, and training patients and their families. The project was conducted in the long-term ventilated patients’ department at the Herzfeld Rehabilitation Geriatric Medical Center on patients undergoing enteral tube weaning with the staff’s assistance. Continuous collaboration between the dietician and the occupational therapist was established from the beginning of the feeding-tube weaning process: 1. The dietician updates the occupational therapist about the start of the process and the approved diet. 2. The occupational therapist performs cognitive, motor, and functional assessments and treatments regarding the patient’s eating capabilities and recommends the adjustments required for independent eating according to the FIM (Functional Independence Measure) scale. 3. The occupational therapist closely follows up on the patient’s degree of independence in eating and provides repeated updates to the dietician. 4. The dietician accordingly guides the ward staff on whether and how to feed the patient or allow independent eating. The project aimed to promote patients toward independent feeding, which leads to a sense of empowerment, enjoyment of the eating experience, and progress in functional ability, along with performing active movements that motivate mobilization. From the beginning of 2022, 26 patients participated in the project. 79% of all patients who started the weaning process from tube feeding achieved some level of independence in feeding (independence levels ranged from supervision (FIM-5) to complete independence (FIM-7)). The integration of occupational therapy and dietary treatment is based on a patient-centered approach that considers the patient’s personal needs, preferences, and goals. This interdisciplinary partnership is essential for meeting the complex needs of prolonged mechanically ventilated patients and promotes independent functioning and quality of life.
Keywords: dietary, mechanical ventilation, occupational therapy, tube feeding weaning
Procedia PDF Downloads 78
932 Crop Breeding for Low Input Farming Systems and Appropriate Breeding Strategies
Authors: Baye Berihun Getahun, Mulugeta Atnaf Tiruneh, Richard G. F. Visser
Abstract:
Resource-poor farmers practice low-input farming systems, and yet most breeding programs give little attention to this huge farming system, which serves as a source of food and income for many people in developing countries. The high-input conventional breeding system appears to have failed to adequately meet the needs and requirements of 'difficult' environments operating under this system. Moreover, the resources required for crop production are becoming increasingly scarce, the environment is maltreated by excessive use of agrochemicals, crop productivity has reached a plateau, particularly in developed nations, the world population is increasing, and food shortages persist for poor societies. In various parts of the world, genetic gain at the farmers' level remains low, which could be associated with low adoption of crop varieties developed under high-input systems. Farmers usually use their local varieties and apply minimum inputs as a risk-avoiding and cost-minimizing strategy. This evidence indicates that the conventional high-input plant breeding system has failed to feed the world population, and the world is moving further away from the United Nations' goals of ending hunger, food insecurity, and malnutrition. In this review, we discuss the rationale for focused breeding programs for low-input farming systems and the technical aspects of crop breeding that accommodate future food needs, as well as their significance for developing countries given the decreasing resources available for crop production. To this end, the application of exotic introgression techniques such as polyploidization, pan-genomics, comparative genomics, and de novo domestication as pre-breeding techniques is discussed to exploit the untapped genetic diversity of crop wild relatives (CWRs). Desired recombinants developed at the pre-breeding stage are exploited through appropriate breeding approaches such as evolutionary plant breeding (EPB), breeding for rhizosphere-related traits, and participatory plant breeding. Populations advanced through evolutionary breeding, such as composite cross populations (CCPs), and rhizosphere-associated trait breeding approaches that provide opportunities for improving tolerance to abiotic and biotic soil stresses, nutrient acquisition capacity, and crop-microbe interaction in improved varieties are reviewed. Overall, we conclude that the low-input farming system is a huge farming system that requires distinctive breeding approaches, and that exotic pre-breeding introgression techniques and appropriate breeding approaches that deploy the skills and knowledge of both breeders and farmers are vital to develop heterogeneous landrace populations, which are effective for farmers practicing low-input farming across the world.
Keywords: low input farming, evolutionary plant breeding, composite cross population, participatory plant breeding
Procedia PDF Downloads 52
931 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFlux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSE of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
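A minimal sketch of the two-layer stacking architecture described above, with synthetic data standing in for the OzFlux tower records: five feedforward networks form the first layer and XGBoost maps their outputs to the final CO₂ flux estimate. Layer sizes and hyperparameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Hypothetical meteorological drivers (X) and CO2 flux (y); the real study used OzFlux tower data
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 6))                      # e.g., radiation, temperature, VPD, soil moisture, ...
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# First layer: five feedforward neural networks with different structures
ffnn_layers = [(32,), (64,), (32, 16), (64, 32), (128, 64)]
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=500, random_state=0).fit(X_train, y_train)
         for h in ffnn_layers]

# Second layer: XGBoost takes the FFNN outputs as its input
meta_train = np.column_stack([m.predict(X_train) for m in ffnns])
meta_test = np.column_stack([m.predict(X_test) for m in ffnns])
xgb = XGBRegressor(n_estimators=300, learning_rate=0.05).fit(meta_train, y_train)

pred = xgb.predict(meta_test)
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
print("ensemble RMSE:", rmse)
```

In practice the meta-learner would usually be trained on out-of-fold first-layer predictions to avoid leakage; the direct fit here keeps the sketch short.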
Procedia PDF Downloads 139
930 Poland and the Dawn of the Right to Education and Development: Moving Back in Time
Authors: Magdalena Zabrocka
Abstract:
The terrorising of women under the governance of Poland's current populist ruling party, PiS, has been the subject of heated debate, alongside the issues of minority rights, the rule of law, and democracy in the country. The challenges that women and other vulnerable groups currently face, however, come down to more than just a lack of comprehensive equality laws, severely limited reproductive rights, hateful slogans and messages propagated by the central authority and its sympathisers, or a common disregard for women's fundamental rights. Many sources and media reports are available only in Polish, while international rapporteurs fail to acknowledge the whole picture of the tragedy happening in the country and the variety of factors affecting it. It began with the authorities' and the Polish Catholic church's propaganda concerning CEDAW and the Istanbul Convention on preventing and combating violence against women and domestic violence, spreading strategic disinformation that they codify 'gender ideology' and 'anti-Christian values' in order to convince the electorate that these legal instruments should be 'abandoned'. Alongside severely restricting abortion rights, bullying medical professionals who help women exercise their reproductive rights, violating women's privacy by introducing a mandatory registry of pregnancies (so that a pregnancy or its 'loss' can be tracked and traced), restricting access to the 'day after pill' and to real sex education at schools (most schools offer a subject called 'knowledge of living in a family'), introducing prison sentences for teachers accused of spreading 'sex education', and many other measures, the current government has now decided to target the youngest with its misinformation and indoctrination via strategically designed textbooks and curricula. Biology books have seen a significant reduction in the size of the chapters devoted to evolution, the reproductive system, and sexual health. Approved religion books (taught 2-3 times a week, compared with once a week for the sciences) now contain false information about Darwin's theory and arguments 'against it'. Most recently, however, the public has spoken up against the absurd messages contained in politically rewritten history books, where material about figures not liked by the governing party has already been manipulated. In the recently approved changes to the history textbook, one can find a variety of strongly biased and politically charged views representative of conservatives in the States, most notably equating 'gender ideology' and feminism with Nazism. Thus, this work, employing a human rights approach, focuses on the right to education and development as well as the considerable obstacles to the youth's access to scientific information.
Keywords: Poland, right to education, right to development, authoritarianism, access to information
Procedia PDF Downloads 105
929 Study of the Impact of Quality Management System on Chinese Baby Dairy Product Industries
Authors: Qingxin Chen, Liben Jiang, Andrew Smith, Karim Hadjri
Abstract:
Since 2007, the Chinese food industry has experienced serious food contamination in the baby dairy sector, especially milk powder contamination. One milk powder product was found to contain melamine, and a significant number (294,000) of babies were affected by kidney stones. Due to growing consumer concern about food safety and protection, and high pressure from central government, companies must take radical action to ensure food quality through the use of an appropriate quality management system. Although researchers have previously investigated the health and safety aspects of food industries and products, quality issues concerning food products in China have been largely overlooked, and issues associated with baby dairy products have not been discussed in depth. This paper investigates the impact of quality management systems on the Chinese baby dairy product industry. A literature review was carried out to analyse the use of quality management systems within the Chinese milk powder market. Moreover, quality concepts, relevant standards, laws, regulations, and specific issues (such as melamine and aflatoxin M1 contamination) have been analysed in detail. A qualitative research approach is employed, whereby preliminary analysis was conducted by interview, and data analysis was based on interview responses from four selected Chinese baby dairy product companies. The analysis of the literature and the findings reveals that many theories, models, conceptualisations, and systems for quality management have been designed by practitioners. These standards and procedures should be followed in order to provide quality products to consumers, but implementation is lacking in the Chinese baby dairy industry. Quality management systems have been applied by the selected companies, but their implementation still needs improvement. For instance, the companies have to take measures to improve their processes and procedures in line with relevant standards, and the government needs to intervene more and take a greater supervisory role in the production process. In general, this research presents implications for the regulatory bodies, the Chinese government, and dairy food companies. Food safety laws exist in China, but they have not been widely practiced by companies. Regulatory bodies must take a greater role in ensuring compliance with laws and regulations. The Chinese government must also play a special role in urging companies to implement relevant quality control processes. The baby dairy companies not only have to accept interventions from the regulatory bodies and government, but also need to ensure that production, storage, distribution, and other processes follow the relevant rules and standards.
Keywords: baby dairy product, food quality, milk powder contamination, quality management system
Procedia PDF Downloads 473
928 Life at the Fence: Lived Experiences of Navigating Cultural and Social Complexities among South Sudanese Refugees in Australia
Authors: Sabitra Kaphle, Rebecca Fanany, Jenny Kelly
Abstract:
Australia welcomes significant numbers of humanitarian arrivals every year with the commitment to provide equal opportunities and the resources required for integration into the new society. Over the last two decades, more than 24,000 South Sudanese people have come to call Australia home. Most of these refugees experienced several challenges while settling into the new social structures and service systems in Australia. The aim of the research is to explore the factors influencing the social and cultural integration of South Sudanese refugees who have settled in Australia. Methodology: This study used a phenomenological approach based on in-depth interviews designed to elicit the lived experiences of South Sudanese refugees settled in Australia. It applied the principles of narrative ethnography, allowing participants an opportunity to speak about themselves and their experiences of social and cultural integration using their own words. Twenty-six participants were recruited to the study. Participants were long-term residents (over 10 years of settlement experience) who self-identified as refugees from South Sudan. Participants were given an opportunity to speak in the language of their choice, and interviews were conducted by a bilingual interviewer in their preferred language, time, and location. Interviews were recorded, transcribed verbatim, and translated to English for thematic analysis. Findings: Participants’ experiences portray the complexities of integrating into a new society due to the daily challenges that South Sudanese refugees face. Themes emerging from the narratives indicated that South Sudanese refugees express a high level of association with a Sudanese identity while demonstrating a significant level of integration into Australian society. Despite this identity dilemma, these refugees show a high level of consensus about the experiences of living in Australia that is closely associated with a group identity. In the process of maintaining identity and social affiliation, there are significant inter-generational cultural conflicts that participants experience in adapting to Australian society. Identity conflict often emerges centering on what constitutes authentic cultural practice as well as who is entitled to claim membership of the South Sudanese culture. Conclusions: Results of this study suggest that the cultural identity and social affiliations of South Sudanese refugees settling into Australian society are complex and multifaceted. While there are positive elements of their integration into the new society, inter-generational conflicts and identity confusion require further investigation to understand the context that will assist refugees to integrate more successfully into their new society. Given the length of stay of these refugees in Australia, government and settlement agencies may benefit from developing appropriate resources and processes that are adaptive to the social and cultural context in which newly arrived refugees will live.
Keywords: cultural integration, inter-generational conflict, lived experiences, refugees, South Sudanese
Procedia PDF Downloads 115
927 Climate Change Threats to UNESCO-Designated World Heritage Sites: Empirical Evidence from Konso Cultural Landscape, Ethiopia
Authors: Yimer Mohammed Assen, Abiyot Legesse Kura, Engida Esyas Dube, Asebe Regassa Debelo, Girma Kelboro Mensuro, Lete Bekele Gure
Abstract:
Climate change has recently posed severe threats to many cultural landscapes of UNESCO World Heritage sites. The UNESCO State of Conservation (SOC) reports categorize flooding, temperature increase, and drought as threats to cultural landscapes. This study aimed to examine variations and trends of rainfall and temperature extreme events and their threats to the UNESCO-designated Konso Cultural Landscape in southern Ethiopia. The study used dense merged satellite-gauge station rainfall data (1981-2020) with a spatial resolution of 4 km by 4 km and observed maximum and minimum temperature data (1987-2020). Qualitative data were also gathered from cultural leaders, local administrators, and religious leaders using structured interview checklists. The spatial patterns, coefficient of variation, standardized anomalies, trends, and magnitude of change of rainfall and temperature extreme events at both annual and seasonal levels were computed using the Mann-Kendall trend test and Sen’s slope estimator under the CDT package. The standardized precipitation index (SPI) was used to calculate drought severity, frequency, and trend maps. The data gathered from key informant interviews and focus group discussions were coded and analyzed thematically to complement the statistical findings. Thematic areas that explain the impacts of extreme events on the cultural landscape were chosen for coding, and the thematic analysis was conducted using NVivo software. The findings revealed that rainfall was highly variable and unpredictable, resulting in extreme drought and flooding. There were significant (P<0.05) increasing trends of heavy rainfall (R10mm and R20mm) and the total amount of rain on wet days (PRCPTOT), which might have resulted in flooding. The study also confirmed that absolute temperature extreme indices (TXx, TXn, and TNx) and percentile-based temperature extreme indices (TX90p, TN90p, TX10p, and TN10p) showed significant (P<0.05) increasing trends, which are signals of warming in the study area. The results revealed that the frequency and severity of drought at the 3-month timescale (katana/hageya seasons) were more pronounced than at the 12-month (annual) timescale. The highest number of droughts in 100 years is projected at the 3-month timescale across the study area. The findings also showed that frequent drought has led to loss of the grasses used for making traditional individual houses and multipurpose communal houses (pafta), food insecurity, migration, loss of biodiversity, and commodification of stones from terraces. On the other hand, the increasing trends of rainfall extreme indices resulted in destruction of terraces, soil erosion, loss of life, and damage to property. The study shows that a persistent decline in farmland productivity, due to erratic and extreme rainfall and frequent drought occurrences, forced the local people to participate in non-farm activities and retreat from daily preservation and management of their landscape. Overall, the increasing rainfall and temperature extremes coupled with the prevalence of drought are thought to impact the sustainability of the cultural landscape by disrupting ecosystem services and the livelihood of the community. Therefore, more localized adaptation and mitigation strategies for the changing climate are needed to maintain the sustainability of the Konso cultural landscape as a global cultural treasure and to strengthen the resilience of smallholder farmers.
Keywords: adaptation, cultural landscape, drought, extremes indices
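The trend analysis above relies on the Mann-Kendall test and Sen's slope estimator. The sketch below is a self-contained, simplified version (no tie correction, and not the CDT package itself) applied to a hypothetical annual R10mm series.

```python
import itertools
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(x):
    """Mann-Kendall trend test (no tie correction) plus Sen's slope for a yearly series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pairs = list(itertools.combinations(range(n), 2))
    # S statistic: sum of signs of all pairwise differences
    s = sum(np.sign(x[j] - x[i]) for i, j in pairs)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0              # variance of S without ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                         # two-sided p-value
    sen_slope = np.median([(x[j] - x[i]) / (j - i) for i, j in pairs])
    return s, z, p, sen_slope

# Hypothetical annual heavy-rainfall index (R10mm, days per year), 1981-2020
rng = np.random.default_rng(1)
r10mm = 20 + 0.15 * np.arange(40) + rng.normal(scale=2.0, size=40)

s, z, p, slope = mann_kendall_sen(r10mm)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.3f}, Sen's slope={slope:.2f} days/yr")
```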
Procedia PDF Downloads 26
926 Family Carers' Experiences in Striving for Medical Care and Finding Their Solutions for Family Members with Mental Illnesses
Authors: Yu-Yu Wang, Shih-Hua Hsieh, Ru-Shian Hsieh
Abstract:
Wishes and choices being respected, and the right to be supported rather than coerced, have been internationally recognized as human rights of persons with mental illness. In Taiwan, ‘coerced hospitalization’ has become difficult since the revision of the mental health legislation in 2007. Despite the trend towards human rights, the real problem families face when their family members are in mental health crisis is the lack of alternative services. This study aims to explore: 1) When is hospitalization seen as the only solution by family members? 2) What are the barriers to arranging hospitalization, and how are they managed? 3) What have family carers learned from their experiences of caring for their family members with mental illness? To answer these questions, a qualitative approach was adopted, and focus group interviews were conducted to collect data. The study includes 24 family carers. The main findings are as follows. First, hospitalization is the last resort for carers in a state of helplessness. Family carers tend to do everything they can to provide care at home for their family members with mental illness. Carers seek hospitalization only when a patient’s behavior is too violent, strange, and/or abnormal, and beyond their ability to manage. Hospitalization, nevertheless, is never an easy choice. Obstacles emanate from the attitudes of medical doctors, the restricted service areas of ambulances, and insufficient information on the carers’ part. On the other hand, with some professionals’ proactive assistance, access to medical care while in crisis becomes possible. Some family carers obtained help from medical doctors, nurses, therapists, and social workers; some experienced good help from policemen, taxi drivers, and security guards at the hospital. The difficulty in accessing medical care prompts carers to work harder on helping their family members with mental illness stay in stable states. Carers found different ways of helping the ‘person’ to get along with the ‘illness’ and have a better quality of life. Taking back ‘the right to control’ in utilizing medication, moving from passiveness to negotiating with medical doctors and seeking alternative therapies, is seen in many carers’ efforts. Besides, trying to maintain regular activities in daily life and to play normal family roles is also experienced as important, as is talking with the patient as a person. The authors conclude that in order to protect the human rights of persons with mental illness, it is crucial to make the medical care system more flexible and the services more humane: sufficient information should be provided and communicated, and efforts should be made to maintain the person’s social roles and to support the family.
Keywords: family carers, independent living, mental health crisis, persons with mental illness
Procedia PDF Downloads 306
925 Spatial Conceptualization in French and Italian Speakers: A Contrastive Approach in the Context of the Linguistic Relativity Theory
Authors: Camilla Simoncelli
Abstract:
The connection between language and cognition has been one of the main interests of linguistics for several years. According to the Sapir-Whorf linguistic relativity theory, the way we perceive reality depends on the language we speak, which in turn has a central role in human cognition. This paper is in line with this research tradition, with the aim of analyzing how language structures reflect our cognitive abilities even in the description of space, which is generally considered a natural and universal human domain. The main objective is to identify the differences in the encoding of spatial inclusion relationships between French and Italian speakers, in order to provide evidence that significant variation exists at various levels even in two similar systems. Starting from the constitution of a corpus, the first step of the study was to establish the relevant complex prepositions marking an inclusion relation in French and Italian: au centre de, au cœur de, au milieu de, au sein de, à l'intérieur de and the opposition entre/parmi in French; al centro di, al cuore di, nel mezzo di, in seno a, all'interno di and the fra/tra contrast in Italian. These prepositions were classified on the basis of the type of noun following them (e.g., mass nouns, concrete nouns, abstract nouns, body-part nouns, etc.), following the collostructional analysis of lexemes, in order to analyze the preferred construction of each preposition and compare the relations construed. Comparing the Italian and French results made it possible to define the degree of representativeness of each target noun for each preposition studied. Lexicostatistics and statistical association measures yielded values of attraction or repulsion between lexemes and a given preposition, highlighting which words are over-represented or under-represented in a specific context compared to the expected results. For instance, a noun such as Dibattiti has a negative value for the Italian al cuore di (-1.91) but a strongly positive representativeness for the corresponding French au cœur de (+677.76). The value, positive or negative, results from a hypergeometric distribution law, which reflects the current use of the relevant nouns in relations of spatial inclusion by French and Italian speakers. Differences in the kind of location conceptualization reveal syntactic and semantic constraints based on spatial features as well as on linguistic peculiarities. The aim of this paper is to demonstrate that the domain of spatial relations is basic to human experience and is linked to universally shared perceptual mechanisms that create mental representations depending on language use. Therefore, linguistic coding strongly correlates with the way spatial distinctions are conceptualized for non-verbal tasks, even in closely related language systems like Italian and French.
Keywords: cognitive semantics, cross-linguistic variations, locational terms, non-verbal spatial representations
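The attraction/repulsion values mentioned above come from a hypergeometric distribution, as in standard collostructional analysis. The following sketch shows one common way such a signed association score can be computed; the counts and the noun are hypothetical, not taken from the authors' corpus.

```python
import math
from scipy.stats import hypergeom

def collostruction_strength(cooc, noun_total, construction_total, corpus_size):
    """Signed -log10 p-value from the hypergeometric distribution.

    cooc              : times the noun occurs in the construction (e.g., after 'au coeur de')
    noun_total        : total corpus frequency of the noun
    construction_total: total frequency of the construction
    corpus_size       : total number of tokens (or construction slots) in the corpus
    Positive values indicate attraction (over-representation), negative values repulsion.
    """
    expected = noun_total * construction_total / corpus_size
    if cooc >= expected:   # attraction: P(X >= cooc)
        p = hypergeom.sf(cooc - 1, corpus_size, noun_total, construction_total)
        sign = 1.0
    else:                  # repulsion: P(X <= cooc)
        p = hypergeom.cdf(cooc, corpus_size, noun_total, construction_total)
        sign = -1.0
    return sign * -math.log10(max(p, 1e-300))

# Hypothetical counts for a noun with 'au coeur de' in a 200,000-token corpus
print(collostruction_strength(cooc=45, noun_total=320, construction_total=1500, corpus_size=200000))
```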
Procedia PDF Downloads 113
924 Effect of Varied Climate, Landuse and Human Activities on the Termite (Isoptera: Insecta) Diversity in Three Different Habitats of Shivamogga District, Karnataka, India
Authors: C. M. Kalleshwaraswamy, G. S. Sathisha, A. S. Vidyashree, H. B. Pavithra
Abstract:
Isoptera are an interesting group of social insects with different castes and division of labour. They are primarily wood-feeders but also feed on a variety of other organic substrates, such as living trees, leaf litter, soil, lichens, and animal faeces. The number of species and their biomass are especially large in the tropics. In natural ecosystems, they perform a beneficial role in nutrient cycles by accelerating decomposition. The magnitude and dimension of the ecological role played by termites is a function of their diversity, population density, and biomass. Termite assemblage composition responds strongly to habitat disturbance and may be indicative of quantitative changes in the decomposition process. Many previous studies in the Western Ghats region of India suggest that increased anthropogenic activities adversely affect soil macrofauna and diversity. Shivamogga district provides a good opportunity to study the effect of topography, cropping pattern, and human disturbance on the termite fauna, thereby acquiring accurate baseline information for conservation decision-making. The district has three distinct agro-ecological areas: the maidan area, semi-malnad, and the Western Ghats region. Thus, the district provides a unique opportunity to study the effect of varied climate and anthropogenic disturbance on termite diversity. The standard belt transect protocol developed by Eggleton et al. (1997) was used for sampling termites. Sampling was done at monthly intervals from September 2014 to August 2015 in the Western Ghats, semi-malnad, and maidan habitats. The transect was 100 m long and 2 m wide and divided into 20 contiguous sections, each 5 x 2 m, in each habitat. Within each section, all probable termite microhabitats were searched, including dead logs, fallen trees, branches, sticks, leaf litter, vegetation, etc. All castes collected were labelled, preserved in 80% alcohol, counted, and identified to species level. The number of encounters of a species in the transect was used as an indicator of the relative abundance of that species. Species diversity, species richness, and density were compared across the three habitats (Western Ghats, semi-malnad, and maidan). The study indicated differences in species composition among the three habitats. A total of 15 species were recorded, belonging to five genera in four subfamilies. Eleven species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. adampurensis, O. boveni, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes anamalaiensis, N. indicola, and N. krishna, were recorded in the Western Ghats. Similarly, 11 species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. hornii, O. bhagwathi, Microtermes obesi, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes indicola, and Pericapritermes sp., were recorded in the semi-malnad habitat. However, only four species, viz., O. obesus, O. feae, Microtermes obesi, and Pericapritermes sp., were recorded in the maidan area. The Shannon-Wiener diversity index (H) showed that the Western Ghats had the highest species diversity (1.56), followed by semi-malnad (1.36) and maidan (0.89). The highest value of Simpson's index (D) was observed in the Western Ghats habitat (0.70), with more diverse species, followed by semi-malnad (0.58) and maidan (0.53). Similarly, evenness was highest (0.65) in the Western Ghats, followed by maidan (0.64), and least in the semi-malnad habitat (0.54). Menhinick's index (Dmn) ranged from 0.03 to 0.06 across the habitats in the study area, with the highest value in the Western Ghats (0.06), followed by semi-malnad (0.05) and maidan (0.03). The study conclusively demonstrated that the Western Ghats had the highest species diversity compared with the semi-malnad and maidan habitats, indicating that the latter two habitats are continuously subjected to anthropogenic disturbance. Efforts are needed to conserve the uncommon species, which otherwise may become extinct due to human activities.
Keywords: anthropogenic disturbance, Isoptera, termite species diversity, Western Ghats
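The diversity measures reported above (Shannon-Wiener, Simpson, evenness, Menhinick) can be computed directly from per-species encounter counts. The sketch below uses hypothetical counts, not the study's transect data, to show the standard formulas.

```python
import numpy as np

def diversity_indices(counts):
    """Shannon-Wiener H', Simpson's 1-D, Pielou evenness, and Menhinick richness
    from a list of per-species abundance counts (number of encounters)."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    n = counts.sum()
    p = counts / n
    shannon = -np.sum(p * np.log(p))
    simpson = 1.0 - np.sum(p ** 2)           # probability two encounters are different species
    evenness = shannon / np.log(len(counts))
    menhinick = len(counts) / np.sqrt(n)
    return shannon, simpson, evenness, menhinick

# Hypothetical encounter counts for the 11 species recorded in the Western Ghats transects
western_ghats = [34, 21, 18, 15, 12, 9, 7, 6, 4, 3, 2]
h, d, j, dmn = diversity_indices(western_ghats)
print(f"H'={h:.2f}  Simpson={d:.2f}  Evenness={j:.2f}  Menhinick={dmn:.2f}")
```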
Procedia PDF Downloads 269
923 Density Functional Theory Study of the Surface Interactions between Sodium Carbonate Aerosols and Fission Products
Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez
Abstract:
The interaction of fission products (FP) with sodium carbonate (Na₂CO₃) aerosols is of high safety concern because of their potential role in radiological source term mitigation by FP trapping. In a sodium-cooled fast nuclear reactor (SFR) experiencing a severe accident, sodium (Na) aerosols can be formed after the ejection of the liquid Na coolant inside the containment. The surface interactions between these aerosols and different FP species have been investigated using ab initio density functional theory (DFT) calculations with the Vienna Ab initio Simulation Package (VASP). In addition, an improved thermodynamic model has been proposed to treat the DFT-VASP calculated energies and extrapolate them to the temperatures and pressures of interest in our study. A combined experimental and theoretical chemistry study has been carried out to gain both an atomistic and a macroscopic understanding of the chemical processes; the theoretical chemistry part of this approach is presented in this paper. The Perdew-Burke-Ernzerhof functional was applied in combination with Grimme's van der Waals correction to compute the exchange-correlation energy at 0 K. Seven different surface cleavages of the γ-Na₂CO₃ phase (stable at 603.15 K) were studied; it was found that, for defect-free surfaces, the (001) facet is the most stable. Furthermore, calculations were performed to study surface defects and reconstructions on the ideal surface; all the studied surface defects were found to be less stable than the ideal surface. More than one adsorbate configuration was found to be stable, confirming that FP vapors could be trapped on various adsorption sites. The calculated adsorption energies (Eads, eV) for the three most stable adsorption sites for I₂ are -1.33, -1.088, and -1.085. Moreover, the adsorption of the first molecule of I₂ changes the surface in a way that favors stronger adsorption of a second molecule of I₂ (Eads, eV = -1.261). For HI adsorption, the most favored reactions have Eads (eV) of -1.982, -1.790, and -1.683, implying that HI would be more reactive than I₂. In addition to FP species, the adsorption of H₂O was also studied, as a hydrated surface can have a different reactivity than the bare surface; one thermodynamically favored site for H₂O adsorption was found, with an Eads (eV) of -0.754. Finally, the calculations on hydrated Na₂CO₃ surfaces show that a layer of water adsorbed on the surface significantly reduces its affinity for iodine (Eads, eV = -1.066). According to the thermodynamic model built, the partial pressure required at 373 K for adsorption of the first layer of iodine is 4.57×10⁻⁴ bar. The second layer will be adsorbed at partial pressures higher than 8.56×10⁻⁶ bar; a layer of water on the surface increases this pressure almost tenfold, to 3.71×10⁻³ bar. The surface interacts with elemental Cs with an Eads (eV) of -1.60, while it interacts even more strongly with CsI, with an Eads (eV) of -2.39. More results on the interactions between Na₂CO₃ (001) and cesium-based FP will also be presented in this paper.
Keywords: iodine uptake, sodium carbonate surface, sodium-cooled fast nuclear reactor, DFT calculations, fission products
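Adsorption energies such as those quoted above are conventionally obtained as total-energy differences between the adsorbed system, the clean slab, and the isolated molecule. The sketch below illustrates that bookkeeping with placeholder energies; the values are not the study's VASP outputs.

```python
def adsorption_energy(e_slab_adsorbate, e_slab, e_molecule):
    """E_ads = E(slab + adsorbate) - E(slab) - E(isolated molecule), in eV.
    Negative values indicate exothermic (favorable) adsorption."""
    return e_slab_adsorbate - e_slab - e_molecule

# Hypothetical DFT total energies (eV) for I2 on the Na2CO3 (001) slab
e_total = -512.84   # slab with adsorbed I2
e_slab = -508.21    # clean (001) slab
e_i2 = -3.30        # isolated I2 molecule in a large box

print(f"E_ads = {adsorption_energy(e_total, e_slab, e_i2):.2f} eV")
```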
Procedia PDF Downloads 151
922 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated because of health and safety concerns. This can be done with numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition, and three deposition models were implemented. The time-dependent concentrations of airborne particles predicted by the model were compared with experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm aerodynamic diameter were generated with a nebulizer under two air change per hour (ACH) conditions. The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until reaching the steady-state condition (emission period); generation was then stopped, and concentration measurements continued until reaching the background concentration (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36)×10⁻² m/s and (8.88 ± 0.38)×10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model; however, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data, but the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate predicted from the deposition mechanisms considered in the model was about ten times lower than the experimental value. Thus, particle deposition is significant and will affect the airborne concentration in occupational settings, and it should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
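The modified WMR model described above adds a first-order deposition loss to the standard mass balance, dC/dt = S/V - (ACH + beta) * C. The following sketch solves this balance analytically for an emission and a decay phase; the emission rate and the lumped deposition rate beta are illustrative assumptions (beta would normally be built from the settling, Brownian, and turbulent deposition velocities and the chamber surface-to-volume ratio).

```python
import numpy as np

def wmr_concentration(t_h, S, V, ach, beta, c0=0.0):
    """Well-mixed room concentration with a first-order deposition loss term.

    dC/dt = S/V - (ach + beta) * C
    t_h  : time in hours, S : emission rate (particles/h), V : room volume (m^3)
    ach  : air changes per hour (1/h), beta : deposition loss rate (1/h), c0 : initial concentration
    """
    k = ach + beta                      # total first-order removal rate (1/h)
    c_ss = S / (V * k)                  # steady-state concentration
    return c_ss + (c0 - c_ss) * np.exp(-k * t_h)

# Hypothetical chamber scenario (volume from the abstract, other values illustrative)
V, ach, beta = 0.512, 1.4, 0.6          # m^3, 1/h, 1/h
S = 1.0e7                               # particles per hour emitted by the nebulizer
t = np.linspace(0, 3, 7)                # hours

c_emission = wmr_concentration(t, S, V, ach, beta)                  # emission period
c_decay = wmr_concentration(t, 0.0, V, ach, beta, c0=c_emission[-1])  # decay period
print(np.round(c_emission, 0))
print(np.round(c_decay, 0))
```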
Procedia PDF Downloads 103
921 Construction and Analysis of Tamazight (Berber) Text Corpus
Authors: Zayd Khayi
Abstract:
This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, even if only in a small way, we constructed a diachronic corpus of the Tamazight language and developed a program tool for it. This work is devoted to building that tool to analyze different aspects of Tamazight and its dialects used in North Africa, specifically in Morocco, focusing on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin-script version was a good choice because of the many sources available in it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts covering a long historical period; it is free and will be useful for further investigations. The texts were transformed into XML format with standardization as the goal. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, an original user interface and software prototype were developed by combining web design technologies and Python. The tool provides users with the ability to distinguish easily between feminine and masculine nouns and verbs. The interface supports three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this classification was done manually. Within corpus linguistics, there is currently no commonly accepted approach to the classification of texts; here, texts are divided into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. The search function can retrieve the types of words a user would search for, such as feminine/masculine nouns and verbs. Gender in the corpus has two forms: the neutral form of a word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i', e.g., Asklu (tree), udi (cheese), ighef (head). Verbs in the corpus are in the first person singular and plural and carry the suffixes 'agh', 'ex', 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following analyses of the corpus: listing all tokens, listing unique words, computing lexical diversity, and executing different grammatical queries. To conclude, this corpus has so far focused on only a small group of parts of speech in the Tamazight language (verbs and nouns); work on adjectives, pronouns, adverbs, and others is still in progress.
Keywords: Tamazight (Berber) language, corpus linguistics, grammar rules, statistical methods
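As a rough illustration of the kind of processing the tool performs (tokenisation, lexical diversity, and the gender affix rule described above), here is a minimal sketch; the sample text and the heuristic are simplified assumptions, not the actual corpus or program code.

```python
import re
from collections import Counter

def tokenize(text):
    """Split a Latin-script Tamazight text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def lexical_diversity(tokens):
    """Type-token ratio: unique words divided by total words."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def looks_feminine(noun):
    """Heuristic from the affix rule described in the abstract:
    feminine nouns carry the prefix t- and the suffix -t (or -a)."""
    return noun.startswith("t") and (noun.endswith("t") or noun.endswith("a"))

# Tiny illustrative sample (not from the actual corpus)
sample = "Tarbat tamtut taxamt asklu udi ighef tasa tawja"
tokens = tokenize(sample)
print(Counter(tokens).most_common(3))
print("lexical diversity:", round(lexical_diversity(tokens), 2))
print({w: looks_feminine(w) for w in tokens})
```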
Procedia PDF Downloads 66
920 From Design, Experience and Play Framework to Common Design Thinking Tools: Using Serious Modern Board Games
Authors: Micael Sousa
Abstract:
Board games (BGs) are thriving as new designs emerge from the hobby community and reach wider audiences all around the world. Although digital games gather most of the attention in the game studies and serious games research fields, the post-digital movement helps to explain why, in a world dominated by digital technologies, analog experiences remain unique and irreplaceable to users, allowing innovation in new hybrid environments. These new BG designs are part of the post-digital and hybrid movements because they result from the use of powerful digital tools that enable production and knowledge sharing about BGs and their unique face-to-face social experiences. These new BGs, defined as modern by many authors, provide innovative designs and unique game mechanics that are still not fully explored by the main serious games (SG) approaches. Even the most established frameworks for SGs (fun games implemented to achieve predefined goals) need further development, especially when considering modern BGs. Despite many anecdotal perceptions, researchers are only now starting to rediscover BGs and demonstrate their potential, proving that BGs are easy for non-expert players to grasp and adapt in experimental approaches, with the possibility of straightforward adaptation to players' profiles and serious objectives even during gameplay. Although there are many design thinking (DT) models and practices, their relations with SG frameworks are also underdeveloped, mostly because this is a new research field lacking theoretical development and systematization of experimental practices. Using BGs as case studies promises to help develop these frameworks. Departing from the Design, Experience, and Play (DPE) framework and considering Common Design Thinking Tools (CDST), this paper proposes a new experimental framework for the adaptation and development of modern BG design for DT: the Design, Experience, and Play for Think (DPET) experimental framework. This is done through the systematization of the DPE and CDST approaches applied in two case studies, where two different sequences of adapted BGs were employed to establish a collaborative DT process. These two sessions occurred with different participants and in different contexts, also using different sequences of games for the same DT approach. The first session took place at the Faculty of Economics of the University of Coimbra in a training session on serious games for project development; the second took place at Casa do Impacto through The Great Village Design Jam light. Both sessions had the same duration and were designed to progressively achieve DT goals, using BGs as SGs in a collaborative process. The results from the sessions show that a sequence of BGs, when properly adapted to address the DPET framework, can generate a viable and innovative process of collaborative DT that is productive, fun, and engaging. The proposed DPET framework intends to help establish how new SG solutions could be defined for new goals through flexible DT. Applications in other areas of research and development can also benefit from these findings.
Keywords: board games, design thinking, methodology, serious games
Procedia PDF Downloads 112
919 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil
Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee
Abstract:
Graphene is an extraordinary 2D material that shows superior electrical, optical, and mechanical properties for applications such as transparent contacts. Further, the chemical vapor deposition (CVD) technique facilitates the synthesis of large-area, transferable graphene. This work describes the use of Cu foil with high surface free energy (SFE) and a high density of nano-scale surface kinks (roughness) for CVD graphene growth. This is the opposite of the modern use of smooth catalytic surfaces for high-quality graphene growth, but the controllable rough morphology opens the way to fast synthesis (less than 50 s, with a short annealing step) of graphene as a continuous film, compared with the conventional longer process (30 min growth). The experiments showed that a high-SFE catalytic surface with kinks on the Cu(100) crystal plane facilitated the synthesis of graphene with a high monolayer ratio and continuous coverage, because it promotes adsorption of C species at high concentration, which in turn leads to faster nucleation and growth of graphene. Fast nucleation and growth lower the diffusion of C atoms to the Cu-graphene interface, resulting in no or negligible formation of bilayer patches. High-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form rough Cu foil with high SFE (54.92 mJ m⁻²). Graphene was then grown on this surface by CVD at 1000 °C for 50 s. The introduced kink-like high-SFE sites on the Cu(100) crystal plane led to faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared with smoother, lower-SFE Cu surfaces, such as the surface prepared by redeposition of evaporated Cu atoms during annealing (RRMS of 13.3 nm). Although the high-SFE condition favored the synthesis of monolayer, continuous graphene, it failed to maintain a clean (the surface contains amorphous C clusters) and defect-free condition (ID/IG of 0.46) because of the high SFE of the Cu foil at the growth stage. A post-annealing process was used to heal these defects and overcome the problems mentioned above. Different annealing atmospheres such as CH₄ and H₂ were used; the change in graphene nature (number of layers and continuity) was negligible, but the difference in graphene quality was significant, as the ID/IG ratio was reduced to 0.21 after post-annealing in H₂. In addition to the reduced defectiveness, FE-SEM images showed a reduction of C-cluster contamination on the surface. High-SFE conditions are favorable for forming graphene as a continuous monolayer film but do not by themselves provide defect-free graphene. Further, a plasma-modified, high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can be used to reduce defectiveness.
Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy
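The I2D/IG and ID/IG values quoted above are Raman intensity ratios. As a hedged sketch, assuming a synthetic spectrum rather than the measured data, the code below shows how such ratios can be read off from band intensities.

```python
import numpy as np

def raman_ratios(shift_cm, intensity):
    """Estimate ID/IG and I2D/IG from a Raman spectrum by taking the maximum
    intensity inside windows around the D (~1350), G (~1580) and 2D (~2700 cm^-1) bands."""
    def peak(lo, hi):
        mask = (shift_cm >= lo) & (shift_cm <= hi)
        return intensity[mask].max()
    i_d, i_g, i_2d = peak(1300, 1400), peak(1530, 1630), peak(2650, 2750)
    return i_d / i_g, i_2d / i_g

# Hypothetical spectrum: three Gaussian bands on a flat background
shift = np.linspace(1200, 2900, 1700)
spectrum = (0.4 * np.exp(-((shift - 1350) / 15) ** 2)      # D band (defects)
            + 1.0 * np.exp(-((shift - 1580) / 15) ** 2)    # G band
            + 2.4 * np.exp(-((shift - 2690) / 25) ** 2))   # 2D band (monolayer signature)

id_ig, i2d_ig = raman_ratios(shift, spectrum)
print(f"ID/IG = {id_ig:.2f}, I2D/IG = {i2d_ig:.2f}")
```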
Procedia PDF Downloads 244
918 Evaluation of River Meander Geometry Using Uniform Excess Energy Theory and Effects of Climate Change on River Meandering
Authors: Youssef I. Hafez
Abstract:
Since ancient times, rivers have been a fostering and favorite place for people and civilizations to live along their banks. However, due to floods and droughts, and especially the severe conditions caused by global warming and climate change, river channels are continually evolving and moving in the lateral direction, changing their plan form either through the straightening of curved reaches (meander cut-off) or through increasing meandering curvature. The lateral shift or shrink of a river channel severely affects the river banks and the flood plain, with a tremendous impact on the surrounding environment. Therefore, understanding the formation and the continual processes of river channel meandering is of paramount importance. So far, in spite of the huge number of publications about river meandering, there has not been a satisfactory theory or approach that provides a clear explanation of the formation of river meanders and the mechanics of their associated geometries. In particular, two parameters are often needed to describe meander geometry. The first is a scale parameter such as the meander arc length. The second is a shape parameter such as the maximum angle a meander path makes with the channel mean down-path direction. These two parameters, if known, can determine the meander path and geometry as, for example, when they are incorporated in the well-known sine-generated curve. In this study, a uniform excess energy theory is used to illustrate the origin and mechanics of formation of river meandering. This theory advocates that the longitudinal imbalance between the valley and channel slopes (with the former greater than the latter) leads to the formation of a curved meander channel in order to reduce the excess energy through its expenditure as transverse energy loss. Two relations are developed based on this theory: one for the determination of the river channel radius of curvature at the bend apex (shape parameter) and the other for the determination of river channel sinuosity. The sinuosity equation performed very well when applied to available field data. In addition, existing model data were used to develop a relation between the meander arc length and the Darcy-Weisbach friction factor. Then, the meander wavelength was determined from the equations of the arc length and the sinuosity. The developed equation compared well with available field data. Effects of the transverse bed slope and grain size on river channel sinuosity are addressed. In addition, the concept of maximum channel sinuosity is introduced in order to explain the changes of river channel plan form due to changes in flow discharges and sediment loads induced by global warming and climate change. Keywords: river channel meandering, sinuosity, radius of curvature, meander arc length, uniform excess energy theory, transverse energy loss, transverse bed slope, flow discharges, sediment loads, grain size, climate change, global warming
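The abstract does not spell out the relations it develops, but it does name the sine-generated curve that links the two meander parameters (arc length as the scale parameter and maximum deflection angle as the shape parameter). The following sketch only illustrates that standard curve and a numerical sinuosity estimate derived from it; the parameter values are illustrative and are not taken from the study.

```python
import numpy as np

def sine_generated_path(arc_length, max_angle_deg, n=2000, n_meanders=3):
    """Trace a meander path from the sine-generated curve
    theta(s) = omega * sin(2*pi*s / L), where L is the meander arc length
    (scale parameter) and omega the maximum deflection angle (shape
    parameter). Returns x, y coordinates and the resulting sinuosity."""
    omega = np.radians(max_angle_deg)
    s = np.linspace(0.0, n_meanders * arc_length, n)
    theta = omega * np.sin(2.0 * np.pi * s / arc_length)
    ds = s[1] - s[0]
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    # Sinuosity = channel (arc) length / straight-line valley length
    sinuosity = (s[-1] - s[0]) / (x[-1] - x[0])
    return x, y, sinuosity

if __name__ == "__main__":
    # Illustrative values only: 500 m arc length, 70 degree maximum angle
    _, _, P = sine_generated_path(arc_length=500.0, max_angle_deg=70.0)
    print(f"Sinuosity for a 70 degree maximum deflection angle: {P:.2f}")
```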
Procedia PDF Downloads 223
917 Tourism Management of the Heritage and Archaeological Sites in Egypt
Authors: Sabry A. El Azazy
Abstract:
Archaeological heritage sites are among the most important tourist attractions worldwide. Egypt has various archaeological sites and historical locations that are classified within the list of the world's archaeological heritage destinations, such as Cairo, Luxor, Aswan, Alexandria, and Sinai. This study focuses on how to manage the archaeological sites and provide them with all services according to travelers' needs. Tourism management depends on strategic planning to support the national economy and sustainable development. Additionally, tourism management has to apply highly effective standards of security, promotion, advertisement, sales, and marketing while taking into consideration the preservation of monuments. In Egypt, the archaeological heritage sites must be well managed and protected, which would assist tourism management, especially in times of crisis. Recently, monumental places and archaeological heritage sites have been affected by unstable conditions and have been threatened. It is essential to focus on preserving our heritage. Moreover, more effort and cooperation between the tourism organizations and the ministry of archaeology are needed in order to protect the archaeology and promote the tourism industry. Methodology: Qualitative methods have been used as the overall approach to this study. Interviews and observations provided the researcher with the required in-depth insight into the research subject. The researcher was a lecturer in tourist guidance, which allowed visits to all historical sites in Egypt. Additionally, the researcher had the privilege of communicating with tourism specialists and attending meetings, conferences, and events focused on the research subject. Objectives: The main purpose of the research was to gain information in order to develop theoretical research on how to benefit effectively from those historical sites, both economically and culturally, and to pursue further research and scientific studies suited to the tourism and hospitality sector. The researcher aims to present further studies in the field of tourism and archaeological heritage using previous experience. Pursuing this course of study enables the researcher to acquire the necessary abilities and competencies to achieve the set goal successfully. Results: Professional tourism management focuses on making Egypt one of the most important destinations in the world and on providing the heritage and archaeological sites with all services that will place those locations on the international tourism map. Tourists' interest in visiting Egypt and a flourishing tourism sector support and strengthen Egypt's national economy and the local community, while taking into consideration the preservation of our heritage and archaeology. Conclusions: Egypt has many tourism attractions represented by its heritage, archaeological sites, and tourist places. These places need more attention and effort to be included in tourism programs and opened to visitors from all over the world. These efforts will encourage both local and international tourism to see our great civilization and provide different tourist activities. Keywords: archaeology, archaeological sites, heritage, ministry of archaeology, national economy, touristic attractions, tourism management, tourism organizations
Procedia PDF Downloads 144
916 Understanding Project Failures in Construction: The Critical Impact of Financial Capacity
Authors: Nnadi Ezekiel Oluwaseun Ejiofor
Abstract:
This research investigates the effects of poor cost estimation, material cost variations, and payment punctuality on the financial health and execution of construction projects in Nigeria. To achieve the objectives of the study, a quantitative research approach was employed, and data were gathered through an online survey of 74 construction industry professionals consisting of quantity surveyors, contractors, and other professionals. The study surveyed input on cost estimation errors, price fluctuations, and payment delays, among other factors. The responses were analyzed using a five-point Likert scale and the Relative Importance Index (RII). The findings demonstrated that errors in cost estimation in the Bill of Quantities (BOQ) have a highly negative impact on the reputation and image of the project participants. The greatest effect was on contractors' likelihood of obtaining future work (mean value = 3.42), followed by quantity surveyors' likelihood of obtaining new commissions (mean value = 3.40). The level of inaccuracy from cost underestimation exposes participants to risks, most seriously in terms of the ease of construction and the effects of a shortage of funds, leading to fears of bankruptcy (mean value = 3.78). There was also considerable financial damage as a result of cost underestimation, whereby contractors suffered the worst loss of profit (mean value = 3.88). Every expense comes with its own particular risk and uncertainty, and pressure on the cost of materials and every other expense attributed to the building and completion of a structure adds risk to the performance figures of a project. The greatest weight (mean importance score = 4.92) was attributed to issues such as market inflation in building material prices, while the second greatest weight (mean importance score = 4.76) was due to increased transportation charges. On the other hand, payment delays arising from client-side issues such as poor availability of funds (RII = 0.71) and contractual issues such as disagreements on the valuation of works done (RII = 0.72), among other reasons, were also found to lead to project delays and additional costs. The results affirm the importance of proper cost estimation for healthy organizational finances, for managing project risks, and for finishing within set time limits. As for recommendations, it is proposed to improve costing methods, foster better communication with stakeholders, and manage delays through contractual and financial controls. This study enhances the existing literature on construction project management by suggesting ways to deal with adverse cost inaccuracies and material availability issues caused by payment delays which, if addressed, would greatly improve the economic performance of the construction business. Keywords: cost estimation, construction project management, material price fluctuations, payment delays, financial impact
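The abstract reports RII values derived from five-point Likert responses but does not show the computation. The sketch below illustrates the commonly used definition RII = ΣW / (A·N), where W are the individual ratings, A the highest possible rating, and N the number of respondents; the response data here are invented for illustration only.

```python
import numpy as np

def relative_importance_index(ratings, max_scale=5):
    """Relative Importance Index: RII = sum(W) / (A * N), where W are the
    Likert ratings given by respondents, A is the highest possible rating,
    and N is the number of respondents. RII ranges from 0 to 1."""
    ratings = np.asarray(ratings, dtype=float)
    return ratings.sum() / (max_scale * ratings.size)

if __name__ == "__main__":
    # Hypothetical ratings from 74 respondents for one payment-delay factor
    rng = np.random.default_rng(0)
    example = rng.integers(1, 6, size=74)
    print(f"RII = {relative_importance_index(example):.2f}")
```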
Procedia PDF Downloads 9
915 Temporal and Spatio-Temporal Stability Analyses in Mixed Convection of a Viscoelastic Fluid in a Porous Medium
Authors: P. Naderi, M. N. Ouarzazi, S. C. Hirata, H. Ben Hamed, H. Beji
Abstract:
The stability of mixed convection in a Newtonian fluid medium heated from below and cooled from above, also known as the Poiseuille-Rayleigh-Bénard problem, has been extensively investigated in the past decades. To our knowledge, mixed convection in porous media has received much less attention in the published literature. The present paper extends the mixed convection problem in porous media to the case of a viscoelastic fluid flow, owing to its numerous environmental and industrial applications such as the extrusion of polymer fluids, the solidification of liquid crystals, suspension solutions, and petroleum activities. Without a superimposed through-flow, the natural convection problem of a viscoelastic fluid in a saturated porous medium has already been treated. The effects of the viscoelastic properties of the fluid on the linear and nonlinear dynamics of the thermoconvective instabilities have also been treated in that work. Consequently, the elasticity of the fluid can lead either to a Hopf bifurcation, giving rise to oscillatory structures in the strongly elastic regime, or to a stationary bifurcation in the weakly elastic regime. The objective of this work is to examine the influence of the main horizontal flow on the linear characteristics of these two types of instabilities. Under the Boussinesq approximation and Darcy's law extended to a viscoelastic fluid, a temporal stability approach shows that the conditions for the appearance of longitudinal rolls are identical to those found in the absence of through-flow. For general three-dimensional (3D) perturbations, a Squire transformation allows the deduction of the complex frequencies associated with the 3D problem from those obtained by solving the two-dimensional one. The numerical resolution of the eigenvalue problem shows that the through-flow has a destabilizing effect and selects a convective configuration organized in purely transverse rolls which oscillate in time and propagate in the direction of the main flow. In addition, by using the mathematical formalism of absolute and convective instabilities, we study the nature of the unstable three-dimensional disturbances. It is shown that for a non-vanishing through-flow, general three-dimensional instabilities are convectively unstable, which means that in the absence of a continuous noise source these instabilities drift out of the porous medium, and no long-term pattern is observed. In contrast, purely transverse rolls may exhibit a transition to an absolute instability regime and therefore affect the porous medium everywhere, including in the absence of a noise source. The absolute instability threshold, the frequency, and the wave number associated with purely transverse rolls are determined as functions of the Péclet number and the viscoelastic parameters. Results are discussed and compared to those obtained from laboratory experiments in the case of Newtonian fluids. Keywords: instability, mixed convection, porous media, and viscoelastic fluid
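The abstract invokes "Darcy's law extended to a viscoelastic fluid" and the Boussinesq approximation without writing the equations. The block below is only a sketch of a commonly used Oldroyd-type (relaxation/retardation) form under those assumptions; the notation, and in particular the times λ₁ and λ₂, are assumptions and not necessarily the authors' exact formulation.

```latex
% Sketch (not the authors' exact formulation): Darcy's law extended to an
% Oldroyd-type viscoelastic fluid under the Boussinesq approximation.
\begin{equation}
  \left(1 + \lambda_{1}\,\frac{\partial}{\partial t}\right)
  \left(\nabla p - \rho\,\mathbf{g}\right)
  = -\,\frac{\mu}{K}\,
  \left(1 + \lambda_{2}\,\frac{\partial}{\partial t}\right)\mathbf{V},
  \qquad
  \rho = \rho_{0}\,\bigl[1 - \beta\,(T - T_{0})\bigr],
\end{equation}
% where \lambda_1 and \lambda_2 are the relaxation and retardation times,
% K the permeability, \mu the dynamic viscosity, \beta the thermal expansion
% coefficient, and \mathbf{V} the filtration (Darcy) velocity; setting
% \lambda_1 = \lambda_2 = 0 recovers the Newtonian Darcy law.
```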
Procedia PDF Downloads 341
914 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective
Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan
Abstract:
Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including the limited availability of a computer, this perspective does not consider the important role paper, such as the nurses’ handover sheet, plays in practice. The purpose of this study is to confirm the hypothesis that paper use post-EHR adoption continues because paper provides both a cognitive tool (that assists with workflow) and a compensation tool (to circumvent usability challenges). Distinguishing the different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward, and specialist day-care) that had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses. Data were transcribed, themes were extracted using an inductive bottom-up coding approach, and a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While two distinct motivations for paper use post-EHR adoption were confirmed by the data - paper as a cognitive tool and paper as a compensation tool - a further finding was that there was an overlap between the two uses. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid due to its nature (easy to access and annotate), or vice versa. Rather than presenting paper persistence as having two distinct motivations, it is more useful to describe it as lying on a continuum with the compensation tool and the cognitive tool at either pole. Paper as a cognitive tool referred to pages such as the nurses’ handover sheet. These did not form part of the patient’s record, although information could be transcribed from one to the other. Findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information and so continued to be incorporated into the nurses’ work. Comparatively, paper used as a compensation tool, such as pre-printed care plans which were stored in the patient's record, appears to have been instigated in reaction to usability challenges. In these instances, it is expected that paper use could reduce or cease when the underlying problem is addressed. There is a danger that, as paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, either as a cognitive or compensation tool or a combination of both. By fully understanding its utility and nuances, organisations can avoid evaluating all instances of paper use (post-EHR adoption) as arising from usability challenges. Instead, suitable remedies for paper persistence can be targeted at the root cause. Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence
Procedia PDF Downloads 442
913 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a mixture of a fibrous protein (gelatin) and an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic, and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications. The functional properties of this anionic polysaccharide can be modified by the presence of proteins with which it might interact. Another factor that may also govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Its complex synthesis results in an extracellular assembly containing several levels of organization. Collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers. Each level corresponds to an organization recognized by the cellular and metabolic system. Gelatin allows this approach: the formation of a gelatin gel involves the triple-helical folding of denatured collagen chains. This gel has been the subject of numerous studies, and it is now known that its properties depend only on the amount of triple helices forming the network. Chemical modification of this system is fairly well controlled. Observing the dynamics of the triple helix may be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Since gelatin is central to many industrial processes, understanding and analyzing the molecular dynamics induced by the triple helix during gelatin transitions can have great economic importance in all fields, and especially in food. The goal is to understand the possible mechanisms involved depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α helix are strongly influenced by the nature of the medium. Our goal is to minimize as much as possible the changes in the α-helix structure in order to keep gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the helicity rate of gelatin. Keywords: gelatin, sodium carboxymethylcellulose, interaction gelatin-NaCMC, the rate of helicity, polarimetry
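The abstract does not state how the helicity rate is derived from the polarimetric readings. A common way to estimate the triple-helix fraction of gelatin from optical rotation is the two-state relation sketched below; the reference rotations for the fully coiled and fully helical states are placeholders for illustration, not values from this study.

```python
def helix_fraction(alpha_obs, alpha_coil, alpha_helix):
    """Two-state estimate of the gelatin helix fraction from specific
    optical rotation: X = (a_obs - a_coil) / (a_helix - a_coil).
    All rotations must refer to the same wavelength and temperature."""
    return (alpha_obs - alpha_coil) / (alpha_helix - alpha_coil)

if __name__ == "__main__":
    # Placeholder reference rotations (degrees, illustrative only): fully
    # denatured coil and fully renatured collagen-like triple helix.
    ALPHA_COIL, ALPHA_HELIX = -130.0, -400.0
    measured = -220.0  # hypothetical reading for a gelatin/NaCMC mixture
    x = helix_fraction(measured, ALPHA_COIL, ALPHA_HELIX)
    print(f"Estimated helix fraction: {x:.2f}")
```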
Procedia PDF Downloads 313
912 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography
Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner
Abstract:
Nowadays, three-dimensional Cone Beam CT (CBCT) has turned into a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the amount of radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations so as to maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom consisting of several small target polytetrafluoroethylene spheres at regular distances throughout the entire phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the Full-Width-at-Half-Maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates a better spatial resolution of the reconstruction results at the target area. One important feature of interventional radiology is that the imaging targets are very well known, as prior knowledge of the patient anatomy (e.g., a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and consider it as the digital phantom in our simulations to find the optimal trajectory for a specific target. Based on the simulation phase, we obtain the optimal trajectory, which can then be applied on the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and the real data acquisition. Our experimental results, based on both simulation and real data, show that the proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. Our results show that the proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory while using just 1/3 of the number of projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose. Keywords: CBCT, C-arm, reconstruction, trajectory optimization
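Since the evaluation metric here is the FWHM of local PSFs, the sketch below shows one common way to measure the FWHM of a 1D profile through a reconstructed target by linear interpolation at the half-maximum level; the Gaussian profile used is synthetic and only stands in for the study's reconstructed data.

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a 1D PSF profile, using linear
    interpolation to find the half-maximum crossings on each side of the peak."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the left and right crossings of the half-maximum level
    left = np.interp(half, [profile[i0 - 1], profile[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [profile[i1 + 1], profile[i1]], [x[i1 + 1], x[i1]])
    return right - left

if __name__ == "__main__":
    # Synthetic Gaussian PSF profile with sigma = 0.8 mm (expected FWHM ~ 1.88 mm)
    x = np.linspace(-5, 5, 501)
    psf = np.exp(-x**2 / (2 * 0.8**2))
    print(f"Measured FWHM: {fwhm(x, psf):.2f} mm")
```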
Procedia PDF Downloads 132
911 The Influence of Argumentation Strategy on Student’s Web-Based Argumentation in Different Scientific Concepts
Authors: Xinyue Jiao, Yu-Ren Lin
Abstract:
Argumentation is an essential aspect of scientific thinking that has received wide attention in recent reforms of science education. The purpose of the present study was to explore the influence of two variables, termed ‘the argumentation strategy’ and ‘the kind of science concept’, on students’ web-based argumentation. The first variable was divided into either monological (which refers to an individual’s internal discourse and inner chain reasoning) or dialectical (which refers to dialogue interaction between/among people). The other was divided into either descriptive (i.e., macro-level concepts, such as phenomena that can be observed and tested directly) or theoretical (i.e., micro-level concepts, which are abstract and cannot be tested directly in nature). The present study applied a quasi-experimental design in which 138 7th-grade students were invited and then randomly assigned to either the monological group (N=70) or the dialectical group (N=68). An argumentation learning program called ‘the PWAL’ was developed to improve their scientific argumentation abilities, such as arguing from multiple perspectives and based on scientific evidence. Two versions of the PWAL were created. In the individual version, students could propose arguments only through knowledge recall and a self-reflecting process. In the collaborative version, on the other hand, students were allowed to construct arguments through communication with peers. The PWAL involved three descriptive concept-based topics (units 1, 3, and 5) and three theoretical concept-based topics (units 2, 4, and 6). Three kinds of scaffolding were embedded into the PWAL: a) an argument template, used for constructing evidence-based arguments; b) the model of Toulmin’s TAP, which shows the structure and elements of a sound argument; c) a discussion block, which enabled the students to review what had been proposed during the argumentation. Both quantitative and qualitative data were collected and analyzed. An analytical framework for coding the students’ arguments proposed in the PWAL was constructed. The results showed that the argumentation approach had a significant effect on argumentation only in theoretical topics (F(1, 136)=48.2, p < .001, η2=2.62). The post-hoc analysis showed that the students in the collaborative group performed significantly better than the students in the individual group (mean difference=2.27). However, there was no significant difference between the two groups regarding their argumentation in descriptive topics. Secondly, the students made significant progress in the PWAL from the earlier descriptive or theoretical topic to the later one. The results enabled us to conclude that the PWAL was effective for students’ argumentation and that peer interaction was essential for students to argue scientifically, especially for the theoretical topics. The follow-up qualitative analysis showed that students tended to generate arguments through critical dialogue interactions in the theoretical topics, which prompted them to use more critiques and to evaluate and co-construct each other’s arguments. More explanations regarding the students’ web-based argumentation and suggestions for the development of web-based science learning are proposed in our discussion. Keywords: argumentation, collaborative learning, scientific concepts, web-based learning
Procedia PDF Downloads 104
910 Recognition of Spelling Problems during the Text in Progress: A Case Study on the Comments Made by Portuguese Students Newly Literate
Authors: E. Calil, L. A. Pereira
Abstract:
The acquisition of orthography is a complex process, involving both lexical and grammatical questions. This learning occurs simultaneously with the mastery of multiple textual aspects (e.g.: graphs, punctuation, etc.). However, most of the research on orthographic acquisition focuses on this acquisition from an autonomous point of view, separated from the process of textual production. This means that their object of analysis is the production of words selected by the researcher or of requested sentences in an experimental and controlled setting. In addition, the Spelling Problems (SPs) under analysis are those identified by the researcher on the sheet of paper. Considering the perspective of Textual Genetics, from an enunciative approach, this study will discuss the SPs recognized by dyads of newly literate students while they are writing a text collaboratively. Six textual production proposals, requested by a 2nd-year teacher of a Portuguese primary school, were recorded between January and March 2015. In our case study we discuss the SPs recognized by the dyad B and L (7 years old). We adopted as a methodological tool the Ramos System audiovisual record. This system allows real-time capture of the text in progress and of the face-to-face dialogue between both students and their teacher, and also captures the body movements and facial expressions of the participants during the textual production proposals in the classroom. In these ecological conditions of multimodal registration of collaborative writing, we could identify the emergence of SPs in two dimensions: i. In the product (finished text): identification of SPs without recursive graphic marks (without erasures) and identification of SPs with erasures, indicating the recognition of the SP by the student; ii. In the process (text in progress): identification of comments made by students about recognized SPs. Given this, we analyzed the comments on SPs identified during the text in progress. These comments characterize a type of reformulation referred to as Commented Oral Erasure (COE). The COE has two enunciative forms: Simple Comment (SC), such as ' 'X' is written with 'Y' '; or Unfolded Comment (UC), such as ' 'X' is written with 'Y' because...'. The spelling COE may also occur before or during the SP (Early Spelling Recognition - ESR) or after the SP has been written (Later Spelling Recognition - LSR). There were 631 words written in the 6 stories produced by the B-L dyad, 145 of them containing some type of SP. During the text in progress, the students orally recognized 174 SPs, 46 of which were identified in advance (ESRs) and 128 of which were identified later (LSRs). If we consider that the 88 erasure SPs in the product indicate some form of SP recognition, we can observe that there were twice as many SPs recognized orally. The ESRs were characterized by SCs, as when students asked their colleague or teacher how to spell a given word. The LSRs presented predominantly UCs, verbalizing meta-orthographic arguments, mostly made by L. These results indicate that writing in dyads is an important didactic strategy for the promotion of metalinguistic reflection, favoring the learning of spelling. Keywords: collaborative writing, erasure, learning, metalinguistic awareness, spelling, text production
Procedia PDF Downloads 163
909 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin
Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, Jose R. PéRez-Correa
Abstract:
Emamectin Benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when they are exposed to sublethal EB doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous Solid Dispersion techniques such as Spray Drying (SD) and Ionic Gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles during gastric (GD%) and intestinal (ID%) digestion. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the best combination of the factors that allowed a lower EB release under gastric conditions while permitting a greater release during intestinal digestion. Two approaches were used to determine this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques allowed the integrity of EB to be maintained at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed a greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs due to a 60% reduction in EB with respect to the optimal EB amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD, and ID showed values 9.5%, 84.8%, and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained. Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®
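The abstract compares a desirability approach with multi-objective optimization but does not show how the individual desirabilities are built. The sketch below illustrates the standard Derringer-Suich construction for the kind of responses named here (e.g., minimizing gastric release GD% while maximizing intestinal release ID% and yield Y%); the bounds, exponents, and example response values are purely illustrative and are not taken from the study.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, s=1.0):
    """Derringer-Suich desirability for a response to maximize."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def d_smaller_is_better(y, lo, hi, s=1.0):
    """Derringer-Suich desirability for a response to minimize."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s

def overall_desirability(desirabilities):
    """Overall desirability D: geometric mean of the individual d_i."""
    d = np.asarray(desirabilities, dtype=float)
    return float(np.prod(d) ** (1.0 / d.size))

if __name__ == "__main__":
    # Illustrative responses for one candidate formulation (not study data)
    Y, GD, ID = 78.0, 12.0, 65.0   # yield %, gastric release %, intestinal release %
    D = overall_desirability([
        d_larger_is_better(Y, lo=50.0, hi=95.0),    # maximize yield
        d_smaller_is_better(GD, lo=0.0, hi=30.0),   # minimize gastric release
        d_larger_is_better(ID, lo=20.0, hi=90.0),   # maximize intestinal release
    ])
    print(f"Overall desirability D = {D:.2f}")
```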
Procedia PDF Downloads 131
908 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension in the range of 1 µm to 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized in industry are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a legitimate and authentic analytical technique to sample, analyze, and quantify MPs is still in the development and testing stages. Among the characterization techniques, vibrational spectroscopic techniques are largely adopted in the field of polymers, and the ongoing miniaturization of these methods is on its way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, MicroNIR spectra were analysed through the application of multivariate modelling. Principal Components Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was made. For the microplastic analysis, the three most abundant polymers in the plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Distinct blends of these three microplastics were prepared in line with a designed ternary composition plot. After the exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed the prediction of the percentage of microplastics in the mixtures. With a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points. The model was then used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated or not, and coloured or not, with no sample pre-treatment. The technique can be used with larger sample volumes and even allows on-site evaluation, thus satisfying the need for a high-throughput strategy. Keywords: chemometrics, microNIR, microplastics, urban plastic waste
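The workflow described (exploratory PCA, then a PLSR model calibrated on 42 of 63 ternary compositions and tested on 21) can be sketched in a few lines of scikit-learn. The spectra below are synthetic stand-ins for the MicroNIR data, and the matrix sizes and component counts are assumptions made only to keep the example runnable.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for MicroNIR spectra: 63 ternary PE/PP/PS mixtures,
# 125 wavelength channels (illustrative sizes, not the real instrument grid).
rng = np.random.default_rng(1)
fractions = rng.dirichlet(np.ones(3), size=63)      # PE, PP, PS fractions summing to 1
pure_spectra = rng.random((3, 125))                 # pretend pure-component spectra
X = fractions @ pure_spectra + 0.01 * rng.standard_normal((63, 125))
Y = 100.0 * fractions                               # composition in percent

# Exploratory PCA on the spectra
scores = PCA(n_components=2).fit_transform(X)
print("PCA score matrix shape:", scores.shape)

# PLS regression calibrated on 42 mixtures, tested on the remaining 21
X_cal, X_test, Y_cal, Y_test = train_test_split(X, Y, train_size=42, random_state=0)
pls = PLSRegression(n_components=3).fit(X_cal, Y_cal)
Y_pred = pls.predict(X_test)
rmsep = np.sqrt(np.mean((Y_pred - Y_test) ** 2, axis=0))
print("RMSEP per component (PE, PP, PS):", np.round(rmsep, 2))
```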
Procedia PDF Downloads 165
907 Integration of Gravity and Seismic Methods in the Geometric Characterization of a Dune Reservoir: Case of the Zouaraa Basin, NW Tunisia
Authors: Marwa Djebbi, Hakim Gabtni
Abstract:
Gravity is a continuously advancing method that has become a mature technology for geological studies. Increasingly, it has been used to complement and constrain traditional seismic data, and it has even been used as the only tool to obtain information about the subsurface. In fact, in some regions the seismic data, if available, are of poor quality and hard to interpret. Such is the case for the current study area. The Nefza zone is part of the Tellian fold and thrust belt domain in the northwest of Tunisia. It is essentially made of a pile of allochthonous units resulting from a major Neogene tectonic event. Its tectonic and stratigraphic developments have always been subject to controversy. Considering the geological and hydrogeological importance of this area, a detailed interdisciplinary study has been conducted integrating geology, seismic, and gravity techniques. The interpretation of the gravity data allowed the delimitation of the dune reservoir and the identification of the regional lineaments contouring the area. It revealed the presence of three gravity lows corresponding to the dunes of Zouara and Ouchtata, separated by a positive gravity axis following the Ain Allega_Aroub Er Roumane axis. The Bouguer gravity map illustrated the compartmentalization of the Zouara dune into two depressions separated by a NW-SE anomaly trend. This configuration was confirmed by the vertical derivative map, which showed the individualization of two depressions with slightly different anomaly values. The horizontal gravity gradient magnitude was computed in order to determine the different geological features present in the studied area. The latter indicated the presence of parallel NE-SW folds following the major Atlasic direction. Also, NW-SE and E-W trends were identified. The maxima tracing confirmed this direction through the presence of NE-SW faults, mainly the Ghardimaou_Cap Serrat accident. The poor quality of the available seismic sections and the absence of borehole data in the region, except for a few hydraulic wells that have been drilled and show the heterogeneity of the substratum of the dune, required gravity modeling of this challenging area for the geometrical characterization of the dune reservoir and the determination of the different stratigraphic series underneath these deposits. For more detailed and accurate results, the scale of the study will be reduced in coming research, and a more concise method will be elaborated: the 4D microgravity survey. This approach is considered an expansion of the gravity method, with time as its fourth dimension. It will allow continuous and repeated monitoring of fluid movement in the subsurface at the microgal (µGal) scale. The gravity effect results from the monthly variation of the dynamic groundwater level, which correlates with rainfall during different periods. Keywords: 3D gravity modeling, dune reservoir, heterogeneous substratum, seismic interpretation
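The abstract relies on the horizontal gravity gradient magnitude and its maxima to outline structural lineaments. The sketch below shows the usual definition, HGM = sqrt((∂g/∂x)² + (∂g/∂y)²), applied to a gridded Bouguer anomaly; the anomaly grid, grid spacing, and step-like structure are synthetic and only illustrate the computation.

```python
import numpy as np

def horizontal_gradient_magnitude(bouguer, dx, dy):
    """Horizontal gradient magnitude of a gridded Bouguer anomaly:
    HGM = sqrt((dg/dx)^2 + (dg/dy)^2). Maxima of HGM tend to overlie
    lateral density contrasts such as faults and geological contacts."""
    dgdy, dgdx = np.gradient(bouguer, dy, dx)  # rows vary along y, columns along x
    return np.hypot(dgdx, dgdy)

if __name__ == "__main__":
    # Synthetic anomaly grid (mGal) with a NE-SW trending step, 500 m spacing
    x = np.arange(0.0, 20000.0, 500.0)
    y = np.arange(0.0, 20000.0, 500.0)
    X, Y = np.meshgrid(x, y)
    bouguer = 5.0 * np.tanh((X + Y - 20000.0) / 3000.0)
    hgm = horizontal_gradient_magnitude(bouguer, dx=500.0, dy=500.0)
    iy, ix = np.unravel_index(np.argmax(hgm), hgm.shape)
    print(f"Max HGM = {hgm.max():.4f} mGal/m at node (y = {y[iy]:.0f} m, x = {x[ix]:.0f} m)")
```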
Procedia PDF Downloads 298