Search results for: mixed embeddedness model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19060


9370 The Carbon Footprint Model as a Plea for Cities towards Energy Transition: The Case of Algiers Algeria

Authors: Hachaichi Mohamed Nour El-Islem, Baouni Tahar

Abstract:

Environmental sustainability, more than a trans-disciplinary and scientific issue, is the main challenge that characterizes all modern cities nowadays. In developing countries, this concern is expressed in a plethora of critical urban ills: traffic congestion, air pollution, noise, urban decay, and increases in energy consumption and CO2 emissions, which blemish cities’ landscapes and may threaten citizens’ health and welfare. As in other developing-world cities, the rapid growth of Algiers’ population and the expansion of the city have led to increases in daily trips, energy consumption and CO2 emissions. In addition, the lack of proper and sustainable planning of the city’s infrastructure is one of the most relevant issues from which Algiers suffers. The aim of this contribution is to estimate the carbon deficit of the City of Algiers, Algeria, using the Ecological Footprint Model (carbon footprint). To achieve this goal, the amount of CO2 from fuel combustion was calculated and aggregated into five sectors (agriculture, industry, residential, tertiary and transportation); likewise, Algiers’ biocapacity (CO2 uptake land) was calculated to determine the ecological overshoot. This study shows that Algiers’ transport system is not sustainable and generates more than 50% of Algiers’ total carbon footprint, which cannot be sequestered by the local forest land. The research also aims to show that the Carbon Footprint Assessment can be a relevant indicator for designing sustainable strategies/policies striving to reduce CO2 by curbing energy consumption in the transportation sector and reducing the use of fossil fuels as the main energy input.
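
For illustration, a minimal sketch of the footprint-versus-biocapacity arithmetic described above is given below. All numerical values (ocean uptake fraction, forest sequestration rate, equivalence factor, sectoral inventory, biocapacity) are assumed placeholders, not the authors' data for Algiers.

```python
# Minimal sketch of the carbon-footprint vs. biocapacity arithmetic.
# All numbers below are illustrative assumptions, not the study's data.

OCEAN_UPTAKE_FRACTION = 0.28        # share of CO2 absorbed by oceans (assumed)
SEQUESTRATION_T_CO2_PER_HA = 2.68   # forest sequestration rate, t CO2/ha/yr (assumed)
FOREST_EQUIVALENCE_FACTOR = 1.26    # forest hectares -> global hectares (assumed)

def carbon_footprint_gha(co2_tonnes_by_sector):
    """Convert sectoral CO2 emissions (t/yr) to a carbon footprint in global hectares."""
    total_co2 = sum(co2_tonnes_by_sector.values())
    land_demand_ha = total_co2 * (1 - OCEAN_UPTAKE_FRACTION) / SEQUESTRATION_T_CO2_PER_HA
    return land_demand_ha * FOREST_EQUIVALENCE_FACTOR

def overshoot(footprint_gha, biocapacity_gha):
    """Positive value = ecological (carbon) deficit; negative = reserve."""
    return footprint_gha - biocapacity_gha

# Hypothetical sectoral inventory (t CO2/yr), for illustration only.
sectors = {"agriculture": 1.2e5, "industry": 9.0e5, "residential": 1.5e6,
           "tertiary": 7.0e5, "transportation": 3.5e6}

fp = carbon_footprint_gha(sectors)
print(f"Carbon footprint: {fp:,.0f} gha")
print(f"Transport share of emissions: {sectors['transportation'] / sum(sectors.values()):.0%}")
print(f"Overshoot vs. an assumed biocapacity of 1.0e6 gha: {overshoot(fp, 1.0e6):,.0f} gha")
```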

Keywords: biocapacity, carbon footprint, ecological footprint assessment, energy consumption

Procedia PDF Downloads 147
9369 Understanding the Utilization of Luffa Cylindrica in the Adsorption of Heavy Metals to Clean Up Wastewater

Authors: Akanimo Emene, Robert Edyvean

Abstract:

In developing countries, a low-cost method of wastewater treatment is highly desirable. Adsorption is an efficient and economically viable treatment process for wastewater. The utilisation of this process is based on an understanding of the relationship between the growth environment and the metal capacity of the biomaterial. Luffa cylindrica (LC), a plant material, was used as an adsorbent in the design of a heavy metal adsorption system. The chemically modified LC was used to adsorb heavy metal ions, lead and cadmium, from aqueous solution under varying experimental conditions. The experimental factors studied were adsorption time, initial metal ion concentration, ionic strength and solution pH. The chemical nature and surface area of the tissues adsorbing heavy metals in LC biosorption systems were characterised using electron microscopy and infrared spectroscopy, which showed an increase in surface area and improved adhesion capacity after chemical treatment. Metal speciation showed the binary interaction between the ions and the LC surface as the pH increases. Maximum adsorption occurred between pH 5 and pH 6. The ionic strength of the metal ion solution affects the adsorption capacity through the surface charge and the availability of adsorption sites on the LC. The experimental data were analysed with kinetic and isotherm models to characterise the nature of the metal-surface complexes formed. The pseudo-second-order kinetic model and the two-site Langmuir isotherm model showed the best fit. Through an understanding of this process, there will be an opportunity to provide an alternative method for water purification, offering an option when expensive water treatment technologies are not viable in developing countries.
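
As an illustration of the model fitting mentioned above, the sketch below fits a pseudo-second-order kinetic model and a single-site Langmuir isotherm (a simplification of the two-site model named in the abstract) to synthetic placeholder data; only the model equations and the SciPy fitting calls are the point.

```python
# Illustrative fit of pseudo-second-order kinetics and a (single-site) Langmuir
# isotherm with SciPy. Data are synthetic placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q(t) (mg/g) for pseudo-second-order kinetics."""
    return (qe**2 * k2 * t) / (1 + qe * k2 * t)

def langmuir(Ce, qmax, KL):
    """Equilibrium uptake qe (mg/g) vs. equilibrium concentration Ce (mg/L)."""
    return qmax * KL * Ce / (1 + KL * Ce)

# Synthetic example data (NOT the study's measurements).
t = np.array([5, 10, 20, 40, 60, 120], dtype=float)          # min
qt = np.array([8.1, 12.4, 16.9, 20.3, 21.6, 23.0])           # mg/g
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])            # mg/L
qe_obs = np.array([10.5, 18.2, 25.9, 32.1, 36.8, 39.4])      # mg/g

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[25.0, 0.01])
(qmax_fit, KL_fit), _ = curve_fit(langmuir, Ce, qe_obs, p0=[40.0, 0.05])

print(f"Pseudo-second-order: qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g/(mg min)")
print(f"Langmuir: qmax = {qmax_fit:.1f} mg/g, KL = {KL_fit:.3f} L/mg")
```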

Keywords: adsorption, luffa cylindrica, metal-surface complexes, pH

Procedia PDF Downloads 89
9368 Prevalence of Cyp2d6 and Its Implications for Personalized Medicine in Saudi Arabs

Authors: Hamsa T. Tayeb, Mohammad A. Arafah, Dana M. Bakheet, Duaa M. Khalaf, Agnieszka Tarnoska, Nduna Dzimiri

Abstract:

Background: CYP2D6 is a member of the cytochrome P450 mixed-function oxidase system. The enzyme is responsible for the metabolism and elimination of approximately 25% of clinically used drugs, especially in breast cancer and psychiatric therapy. Different phenotypes have been described, displaying alleles that lead to a complete loss of enzyme activity or reduced function (poor metabolizers – PM) or to hyperfunctionality (ultrarapid metabolizers – UM), and therefore to drug intoxication or loss of drug effect. The prevalence of these variants may vary among different ethnic groups. Furthermore, the xTAG system has been developed to categorize patients into different groups based on their CYP2D6 substrate metabolism. Aim of the study: To determine the prevalence of the different CYP2D6 variants in our population, and to evaluate their clinical relevance in personalized medicine. Methodology: We used the Luminex xMAP genotyping system to genotype 305 Saudi individuals visiting the Blood Bank of our Institution and determine which polymorphisms of the CYP2D6 gene are prevalent in our region. Results: xTAG genotyping showed that 36.72% (112 out of 305 individuals) carried CYP2D6_*2. Of these, 19 individuals (6.23% of the cohort) had multiple copies of the *2 SNP, resulting in an UM phenotype. About 33.44% carried CYP2D6_*41, which leads to decreased activity of the CYP2D6 enzyme. 19.67% had the wild-type alleles and thus had normal enzyme function. Furthermore, 15.74% carried CYP2D6_*4, which is the most common nonfunctional form of the CYP2D6 enzyme worldwide. 6.56% carried CYP2D6_*17, resulting in decreased enzyme activity. Approximately 5.73% carried CYP2D6_*10, consequently decreasing the enzyme activity and resulting in a PM phenotype. 2.30% carried CYP2D6_*29, leading to decreased metabolic activity of the enzyme, and 2.30% carried CYP2D6_*35, resulting in an UM phenotype. 1.64% had a whole-gene deletion (CYP2D6_*5), thus resulting in the loss of CYP2D6 enzyme production, and 0.66% carried the CYP2D6_*6 variant. One individual carried CYP2D6_*3(B), which produces an inactive form of the enzyme, resulting in a PM phenotype. Finally, one individual carried CYP2D6_*9, which decreases the enzyme activity. Conclusions: Our study demonstrates that different CYP2D6 variants are highly prevalent in ethnic Saudi Arabs. This finding sets a basis for informed genotyping for these variants in personalized medicine. The study also suggests that xTAG is an appropriate procedure for genotyping the CYP2D6 variants in personalized medicine.
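
A quick back-of-the-envelope check of the carrier percentages quoted above is sketched below; the counts are back-derived from the reported percentages of n = 305 (so they are assumptions, not the study's raw genotype table), and the only point is the counts-to-prevalence arithmetic.

```python
# Counts-to-prevalence arithmetic for selected CYP2D6 alleles. Counts are
# back-derived from the reported percentages (n = 305) and are therefore assumptions.
carrier_counts = {
    "*2 (carriers, incl. duplications)": 112,
    "*41 (decreased activity)": 102,
    "wild-type only": 60,
    "*4 (nonfunctional)": 48,
    "*17 (decreased activity)": 20,
}
n = 305
for allele, count in carrier_counts.items():
    print(f"{allele:35s} {count:4d}/{n} = {100 * count / n:5.2f}%")
```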

Keywords: CYP2D6, hormonal breast cancer, pharmacogenetics, polymorphism, psychiatric treatment, Saudi population

Procedia PDF Downloads 572
9367 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large datasets makes them particularly effective models. The primary obstacle to improving the performance of these models is choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a hybrid Bayesian optimization with tree-structured Parzen estimator (BO-TPE) approach to optimize a deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m3 and a total volume of 1914 m3; its internal diameter and height are 19 and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques were applied to the pre-processed data (MinMaxScaler, RobustScaler and StandardScaler) and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
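
A hedged sketch of this kind of workflow is shown below: Optuna's TPE sampler stands in for the paper's hybrid BO-TPE scheme, an sklearn MLP regressor stands in for the DNN, and RobustScaler is used as the best-performing scaler per the abstract. The file name, column names and search ranges are assumptions for illustration only.

```python
# TPE-based hyperparameter search for a biogas-volume regressor (illustrative).
# Optuna's TPESampler stands in for the paper's hybrid BO-TPE; an sklearn MLP
# stands in for the DNN. File name, column names and ranges are assumptions.
import numpy as np
import optuna
import pandas as pd
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import RobustScaler

df = pd.read_csv("uasb_plant.csv")                         # hypothetical plant log
X, y = df.drop(columns=["biogas_Nm3_d"]).values, df["biogas_Nm3_d"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = RobustScaler().fit(X_tr)                          # best scaler per the abstract
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

def objective(trial):
    n_layers = trial.suggest_int("n_layers", 1, 4)
    units = trial.suggest_int("units", 16, 256, log=True)
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    activation = trial.suggest_categorical("activation", ["relu", "tanh"])
    model = MLPRegressor(hidden_layer_sizes=(units,) * n_layers, activation=activation,
                         learning_rate_init=lr, max_iter=1000, random_state=0)
    return cross_val_score(model, X_tr_s, y_tr, cv=3, scoring="r2").mean()

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)

p = study.best_params
best = MLPRegressor(hidden_layer_sizes=(p["units"],) * p["n_layers"], activation=p["activation"],
                    learning_rate_init=p["lr"], max_iter=1000, random_state=0).fit(X_tr_s, y_tr)
pred = best.predict(X_te_s)
print("R2 =", r2_score(y_te, pred), "MAE =", mean_absolute_error(y_te, pred),
      "RMSE =", float(np.sqrt(np.mean((y_te - pred) ** 2))))
```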

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid bo-tpe, hyperparameters tuning

Procedia PDF Downloads 38
9366 Collaboration between Grower and Research Organisations as a Mechanism to Improve Water Efficiency in Irrigated Agriculture

Authors: Sarah J. C. Slabbert

Abstract:

The uptake of research as part of the diffusion or adoption of innovation by practitioners, whether individuals or organisations, has been a popular topic in agricultural development studies for many decades. In the classical, linear model of innovation theory, the innovation originates from an expert source such as a state-supported research organisation or academic institution. The changing context of agriculture led to the development of the agricultural innovation systems model, which recognizes innovation as a complex interaction between individuals and organisations, which include private industry and collective action organisations. In terms of this model, an innovation can be developed and adopted without any input or intervention from a state or parastatal research organisation. This evolution in the diffusion of agricultural innovation has put forward new challenges for state or parastatal research organisations, which have to demonstrate the impact of their research to the legislature or a regulatory authority: Unless the organisation and the research it produces cross the knowledge paths of the intended audience, there will be no awareness, no uptake and certainly no impact. It is therefore critical for such a research organisation to base its communication strategy on a thorough understanding of the knowledge needs, information sources and knowledge networks of the intended target audience. In 2016, the South African Water Research Commission (WRC) commissioned a study to investigate the knowledge needs, information sources and knowledge networks of Water User Associations and commercial irrigators with the aim of improving uptake of its research on efficient water use in irrigation. The first phase of the study comprised face-to-face interviews with the CEOs and Board Chairs of four Water User Associations along the Orange River in South Africa, and 36 commercial irrigation farmers from the same four irrigation schemes. Intermediaries who act as knowledge conduits to the Water User Associations and the irrigators were identified and 20 of them were subsequently interviewed telephonically. The study found that irrigators interact regularly with grower organisations such as SATI (South African Table Grape Industry) and SAPPA (South African Pecan Nut Association) and that they perceive these organisations as credible, trustworthy and reliable, within their limitations. State and parastatal research institutions, on the other hand, are associated with a range of negative attributes. As a result, the awareness of, and interest in, the WRC and its research on water use efficiency in irrigated agriculture are low. The findings suggest that a communication strategy that involves collaboration with these grower organisations would empower the WRC to participate much more efficiently and with greater impact in agricultural innovation networks. The paper will elaborate on the findings and discuss partnering frameworks and opportunities to manage perceptions and uptake.

Keywords: agricultural innovation systems, communication strategy, diffusion of innovation, irrigated agriculture, knowledge paths, research organisations, target audiences, water use efficiency

Procedia PDF Downloads 113
9365 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human head is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was then used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
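
To make the architecture concrete, the sketch below is a minimal PyTorch 3D U-Net for voxel-wise regression of the kind described; the depth, channel counts and patch size are assumptions, and the real model would be trained on the synthetic MRI measurements described above, not on random tensors.

```python
# Minimal 3D U-Net sketch (PyTorch) for voxel-wise regression of iron concentration
# from MRI input volumes. Channel counts, depth and patch size are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, base), conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, kernel_size=1)   # voxel-wise iron concentration

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Training would minimize a voxel-wise loss (e.g. L1) over synthetic head volumes.
model, loss_fn = UNet3D(), nn.L1Loss()
x = torch.randn(1, 1, 64, 64, 64)              # one MRI patch (batch, channel, D, H, W)
print(model(x).shape)                          # -> torch.Size([1, 1, 64, 64, 64])
```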

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 137
9364 Investment and Economic Growth: An Empirical Analysis for Tanzania

Authors: Manamba Epaphra

Abstract:

This paper analyzes the causal relationships between domestic private investment, public investment, foreign direct investment and economic growth in Tanzania during the 1970-2014 period. A modified neo-classical growth model that includes control variables such as trade liberalization, life expectancy and macroeconomic stability, proxied by inflation, is used to estimate the impact of investment on economic growth. In addition, the economic growth models based on Phetsavong and Ichihashi (2012), and Le and Suruga (2005), are used to estimate the crowding-out effect of public investment on private domestic investment on the one hand and on foreign direct investment on the other. A correlation test is applied to check the correlation among independent variables, and the results show very low correlation, suggesting that multicollinearity is not a serious problem. Moreover, the diagnostic tests, including the Ramsey RESET specification test, the Breusch-Godfrey serial correlation LM test, the Jarque-Bera normality test and the White heteroskedasticity test, reveal that the model has no signs of misspecification and that the residuals are serially uncorrelated, normally distributed and homoskedastic. Generally, the empirical results show that domestic private investment plays an important role in economic growth in Tanzania. FDI also tends to affect growth positively, while control variables such as high population growth and inflation appear to harm economic growth. Results also reveal that control variables such as trade openness and life expectancy improvement tend to increase real GDP growth. Moreover, a negative, albeit weak, association between public and private investment suggests that the positive effect of domestic private investment on economic growth is reduced when the public investment-to-GDP ratio exceeds 8-10 percent. Thus, there is a great need for promoting domestic saving so as to encourage domestic investment for economic growth.
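
A hedged sketch of the estimation-and-diagnostics workflow described above is shown below, using statsmodels on a hypothetical annual dataset; the file and column names are assumptions, not the author's data.

```python
# OLS growth regression with the diagnostic tests listed in the abstract
# (Breusch-Godfrey, White, Jarque-Bera). Data file and columns are assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_white
from statsmodels.stats.stattools import jarque_bera

df = pd.read_csv("tanzania_1970_2014.csv")     # hypothetical annual data
y = df["gdp_growth"]
X = sm.add_constant(df[["private_inv", "public_inv", "fdi",
                        "trade_openness", "life_expectancy", "inflation", "pop_growth"]])

print(X.drop(columns="const").corr().round(2))          # multicollinearity check

res = sm.OLS(y, X).fit()
print(res.summary())

bg_stat, bg_pval, _, _ = acorr_breusch_godfrey(res, nlags=2)         # serial correlation
white_stat, white_pval, _, _ = het_white(res.resid, res.model.exog)  # heteroskedasticity
jb_stat, jb_pval, _, _ = jarque_bera(res.resid)                      # normality
print(f"Breusch-Godfrey p={bg_pval:.3f}, White p={white_pval:.3f}, Jarque-Bera p={jb_pval:.3f}")
```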

Keywords: FDI, public investment, domestic private investment, crowding out effect, economic growth

Procedia PDF Downloads 290
9363 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints

Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes

Abstract:

Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints, some of which are non-specific and present a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, some basic clinical tools, in addition to a thorough clinical history, can be useful to assess the risks of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. The infants deemed to have non-specific complaints or diagnoses by the emergency clinicians were selected for analysis; those with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized in these medical records. This chart was developed by expert clinicians in the local health department. It categorizes important clinical signs into colour-coded zones as a visual cue for the serious implications of some abnormalities. An infant is regarded as SPOC positive when it fulfils one red-zone or two yellow-zone criteria, and the attending clinician would be prompted to investigate and treat for potential serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. Those admitted to the hospital for further management were more likely to be SPOC positive than the discharged infants (odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, the sepsis alert criteria on the SPOC were positive in a higher percentage of patients with serious infections (56.52%) than in those with mild conditions (15.89%) (p < 0.001). The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) for identifying serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%); however, the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart has been applied as a useful clinical tool in clinical practice to help identify and manage young sick infants in the ED effectively.
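
The sensitivity, specificity, PPV and NPV figures above follow from a 2x2 table; the sketch below reproduces that arithmetic from counts back-derived (approximately) from the reported percentages, so the counts themselves are assumptions rather than the study's exact table.

```python
# 2x2 diagnostic-accuracy arithmetic behind the reported SPOC sepsis figures.
# Counts are approximate, back-derived from n = 835 and the quoted percentages.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)              # true positive rate
    spec = tn / (tn + fp)              # true negative rate
    ppv = tp / (tp + fp)               # post-test probability given a positive SPOC
    npv = tn / (tn + fn)
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    return sens, spec, ppv, npv, prevalence

tp, fn = 82, 63          # SPOC-sepsis positive / negative among serious infections
fp = 110                 # SPOC-sepsis positive among mild conditions
tn = 835 - tp - fn - fp  # SPOC-sepsis negative among mild conditions

sens, spec, ppv, npv, prev = diagnostic_metrics(tp, fp, fn, tn)
print(f"prevalence={prev:.1%} sensitivity={sens:.1%} specificity={spec:.1%} "
      f"PPV={ppv:.1%} NPV={npv:.1%}")   # reproduces ~17.4%, 56.5%, 84.1%, 42.8%, 90.2%
```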

Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart

Procedia PDF Downloads 252
9362 Transient Simulation Using SPACE for ATLAS Facility to Investigate the Effect of Heat Loss on Major Parameters

Authors: Suhib A. Abu-Seini, Kyung-Doo Kim

Abstract:

A heat loss model for the ATLAS facility was introduced using SPACE code predefined correlations and various dialing factors. All previous simulations were carried out using a heat-loss-free input: the facility was considered to be completely insulated, and the core power was reduced by the experimentally measured values of heat loss to compensate for the loss of heat. This study, in contrast, considers heat loss throughout the simulation. The new heat loss model will affect the SPACE code simulation, as heat leaking out of the system throughout a transient will alter many parameters related to temperature and temperature difference. For that purpose, a station blackout followed by a multiple steam generator tube rupture accident will be simulated using both the insulated-system approach and the newly introduced heat loss input of the steady state. Major parameters such as system temperatures, pressure values, and flow rates will be compared, and various analyses will be suggested based on them, as the experimental values will not be the reference used to validate the expected outcome. This study will not only show the significance of considering heat loss in the prevention and mitigation of various incidents, design-basis accidents and beyond-design-basis accidents, by giving a detailed account of the behavior of the ATLAS facility during both steady state and a major transient; it will also present a verification of how credible the acquired ATLAS data are, since heat loss values for the steady state were already mismatched between the SPACE simulation results and the ATLAS data acquisition system. Acknowledgement: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea.

Keywords: ATLAS, heat loss, simulation, SPACE, station blackout, steam generator tube rupture, verification

Procedia PDF Downloads 224
9361 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution

Authors: Robin Devillers, Chantal van Dijk

Abstract:

Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging for them, whereas adults can rapidly identify the antecedent of a pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within about 200 ms, adults have integrated accessibility and syntactic constraints, while relieving cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they fail to identify the correct referent and possibly fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7-10 year old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses, the latter following a particular structure: ‘Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.’ This sentence illustrates the ambiguous situation, as it is unclear whether ‘she’ refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie would be substituted by a boy, such as ‘Peter’; the pronoun in the second sentence then unambiguously refers to one of the characters due to the syntactic constraints of the pronoun. Children’s and adults’ responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other was the competitor. A sentence was presented and followed by a question, which required the participant to choose which character was the referent. This paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. These findings will contribute to the scientific literature in several ways: firstly, the use of while-clauses has not been studied much and their processing has not yet been characterised; moreover, online pronoun resolution has not been investigated much in either children or adults, so this study will contribute to the adult and child pronoun resolution literature; lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the range of languages in which it has been investigated.

Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development

Procedia PDF Downloads 100
9360 Exploring the Nexus of Gastronomic Tourism and Its Impact on Destination Image

Authors: Usha Dinakaran, Richa Ganguly

Abstract:

Gastronomic tourism has evolved into a prominent niche within the travel industry, with tourists increasingly seeking unique culinary experiences as a primary motivation for their journeys. This research explores the intricate relationship between gastronomic tourism and its profound influence on the overall image of travel destinations. It delves into the multifaceted aspects of culinary experiences, tourists' perceptions, and the preservation of cultural identity, all of which play pivotal roles in shaping a destination's image. The primary aim of this study is to comprehensively examine the interplay between gastronomy and tourism, specifically focusing on its impact on destination image. The research seeks to achieve the following objectives: (1) Investigate how tourists perceive and engage with gastronomic tourism experiences. (2) Understand the significance of food in shaping the tourism image. (3) Explore the connection between gastronomy and the destination's cultural identity. (4) Quantify the relationship between tourists' engagement in co-creation activities related to gastronomic tourism and their overall satisfaction with the quality of their culinary experiences. To achieve these objectives, a mixed-method research approach will be employed, including surveys, interviews, and content analysis. Data will be collected from tourists visiting diverse destinations known for their culinary offerings. This research anticipates uncovering valuable insights into the nexus between gastronomic tourism and destination image. It is expected to shed light on how tourists' perceptions of culinary experiences impact their overall perception of a destination. Additionally, the study aims to identify factors influencing tourist satisfaction and how cultural identity is preserved and promoted through gastronomic tourism. The findings of this research hold practical implications for destination marketers and stakeholders. Understanding the symbiotic relationship between gastronomy and tourism can guide the development of more targeted marketing strategies. Furthermore, promoting co-creation activities can enhance tourists' culinary experiences and contribute to the positive image of destinations. This study contributes to the growing body of knowledge regarding gastronomic tourism by consolidating insights from various studies and offering a comprehensive perspective on its impact on destination image. It offers a platform for future research in this domain and underscores the importance of culinary experiences in contemporary travel. In conclusion, this research endeavors to illuminate the dynamic interplay between gastronomic tourism and destination image, providing valuable insights for both academia and industry stakeholders in the field of tourism and hospitality.

Keywords: gastronomy, tourism, destination image, culinary

Procedia PDF Downloads 74
9359 Community-Based Palliative Care for Patients with Cerebral Palsy and Developmental Disabilities

Authors: Elizabeth Grier, Meg Gemmill, Mary Martin, Leora Reiter, Herman Tang, Alexandra Donaldson, Isis Lunsky, Mia Wu

Abstract:

Background: Individuals with Cerebral Palsy (CP) and/or Intellectual and Developmental Disabilities (IDD) face numerous physical and mental health challenges, including difficulty accessing effective palliative care. The aim of this study is to assess the knowledge and comfort of healthcare providers in providing community-based palliative care for patients with CP and severe to profound IDD. Methods: This study uses a mixed-methods approach obtaining both quantitative and qualitative data. Quantitative data from palliative care practitioners were obtained through an online survey assessing comfort in symptom management, grief assessment, and goals-of-care discussion. This survey was distributed to physicians and allied health practitioners across Canada through the College of Family Physicians of Canada Member Interest Groups for Palliative Care and for IDD. Survey results guided the development of a semi-structured interview template, which was used to conduct a focus group on the same topic. Participants were four palliative care providers (three physicians and one spiritual care practitioner). The focus group transcript is currently undergoing thematic analysis using NVivo 12 software. Results: 57 palliative care practitioners completed the survey. 87% of participants indicated they had provided palliative care services for persons with CP and/or IDD. Findings suggest practitioners are somewhat confident in identifying specific physical symptoms (dyspnea, pressure ulcers) but less confident in identifying physical/emotional pain, addressing grief, and prognosticating life expectancy in this population. 54% of respondents indicated they had little or no training on palliating those with CP or IDD, and 45% somewhat or strongly disagreed that members of their profession can manage symptoms for this population. Focus group analysis is underway, and results will be available at the time of the poster presentation. Conclusion: Persons with CP and IDD are more likely to experience severe health inequities when accessing palliative care. Results of this study suggest further education is needed for palliative care professionals to address the barriers and challenges in providing palliative care to this patient population.

Keywords: palliative care, symptom management, health equity, community healthcare, intellectual and developmental disabilities

Procedia PDF Downloads 142
9358 Creating an Inclusive Classroom: Country Case Studies Analysis on Mainstream Teachers’ Teaching-Efficacy and Attitudes towards Inclusive Education in Japan and Singapore

Authors: Yei Mian Adrian Yap

Abstract:

The extent to which regular schools can be made as inclusive as possible hinges on mainstream teachers’ attitudes and teaching-efficacy towards the inclusion of students with special needs in regular schools. This research studies Japanese and Singaporean mainstream teachers’ attitudes and teaching-efficacy towards the inclusion of students with special needs in regular classrooms by investigating which key variables influence their attitudes and teaching-efficacy and how they strategize to address the challenges of including students with special needs in their regular classrooms. In order to understand the nature of teachers’ attitudes and teaching-efficacy towards inclusive education, a mixed-method research methodology was carried out in Japan and Singapore; it involved an explanatory sequential design in which quantitative research was conducted before qualitative research. In the quantitative phase, 189 Japanese and 183 Singaporean teachers were invited to participate in the questionnaires, and out of these participants, 38 Japanese and 15 Singaporean teachers shared their views in semi-structured interviews. Based on the empirical findings, Japanese teachers’ attitudes and teaching-efficacy were more likely to be influenced by their experience in teaching students with special needs, knowledge about disability legislation, the presence of disabled family members and their level of confidence to teach students with special needs. On the other hand, Singaporean teachers’ attitudes and teaching-efficacy were affected by gender, educational level, training received in special needs education, knowledge about disability legislation and level of confidence to teach students with special needs. The results from both countries also demonstrated a positive correlation between teaching-efficacy and attitude. The narrative findings further expanded on the reasons behind these quantitative factors that shaped teachers’ attitudes and teaching-efficacy, and discussed the various problems faced by Japanese and Singaporean teachers and how they identified coping strategies to circumvent the challenges of including students with special needs in their regular classrooms. The significance of this research lies in its implications for necessary educational reforms in both countries, especially in the context of inclusive education. These findings may not be as definitive as expected, but it is believed that they could provide useful information on the current situation regarding teachers’ concerns about inclusive education. In conclusion, this research could make a positive contribution to the body of literature on teachers’ attitudes and teaching-efficacy in the context of Asian developed countries, and the findings suggest that regular teachers’ positive attitudes and strong sense of teaching self-efficacy could directly improve the success rate of inclusion of students with special needs in regular classrooms.

Keywords: attitudes, inclusive education, special education, teaching-efficacy

Procedia PDF Downloads 342
9357 Observation of the Effect of Yingyangbao Intervention on Infants and Young Children Aged 6 to 23 Months in Poor Rural Areas of China

Authors: Jin Li, Jing Sun, Xiangkun Cai, Lijuanwang, Yanbin Tang, Junsheng Huo

Abstract:

In order to improve the malnutrition of infants and young children in poor rural areas of China, the Chinese government implemented a project to improve children's nutrition in poor rural areas. Each infant or young child aged 6 to 23 months in selected poor rural areas of China was provided one package of Yingyangbao (YYB) per day, which is a full-fat soy powder mixed with multiple micronutrient powders. Assessing the nutritional status of infants and the feeding practices of caregivers provides technical direction for implementing this project comprehensively in poor rural areas of China. The nutritional intervention was conducted using Yingyangbao for infants aged 6 to 23 months in six poor counties of Shanxi, Yunnan and Hubei Provinces. The caregivers or parents of the infants were educated on feeding knowledge and practice. A total of 1840 infants were assessed before the intervention and 1789 infants one year later. The length, weight and hemoglobin concentration of the infants were measured to evaluate nutritional status before and after the intervention, respectively. Questionnaires were designed to collect data on basic demographic information and feeding practices. The average weight of infants aged 6 to 23 months increased from 9.59 ± 1.54 kg to 9.73 ± 1.61 kg one year later (p<0.01), and the average length from 76.0 ± 6.0 cm to 77.0 ± 6.1 cm (p<0.01). The weight and length of infants aged 12 to 17 months showed the most obvious improvement among the three age groups. Before the intervention, the hemoglobin concentration of the infants was 11.7 ± 1.2 g/dL, and the anemia prevalence was 32.9%. One year later, the hemoglobin concentration of the infants had increased to 12.0 ± 1.1 g/dL, and the anemia prevalence had decreased to 26.0%; both changes were statistically significant (p<0.01). The anemia prevalence of infants aged 18 to 23 months showed the most obvious improvement, decreasing from 25.0% to 17.2% (p<0.01). The proportion of infants aged 6 to 8 months who received solid, semi-solid or soft foods in time increased from 89.4% to 91.6%, although this change was not statistically significant. The proportion of 6-23 month-old infants who received minimum dietary diversity increased from 55.6% to 60.3% (p<0.01). The difference in the proportion of infants who received minimum meal frequency before and after the intervention was not statistically significant. The nutritional intervention using Yingyangbao showed a significant effect in improving the anemia status, weight and length of infants aged 6 to 23 months. Feeding practices were improved through education in the process of the nutritional intervention, although the effect was not significant. The Chinese government needs to explore new publicity patterns.
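
The kind of before/after prevalence comparison reported above (anemia falling from 32.9% of 1840 infants to 26.0% of 1789) can be checked with a two-proportion z-test, sketched below. The authors' exact statistical test is not stated, so this is a generic illustration, not a replication.

```python
# Two-proportion z-test for the reported drop in anemia prevalence.
# The study's actual test is not specified; this is an illustrative check only.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))              # two-sided p-value

x_before = round(0.329 * 1840)                 # ~605 anemic infants at baseline (derived)
x_after = round(0.260 * 1789)                  # ~465 anemic infants after one year (derived)
z, p = two_proportion_ztest(x_before, 1840, x_after, 1789)
print(f"z = {z:.2f}, p = {p:.2g}")             # p << 0.01, consistent with the abstract
```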

Keywords: nutritional intervention, infants, nutritional status, feeding practice

Procedia PDF Downloads 444
9356 Building Information Management Advantages, Adaptation, and Challenges of Implementation in Kabul Metropolitan Area

Authors: Mohammad Rahim Rahimi, Yuji Hoshino

Abstract:

Building Information Management (BIM) has in recent years received widespread consideration in the Architecture, Engineering and Construction (AEC) industry. BIM has brought innovation to the AEC industry and has the ability to improve construction with higher quality and reduced project time and budget. BIM supports both models and processes in the AEC industry; the processes include, but are not limited to, the project life cycle, estimating, delivery and, generally, the way a project is managed. This research examines the advantages of BIM and the adaptation and challenges of its implementation in the Kabul region. The Capital Region Independence Development Authority (CRIDA) is responsible for implementing development projects in the Kabul region. The study method considers the advantages of, and reasons for, BIM adoption in Afghanistan, based on an online survey and project data. In addition, five projects were studied; they were selected because of their many design revisions and changes. Most of the projects had problems in the design and implementation stages, so a canal project was discussed in detail together with the main reasons for its problems, which were repeated changes and revisions due to a lack of information, planning, and management. In addition, two projects based on BIM utilization in Japan were also discussed: the Shinsuizenji Station and Oita River dam projects, which were implemented, and are being implemented, according to BIM requirements. The investigation focused on BIM usage and the project implementation process. Finally, the Japanese projects were compared with the CRIDA projects with regard to BIM utilization. The comparison focuses on the use of the model and the way problems are solved based on BIM. In conclusion, BIM has the capacity to prevent repeated design changes and revisions. Achieving those objectives requires a focus on data management and sharing, BIM training and the use of new technology.

Keywords: construction information management, implementation and adaptation of BIM, project management, developing countries

Procedia PDF Downloads 129
9355 Nanoparticles Activated Inflammasome Lead to Airway Hyperresponsiveness and Inflammation in a Mouse Model of Asthma

Authors: Pureun-Haneul Lee, Byeong-Gon Kim, Sun-Hye Lee, An-Soo Jang

Abstract:

Background: Nanoparticles may pose adverse health effects due to particulate matter inhalation. Nanoparticle exposure induces cell and tissue damage, causing local and systemic inflammatory responses. The inflammasome is a major regulator of inflammation through its activation of pro-caspase-1, which cleaves pro-interleukin-1β (IL-1β) into its mature form and may signal acute and chronic immune responses to nanoparticles. Objective: The aim of the study was to identify whether nanoparticles exaggerate the inflammasome pathway, leading to airway inflammation and hyperresponsiveness in an allergic mouse model of asthma. Methods: Mice were treated with saline (sham), OVA-sensitized and challenged (OVA), or titanium dioxide nanoparticles. Lung interleukin 1 beta (IL-1β), interleukin 18 (IL-18), NACHT, LRR and PYD domains-containing protein 3 (NLRP3) and caspase-1 levels were assessed by Western blot. Caspase-1 was also examined by immunohistochemical staining. Reactive oxygen species were measured via the markers 8-isoprostane and carbonyl by ELISA. Results: Airway inflammation and hyperresponsiveness increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. TiO2 nanoparticle treatment increased IL-1β and IL-18 protein expression in OVA-sensitized/challenged mice. TiO2 nanoparticles augmented the expression of NLRP3 and caspase-1, leading to the formation of active caspase-1 in the lung. Lung caspase-1 expression was increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. Reactive oxygen species were increased in OVA-sensitized/challenged mice and in OVA-sensitized/challenged plus TiO2-exposed mice. Conclusion: Our data demonstrate that the inflammasome pathway is activated in asthmatic lungs following nanoparticle exposure, suggesting that targeting the inflammasome may help control nanoparticle-induced airway inflammation and responsiveness.

Keywords: bronchial asthma, inflammation, inflammasome, nanoparticles

Procedia PDF Downloads 375
9354 The Feasibility of Glycerol Steam Reforming in an Industrial Sized Fixed Bed Reactor Using Computational Fluid Dynamic (CFD) Simulations

Authors: Mahendra Singh, Narasimhareddy Ravuru

Abstract:

For the past decade, the production of biodiesel has significantly increased, along with that of its by-product, glycerol. The massive entry of biodiesel-derived glycerol into the glycerol market has caused its value to plummet. Newer ways to utilize the glycerol by-product must be implemented or the biodiesel industry will face serious economic problems. The biodiesel industry should consider steam reforming glycerol to produce hydrogen gas. Steam reforming is the most efficient way of producing hydrogen, and there is substantial demand for it in the petroleum and chemical industries. This study investigates the feasibility of glycerol steam reforming in an industrial-sized fixed bed reactor. In this paper, using computational fluid dynamic (CFD) simulations, the extent of the transport resistances that would occur in an industrial-sized reactor can be visualized. An important parameter in reactor design is the size of the catalyst particle. The catalyst particle cannot be so large that transport resistances become too high, but also not so small that an extraordinary pressure drop occurs. The goal of this paper is to find the catalyst size, under various flow rates, that results in the highest conversion. Computational fluid dynamics simulations captured the transport resistances, and a pseudo-homogeneous reactor model was used to evaluate the pressure drop and conversion. CFD simulations showed that glycerol steam reforming has strong internal diffusion resistances, resulting in extremely low effectiveness factors. In the pseudo-homogeneous reactor model, the highest conversion obtained with a Reynolds number of 100 (29.5 kg/h) was 9.14%, using a 1/6 inch catalyst diameter. Due to the low effectiveness factors and high carbon deposition rates, a fluidized bed is recommended as the appropriate reactor for carrying out glycerol steam reforming.
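
The two competing effects discussed above when choosing catalyst size (internal diffusion resistance versus bed pressure drop) can be illustrated with the classical effectiveness-factor and Ergun relations, sketched below. All property values are illustrative assumptions, not the study's CFD inputs.

```python
# Effectiveness factor (first-order reaction in a spherical pellet) vs. Ergun
# pressure drop as functions of catalyst particle size. Property values are assumed.
import numpy as np

def effectiveness_factor(d_p, k_v, D_eff):
    """First-order reaction in a sphere: eta = (3/phi^2) * (phi*coth(phi) - 1)."""
    R = d_p / 2.0
    phi = R * np.sqrt(k_v / D_eff)                     # Thiele modulus
    return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

def ergun_dp_per_length(d_p, u, rho, mu, eps):
    """Pressure drop per unit bed length, Pa/m (Ergun equation)."""
    visc = 150.0 * mu * (1 - eps) ** 2 * u / (eps**3 * d_p**2)
    inert = 1.75 * (1 - eps) * rho * u**2 / (eps**3 * d_p)
    return visc + inert

# Illustrative (assumed) values: rate constant, effective diffusivity,
# superficial velocity, gas density/viscosity, bed voidage.
k_v, D_eff = 5.0, 1e-6          # 1/s, m^2/s
u, rho, mu, eps = 0.5, 0.6, 2.5e-5, 0.4

for d_p in (1 / 6 * 0.0254, 0.005, 0.010):             # 1/6 inch and two other sizes, m
    eta = effectiveness_factor(d_p, k_v, D_eff)
    dp_l = ergun_dp_per_length(d_p, u, rho, mu, eps)
    print(f"dp = {d_p * 1000:5.2f} mm  eta = {eta:.3f}  dP/L = {dp_l / 1000:7.1f} kPa/m")
```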

Keywords: computational fluid dynamic, fixed bed reactor, glycerol, steam reforming, biodiesel

Procedia PDF Downloads 308
9353 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons like antimicrobial peptides to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared by following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 day old M. tb H37Rv was grown in potassium-deficient Sauton media for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration. The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2μg/ml and 20μg/ml respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1μg/ml and 5μg/ml respectively. The Nile red positive bacterial population, high MPN/low CFU count and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8μg/ml) showed 96% and 99% killing, and Pep B (40μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid fast bacilli surrounded by cellular aggregates and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicated that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb. They should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 263
9352 Predicting Long-Term Performance of Concrete under Sulfate Attack

Authors: Elakneswaran Yogarajah, Toyoharu Nawa, Eiji Owaki

Abstract:

Cement-based materials have been used in various reinforced concrete structural components as well as in nuclear waste repositories. Sulfate attack has been an environmental issue for cement-based materials exposed to sulfate-bearing groundwater or soils, and it plays an important role in the durability of concrete structures. The reaction between penetrating sulfate ions and cement hydrates can result in swelling, spalling and cracking of the cement matrix in concrete. These processes induce a reduction of mechanical properties and a decrease in the service life of an affected structure. It has been identified that the precipitation of secondary sulfate-bearing phases such as ettringite, gypsum, and thaumasite can cause the damage. Furthermore, crystallization of soluble salts such as sodium sulfate induces degradation due to crystal formation and phase changes. Crystallization of mirabilite (Na₂SO₄:10H₂O) and thenardite (Na₂SO₄), or their phase changes (mirabilite to thenardite or vice versa) due to temperature or sodium sulfate concentration, do not involve any chemical interaction with cement hydrates. Over the past couple of decades, intensive work has been carried out on sulfate attack in cement-based materials. However, several uncertainties still exist regarding the mechanism of damage of concrete in sulfate environments. In this study, modelling work has been conducted to investigate the chemical degradation of cementitious materials in various sulfate environments. Both internal and external sulfate attack are considered in the simulation. For the internal sulfate attack, the hydrate assemblage and pore solution chemistry of co-hydrating Portland cement (PC) and slag mixed with sodium sulfate solution are calculated to determine the degradation of the PC and slag-blended cementitious materials. Pitzer interaction coefficients were used to calculate the activity coefficients of the solution chemistry at high ionic strength. The deterioration mechanism of co-hydrating cementitious materials with 25% of Na₂SO₄ by weight is the formation of mirabilite crystals and ettringite. Their formation strongly depends on sodium sulfate concentration and temperature. For the external sulfate attack, the deterioration of various types of cementitious materials under external sulfate ingress is simulated through a reactive transport model. The reactive transport model is verified with experimental data in terms of the phase assemblage of various cementitious materials, with spatial distribution, for different sulfate solutions. Finally, the reactive transport model is used to predict the long-term performance of cementitious materials exposed to 10% Na₂SO₄ for 1000 years. The dissolution of cement hydrates and the secondary formation of sulfate-bearing products, mainly ettringite, are the dominant degradation mechanisms, but not sodium sulfate crystallization.

Keywords: thermodynamic calculations, reactive transport, radioactive waste disposal, PHREEQC

Procedia PDF Downloads 163
9351 Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand

Authors: Saeed Poormoaied

Abstract:

Perishability and developing an intelligent control policy for perishable items are major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy which works by taking both the stock level and the aging process of items into account. The vendor works under the warehouse framework, where its lot size is determined with respect to the batch size of the buyer. The model holds for a positive and fixed lead time for the buyer, and zero lead time for the vendor. The demand follows a Poisson process and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system by using the renewal reward theorem. Items have a fixed lifetime after which they become unusable and are disposed of from the buyer's system. The age of items starts when they are unpacked and ready for consumption at the buyer. When items are held by the vendor, there is no aging process, which results in no perishing at the vendor's site. The model is developed under the centralized framework, which takes the expected profit of both vendor and buyer into consideration. The goal is to determine the optimal policy parameters under the service level constraint at the retailer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain. The efficiency of the proposed age-based policy is also evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved.
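
A simplified Monte Carlo sketch of the buyer's echelon described above is given below: Poisson demand, a fixed lead time, lost sales and a fixed item lifetime. A plain (Q, r) reorder rule with oldest-first issuing stands in for the paper's age-based continuous review policy, so this illustrates the model ingredients rather than the exact policy or its renewal-reward analysis; all parameter values are assumptions.

```python
# Simplified simulation of the buyer's echelon: Poisson demand, fixed lead time,
# lost sales, units that perish at a fixed age. A (Q, r) rule with FIFO issuing
# approximates the age-based continuous review policy. Parameters are assumed.
import numpy as np

def simulate(days=20000, lam=4.0, Q=40, r=20, lead_time=3, lifetime=10, seed=1):
    rng = np.random.default_rng(seed)
    stock = []                      # remaining shelf life of each on-hand unit
    pipeline = []                   # (arrival_day, quantity) of outstanding orders
    lost = perished = demand_total = 0
    for day in range(days):
        # 1) receive any order arriving today
        arriving = sum(q for due, q in pipeline if due == day)
        stock.extend([lifetime] * arriving)
        pipeline = [(due, q) for due, q in pipeline if due > day]
        # 2) satisfy Poisson demand, issuing the oldest units first; excess is lost
        d = int(rng.poisson(lam))
        demand_total += d
        stock.sort()                                   # oldest (smallest shelf life) first
        served = min(d, len(stock))
        stock = stock[served:]
        lost += d - served
        # 3) age the remaining units; units reaching zero shelf life perish
        stock = [s - 1 for s in stock]
        perished += sum(1 for s in stock if s <= 0)
        stock = [s for s in stock if s > 0]
        # 4) continuous review: order Q when inventory position drops to r or below
        position = len(stock) + sum(q for _, q in pipeline)
        if position <= r:
            pipeline.append((day + lead_time, Q))
    return {"fill_rate": 1 - lost / demand_total, "perished_per_day": perished / days}

print(simulate())
```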

Keywords: two-echelon supply chain, perishable items, age-based policy, renewal reward theorem

Procedia PDF Downloads 144
9350 A Study on the Correlation Analysis between the Pre-Sale Competition Rate and the Apartment Unit Plan Factor through Machine Learning

Authors: Seongjun Kim, Jinwooung Kim, Sung-Ah Kim

Abstract:

The development of information and communication technology also affects human cognition and thinking; especially in the field of design, new techniques are being tried. In architecture, new design methodologies such as machine learning or data-driven design are being applied. In particular, these methodologies are used in analyzing the factors related to the value of real estate or analyzing feasibility in the early planning stage of apartment housing. However, since the value of apartment buildings is often determined by external factors such as location and traffic conditions rather than the interior elements of the buildings, data are rarely used in the design process. Therefore, although the technical conditions are available, it is difficult to apply data-driven design to the internal elements of the apartment during the design process. As a result, the designers of apartment housing have been forced to rely on designer experience or modular design alternatives rather than data-driven design at the design stage, resulting in a uniform arrangement of space in the apartment house. The purpose of this study is to propose a methodology that supports designers in designing apartment unit plans with high consumer preference by deriving the correlation and importance of the floor plan elements of the apartments preferred by consumers through machine learning and reflecting this information in the early design process. Data on the pre-sale competition rate and the elements of the floor plan are collected, and the correlation between the pre-sale competition rate and the independent variables is analyzed through machine learning. This analytical model can be used to review the apartment unit plan produced by the designer and to assist the designer. Therefore, it is possible to produce an apartment floor plan with high preference, because the trained model can provide feedback on the unit plan when it is used in the floor plan design of apartment housing.
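
A hedged sketch of this analysis pipeline is shown below: relate unit-plan attributes to the pre-sale competition rate and rank their importance. The dataset, its column names and the choice of a random forest regressor are assumptions for illustration, not the authors' exact model.

```python
# Illustrative pipeline: predict the pre-sale competition rate from unit-plan
# attributes and rank feature importance. File, columns and model are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("unit_plans.csv")            # hypothetical dataset of unit plans
features = ["num_bays", "num_bedrooms", "balcony_area", "living_room_depth",
            "kitchen_layout_type", "bathroom_count", "storage_area"]
X = pd.get_dummies(df[features])              # encode categorical plan attributes
y = df["presale_competition_rate"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("Hold-out R2:", round(r2_score(y_te, model.predict(X_te)), 3))
importance = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance.head(10))                    # which plan elements matter most
```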

Keywords: apartment unit plan, data-driven design, design methodology, machine learning

Procedia PDF Downloads 268
9349 Adoption of Climate-Smart Agriculture Practices Among Farmers and Its Effect on Crop Revenue in Ethiopia

Authors: Fikiru Temesgen Gelata

Abstract:

Food security, adaptation, and climate change mitigation are all problems that can be addressed simultaneously with Climate-Smart Agriculture (CSA). This study examines determinants of climate-smart agriculture (CSA) practices among smallholder farmers, aiming to understand the factors guiding adoption decisions and to evaluate the impact of CSA on smallholder farmer income in the study areas. For this study, three-stage sampling techniques were applied to select 230 smallholders randomly. The Mann-Kendall test and a multinomial endogenous switching regression model were used to analyze trends of decrease or increase within long-term temporal data and the impact of CSA on smallholder farmer income, respectively. Findings revealed that education level, household size, land ownership, off-farm income, climate information, and contact with extension agents were associated with highly adopted CSA practices. On the contrary, erosion exerted a detrimental impact on all the agricultural practices examined within the study region. Various factors such as farming methods, the size of farms, proximity to irrigated farmlands, availability of extension services, distance to market hubs, and access to weather forecasts were recognized as key determinants influencing the adoption of CSA practices. The multinomial endogenous switching regression model (MESR) revealed that joint adoption of crop rotation and soil and water conservation practices significantly increased farm income by 1,107,245 ETB. The study recommends that counties and governments should prioritize addressing climate change in their development agendas to increase the adoption of climate-smart farming techniques.
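
For reference, the Mann-Kendall trend test named above can be implemented directly from its standard definition, as sketched below on a hypothetical annual series; the data are placeholders and the implementation omits tie corrections for brevity.

```python
# Hand-rolled Mann-Kendall trend test (no tie correction) on illustrative data.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * norm.sf(abs(z))                           # two-sided p-value
    trend = "increasing" if z > 0 else "decreasing" if z < 0 else "no trend"
    return s, z, p, trend

rainfall = [812, 795, 803, 780, 768, 790, 755, 749, 760, 741, 730, 728]  # mm/yr, illustrative
print(mann_kendall(rainfall))
```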

Keywords: climate-smart practices, food security, income, MESR, Ethiopia

Procedia PDF Downloads 38
9348 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that reproduce real turbulence structures using empirical and experimental information. Monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows; it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work this approach has been used, and the action of the SGS model is included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD OpenFOAM libraries. The role of the flux-limiter schemes, namely Gamma, superBee, van Albada and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established direct numerical simulation (DNS) results for wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the superBee scheme reproduces the velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux-limiter schemes in the OpenFOAM context.
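
For readers unfamiliar with the limiters compared here, the sketch below evaluates the standard limiter functions ψ(r) on the smoothness ratio r; this is not the OpenFOAM implementation, only the textbook formulas. SuperBee hugs the upper boundary of the TVD region (least diffusive), which is consistent with the abstract's finding that it best matches the DNS data.

```python
# Standard TVD flux-limiter functions psi(r), evaluated on the smoothness ratio r.
import numpy as np

def superbee(r):
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def van_albada(r):
    return np.where(r > 0.0, (r * r + r) / (r * r + 1.0), 0.0)

r = np.linspace(-1.0, 4.0, 11)
for name, psi in [("superBee", superbee), ("van Leer", van_leer),
                  ("van Albada", van_albada)]:
    print(f"{name:>10}:", np.round(psi(r), 3))
```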

Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics

Procedia PDF Downloads 190
9347 Numerical Prediction of Crack Width of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been used to predict cracking of concrete structures under loading, and finite element analysis is an alternative that gives good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of dapped-end design; cracks that exceed the allowable widths are unacceptable in environments that are aggressive to the reinforcing steel. To simulate the crack width, the discrete crack approach was adopted by means of a cohesive zone model (CZM), using a function to represent the crack opening. Two dapped-end specimens were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first specimen uses a reinforcement scheme based on hangers together with vertical and horizontal stirrups; in the second specimen, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built in ANSYS v. 16.2. The concrete was modelled with three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression, and a Drucker-Prager yield surface was used to include plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modelled with traditional fracture mechanics methods, namely the nodal release technique, adopting softening relationships between tractions and separations, which introduce a critical fracture energy equal to the energy required to break apart the interface surfaces; this technique is the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode-I-dominated bilinear CZM assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. The crack opening was characterised by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the crack re-entrant corner. To validate the proposed approach, the numerical results were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed the load-crack width curves to be derived. In both cases the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding crack width, the load-displacement curve and the location of the crack can be considered good from a practical point of view.
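
A minimal sketch of the Mode-I bilinear traction-separation law behind the CZM described above is given below, parameterised by the maximum normal contact stress and the contact gap at completion of debonding. The numerical values are illustrative only, not those used in the ANSYS models.

```python
# Mode-I bilinear cohesive zone law: normal traction as a function of normal gap.
def bilinear_czm_traction(gap, sigma_max=3.0, gap_at_sigma_max=0.01,
                          gap_at_debond=0.15):
    """Return normal traction (MPa) for a normal gap (mm) across the crack faces."""
    if gap <= 0.0:
        return 0.0                      # faces in contact / no opening
    if gap < gap_at_sigma_max:          # linear elastic rise up to sigma_max
        return sigma_max * gap / gap_at_sigma_max
    if gap < gap_at_debond:             # linear softening branch
        return sigma_max * (gap_at_debond - gap) / (gap_at_debond - gap_at_sigma_max)
    return 0.0                          # fully debonded: traction-free crack

for g in (0.0, 0.005, 0.01, 0.05, 0.1, 0.2):
    print(f"gap = {g:5.3f} mm -> traction = {bilinear_czm_traction(g):5.2f} MPa")
# The area under the traction-separation curve is the critical fracture energy
# mentioned in the abstract.
```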

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 167
9346 How Message Framing and Temporal Distance Affect Word of Mouth

Authors: Camille Lacan, Pierre Desmet

Abstract:

In the crowdfunding model, a campaign succeeds by collecting the required funds over a predefined duration. The success of a crowdfunding campaign depends both on the capacity to attract members of the online communities concerned and on those members' involvement in online word-of-mouth recommendations. To maximize the probability of success, project creators (i.e., organizations appealing for financial resources) send messages asking contributors to spread word of mouth. Internet users relay information about projects through word of mouth, defined as "a critical tool for facilitating information diffusion throughout online communities". The effectiveness of these messages depends on the message framing and on the time at which they are sent to contributors (i.e., at the start of the campaign or close to the deadline). This article addresses the following question: what are the effects of message framing and temporal distance on the willingness to share word of mouth? Drawing on Prospect Theory and Construal Level Theory, this study examines the interplay between message framing (gains vs. losses) and temporal distance (message sent close to vs. far from the deadline) on the intention to share word of mouth. A between-subjects experimental design was conducted to test the research model. Results show significant differences between a loss-framed message (lack of benefits if the campaign fails) associated with a short deadline (ending tomorrow) and a gain-framed message (benefits if the campaign succeeds) associated with a distant deadline (ending in three months). However, this effect is moderated by the anticipated regret of a campaign failure and by temporal orientation. These moderating effects help specify the boundary conditions of the framing effect. Handling the message framing and the temporal distance are thus key decisions for influencing the willingness to share word of mouth.
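
The abstract does not report its analysis procedure, so the following is only a hedged sketch of how a 2 (framing) x 2 (temporal distance) between-subjects design is commonly tested, using simulated data and an assumed interaction pattern.

```python
# Illustrative analysis for a 2 (framing: gain/loss) x 2 (deadline: near/far)
# between-subjects design on word-of-mouth intention; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_per_cell = 60
rows = []
for framing in ("gain", "loss"):
    for deadline in ("near", "far"):
        base = 4.0
        if framing == "loss" and deadline == "near":
            base += 0.8   # assumed advantage of loss frame + near deadline
        rows.append(pd.DataFrame({
            "framing": framing, "deadline": deadline,
            "wom_intention": rng.normal(base, 1.0, n_per_cell)}))
df = pd.concat(rows, ignore_index=True)

model = smf.ols("wom_intention ~ C(framing) * C(deadline)", data=df).fit()
print(model.summary().tables[1])  # interaction term tests the framing x distance effect
```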

Keywords: construal levels, crowdfunding, message framing, word of mouth

Procedia PDF Downloads 252
9345 Utilization of Cervical Cancer Screening Among HIV Infected Women in Nairobi, Kenya

Authors: E. Njuguna, S. Ilovi, P. Muiruri, K. Mutai, J. Kinuthia, P. Njoroge

Abstract:

Introduction: Cervical cancer is the commonest cause of cancer-related morbidity and mortality among women in developing countries in Sub-Saharan Africa. Screening for cervical cancer in all women, regardless of HIV status, is crucial for detecting cancer of the cervix early, when treatment is most effective at curing the disease. It is particularly important to screen HIV-infected women, as they are more at risk of developing the disease and of progressing faster once infected with human papilloma virus (HPV). We aimed to determine the factors affecting the utilization of cervical cancer screening among HIV-infected women above 18 years of age at the Kenyatta National Hospital (KNH) Comprehensive Care Center (CCC). Materials and Methods: A cross-sectional mixed quantitative and qualitative study was conducted among HIV-positive women selected randomly for the quantitative arm and purposively for the qualitative arm. Qualitative data collection involved four focus group discussions with eligible participants, while quantitative data were acquired through one-to-one interviewer-administered structured questionnaires. The outcome variable was the utilization of cervical cancer screening. Data were entered into an Access database and analyzed using Stata version 11.1. Qualitative data were analyzed by coding significant clauses and transcribing to determine emerging themes. Results: We enrolled a total of 387 patients with a mean age (IQR) of 40 years (36-44). Cervical cancer screening utilization was 46% despite a health care provider recommendation rate of 85%. Screening results were reported as normal in 72 of 81 (88.9%) and abnormal in 7 of 81 (8.6%) of the cases; 2 of 81 (2.5%) did not know their result. Patients were less likely to utilize the service with increasing number of years attending the clinic (OR 0.9, 95% CI 0.86-0.99, p = 0.02), but more likely to utilize it if a recommendation by a staff member was made (OR 10, 95% CI 4.2-23.9, p < 0.001) and if cervical screening had been done before joining KNH CCC (OR 2.9, 95% CI 1.7-4.9, p < 0.001). Similarly, they were more likely to rate the cervical cancer screening services as good (OR 5.0, 95% CI 1.7-3.4, p < 0.001) or very good (OR 8.1, 95% CI 2.5-6.1, p < 0.001) if they had utilized the service. The main barrier themes emerging from the qualitative data included fear of screening due to excessive pain or bleeding, lack of proper communication on screening procedures, and increased waiting time. Conclusions: Utilization of cervical cancer screening services was low despite health care provider recommendations. Patient socio-demographic characteristics did not influence whether or not they utilized the services, indicating the important role of the health care provider in the referral and provision of the service.
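
Odds ratios of the kind reported above are typically obtained from a logistic regression of utilisation on the candidate predictors. The sketch below shows that workflow on simulated data (not the study dataset); the variable names and effect sizes are assumptions.

```python
# Hedged sketch: logistic regression yielding odds ratios for screening utilisation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 387
df = pd.DataFrame({
    "years_in_clinic": rng.integers(0, 11, n),     # years attending the clinic
    "staff_recommended": rng.binomial(1, 0.85, n), # provider recommendation (0/1)
    "screened_before": rng.binomial(1, 0.3, n),    # screened before joining (0/1)
})
logit_p = (-1.5 - 0.1 * df.years_in_clinic
           + 2.3 * df.staff_recommended + 1.1 * df.screened_before)
df["utilised"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

res = smf.logit("utilised ~ years_in_clinic + staff_recommended + screened_before",
                data=df).fit(disp=False)
ci = res.conf_int()
print(pd.DataFrame({"OR": np.exp(res.params),
                    "2.5%": np.exp(ci[0]),
                    "97.5%": np.exp(ci[1])}).round(2))
```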

Keywords: cervical, cancer, HIV, women, comprehensive care center

Procedia PDF Downloads 275
9344 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and the demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through process intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous phenomena-based process alternatives and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them; e.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and screening is carried out to discard the meaningless ones; for example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction or reaction with separation. The combinations are then allotted to the functions needed for the process, which creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the highest product yield. The methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined, and it can be applied to any chemical or biochemical process because of its generic nature.
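
The generate-and-screen step described above is essentially combinatorial enumeration followed by feasibility rules. The toy sketch below illustrates that step; the phenomena names and screening rules are illustrative placeholders, not the authors' knowledge base.

```python
# Toy sketch: enumerate phenomena combinations and screen out infeasible ones,
# e.g. a phase change without a co-present energy transfer phenomenon.
from itertools import combinations

phenomena = ["reaction", "vapour_liquid_equilibrium", "liquid_liquid_equilibrium",
             "phase_change", "energy_transfer", "mixing"]

def is_feasible(combo):
    combo = set(combo)
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False   # phase change requires energy transfer (rule from the abstract)
    if {"vapour_liquid_equilibrium", "liquid_liquid_equilibrium"} <= combo:
        return False   # assumed mutually exclusive in this toy rule set
    return True

options = [c for size in range(2, len(phenomena) + 1)
           for c in combinations(phenomena, size) if is_feasible(c)]
print(len(options), "feasible phenomena combinations, e.g.:")
for c in options[:5]:
    print("  ", c)
# Each feasible combination would then be encoded as a binary vector
# (1 = phenomenon active) and passed to the model-generation step.
```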

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 232
9343 Looking beyond Corporate Social Responsibility to Sustainable Development: Conceptualisation and Theoretical Exploration

Authors: Mercy E. Makpor

Abstract:

The traditional idea of Corporate Social Responsibility (CSR) has gone beyond merely ensuring safe environments, caring about global warming, and securing good living standards and conditions for society at large. The paradigm shift is towards a focus on strategic objectives and long-term value creation for both businesses and society for a realistic future. CSR has been accepted globally as an important approach to solving social and environmental issues, yet it is expected to go beyond where it currently stands: much is expected from businesses and governments at every level, globally and locally. This leads back to the original idea of the concept, that is, how it originated and how it has been perceived over the years. It is little wonder that many definitions surround the concept without any single globally accepted one; the definition given by the European Commission is adopted for the purpose of this paper. Sustainable Development (SD), on the other hand, has in recent years been viewed as an ethical concept explained in the UN report "Our Common Future," also referred to as the Brundtland report, which summarises the need for SD to take place in the present without compromising the future. The recent 21st-century sustainability framework known as the Triple Bottom Line (TBL) has added its voice to the concepts of CSR and sustainable development. The TBL model holds that businesses should report not only on their financial performance but also on their social and environmental performance, highlighting that CSR has moved beyond a purely "material-impact" approach towards a "future-oriented" approach (sustainability). In this paper, the concept of CSR is revisited by exploring its various theories. The discourse on sustainable development and sustainable development frameworks is also presented, showing how CSR can benefit businesses, their stakeholders and society as a whole, not just for the present but for the future. The paper does this by exploring the importance of both concepts (CSR and SD) and concludes by making recommendations for more empirical research in the near future.

Keywords: corporate social responsibility, sustainable development, sustainability, triple bottom line model

Procedia PDF Downloads 245
9342 Bioavailability of Zinc to Wheat Grown in the Calcareous Soils of Iraqi Kurdistan

Authors: Muhammed Saeed Rasheed

Abstract:

Knowledge of the zinc and phytic acid (PA) concentrations of staple cereal crops is essential when evaluating the nutritional health of national and regional populations. In the present study, a total of 120 farmers' fields in Iraqi Kurdistan were surveyed for zinc status in soil and wheat grain samples; wheat is the staple carbohydrate source in the region. Soils were analysed for total concentrations of phosphorus (PT) and zinc (ZnT), available P (POlsen) and Zn (ZnDTPA), and for pH. Average values (mg kg⁻¹) ranged between 403-3740 (PT), 42.0-203 (ZnT), 2.13-28.1 (POlsen) and 0.14-5.23 (ZnDTPA); pH was in the range 7.46-8.67. The concentrations of Zn, the PA/Zn molar ratio and the estimated Zn bioavailability were also determined in wheat grain. The ranges of Zn and PA concentrations (mg kg⁻¹) were 12.3-63.2 and 5400-9300, respectively, giving a PA/Zn molar ratio of 15.7-30.6. A trivariate model was used to estimate the intake of bioaccessible Zn, employing the following parameter values: (i) maximum Zn absorption = 0.09 (AMAX), (ii) equilibrium dissociation constant of the zinc-receptor binding reaction = 0.680 (KP), and (iii) equilibrium dissociation constant of the Zn-PA binding reaction = 0.033 (KR). In the model, total daily absorbed Zn (TAZ) (mg d⁻¹) was estimated as a function of total daily nutritional PA (mmol d⁻¹) and total daily nutritional Zn (mmol d⁻¹), assuming an average wheat flour consumption of 300 g d⁻¹ in the region. Consideration of the PA and Zn intakes suggests that only 21.5 ± 2.9% of grain Zn is bioavailable, so that the effective Zn intake from wheat is only 1.84-2.63 mg d⁻¹ for the local population. The overall results suggest that available dietary Zn is below recommended levels (11 mg d⁻¹), partly due to low uptake by wheat but also due to the presence of large concentrations of PA in wheat grain. A crop breeding programme combined with enhanced agronomic management methods is needed to enhance both Zn uptake and Zn bioavailability in the grain of cultivated wheat types.
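
The abstract names the parameters but not the functional form; the saturable-binding form of the Miller et al. trivariate model is assumed in the sketch below, with the quoted AMAX, KP and KR values (all taken here to be in mmol d⁻¹). The example intake values are illustrative.

```python
# Sketch of a trivariate Zn absorption model (Miller et al. form assumed),
# with AMAX = 0.09, KP = 0.68 and KR = 0.033 (mmol d^-1) as quoted above.
import math

AMAX, KP, KR = 0.09, 0.680, 0.033   # mmol d^-1

def total_absorbed_zn(tdz_mmol, tdp_mmol):
    """Total daily absorbed Zn (mmol d^-1) from daily Zn and phytic acid intakes."""
    b = AMAX + tdz_mmol + KR * (1.0 + tdp_mmol / KP)
    return 0.5 * (b - math.sqrt(b * b - 4.0 * AMAX * tdz_mmol))

# Example: 300 g flour d^-1 at 30 mg Zn kg^-1 and 7000 mg PA kg^-1 (illustrative)
tdz = 0.3 * 30 / 65.38     # mg d^-1 -> mmol d^-1 (Zn molar mass ~65.38 g mol^-1)
tdp = 0.3 * 7000 / 660.0   # PA molar mass ~660 g mol^-1
taz = total_absorbed_zn(tdz, tdp)
print(f"absorbed Zn ~ {taz * 65.38:.2f} mg d^-1 "
      f"({100 * taz / tdz:.0f}% of the {tdz * 65.38:.1f} mg ingested)")
```

With these illustrative inputs the model returns roughly 2 mg d⁻¹ absorbed, i.e. a bioavailable fraction of the same order as the 21.5% reported in the abstract.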

Keywords: phosphorus, zinc, phytic acid, phytic acid to zinc molar ratio, zinc bioavailability

Procedia PDF Downloads 123
9341 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by an electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the addition of metal salts, polymers and polyelectrolytes for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as 'sacrificial electrodes'. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental to the reactions that take place at the electrode boundary layers and to the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the total hydraulic head loss and the flow velocity and pressure distributions in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by wall shear friction and by accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section configurations and one multiple concentric annular cross section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may affect the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors to be integrated into new or existing water treatment plants.
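
As a minimal sketch of the kind of head-loss estimate being compared against the laboratory measurements, the code below applies the hydraulic-diameter approximation to a single concentric annular section, combining Darcy-Weisbach friction loss with lumped minor losses. The geometry, roughness and loss coefficients are illustrative, not the reactor's actual values, and the circular-pipe friction laws are used only as an approximation for the annulus.

```python
# Hydraulic-diameter sketch: friction + minor head losses in a concentric annulus.
import math

def annulus_head_loss(q_m3s, d_outer, d_inner, length, k_minor_sum,
                      roughness=1.5e-6, nu=1.0e-6, g=9.81):
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)   # annular flow area, m^2
    d_h = d_outer - d_inner                            # hydraulic diameter, m
    v = q_m3s / area                                   # mean velocity, m/s
    re = v * d_h / nu                                  # Reynolds number
    if re < 2300.0:
        f = 64.0 / re                                  # circular-pipe laminar law (approx.)
    else:
        f = 0.25 / math.log10(roughness / (3.7 * d_h) + 5.74 / re**0.9) ** 2  # Swamee-Jain
    h_friction = f * (length / d_h) * v**2 / (2.0 * g)
    h_minor = k_minor_sum * v**2 / (2.0 * g)           # accessories (elbows, inlets, etc.)
    return h_friction + h_minor, re

h, re = annulus_head_loss(q_m3s=2.0e-4, d_outer=0.10, d_inner=0.05,
                          length=1.2, k_minor_sum=2.5)
print(f"Re = {re:.0f}, estimated head loss = {h * 1000:.2f} mm of water")
```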

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 218