Search results for: variable refrigerant flow heat pump
740 The Influence of Firm Characteristics on Profitability: Evidence from Italian Hospitality Industry
Authors: Elisa Menicucci, Guido Paolucci
Abstract:
Purpose: The aim of this paper is to investigate the factors influencing profitability in the Italian hospitality industry during the period 2008-2016. Design/methodology/approach: This study examines profitability and its determinants using a sample of 2366 Italian hotel firms. First, we use a multidimensional measure of profitability including attributes such as return on equity, return on assets and occupancy rate. Second, we examine variables that are potentially related to performance and sort these into five categories: market variables, business model, ownership structure, management education and control variables. Findings: The results show that the financial crisis, business model and ownership structure influence the profitability of hotel firms. Specific factors such as internationalization, location, declaring accommodation as the firm's primary activity and chain affiliation are positively associated with profitability. We also find that larger hotel firms have higher performance rankings, while hotels with higher operating cash flow volatility, greater sales volatility and a higher occurrence of losses have lower profitability. Research limitations/implications: Findings suggest the importance of considering firm-specific factors to evaluate the profitability of a hotel firm. Results also provide evidence for academics to critically evaluate factors that would ensure the profitability of hotels in developed countries such as Italy. Practical implications: This investigation offers valuable information and strategic implications for government, tourism policymakers, tourist hotel owners, hoteliers and tourism managers in their decision-making. Originality/value: This paper provides interesting insights into the characteristics and practices of profitable hotels in Italy. Few econometric studies have so far empirically explored the determinants of performance in the European hospitality field. 
Therefore, this paper tries to close an important gap in the existing literature by improving the understanding of profitability in the Italian hospitality industry. Keywords: hotel firms, profitability, determinants, Italian hospitality industry
Procedia PDF Downloads 403
739 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program
Authors: Pauline Calleja, Brooke Alexander
Abstract:
In 2010 the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services-Victoria. While participant and faculty feedback about the program was positive, issues were identified that are common to industry training programs in Australia. These included didactic approaches, with many lectures and little interaction or activity for participants. The teaching and learning principles underpinning the course did not necessarily encourage deep learning, and thus participants described learning by rote and gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that impact practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum and to teaching, learning and assessment approaches. The aim was to improve learning through a greater emphasis on pre-program preparation for participants, the learning environment, and clinically applicable, contextualized outcomes. Previously, participants who wished to undertake assessment were given a take-home examination, which had poor uptake and return and provided no rigor since it was not invigilated. A new assessment structure was enacted, with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses post-implementation in 2012 and in 2015, to evaluate whether the positive changes were sustained. 
Methods: Descriptive statistics were applied in analyzing evaluations. Since all questions had more than 20% of cells with a count of <5, Fisher's exact test was used to identify significance (p < 0.05) between groups. Results: A total of fourteen group evaluations were included in this analysis: seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and n = 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items: the content being delivered in a way that met participant learning needs, and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements in 2012 were maintained and, in two essential areas, significantly improved. Changes that increased participant engagement, support and contextualization of course materials were essential for CENA TNP evolution. Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education
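Since the comparison above rests on Fisher's exact test for sparse 2x2 group-by-response tables, a minimal stdlib-only sketch of the two-sided test may clarify the computation; the table entries in the usage note are illustrative, not the study's data:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    The p-value sums the hypergeometric probabilities of all tables with
    the same margins that are as extreme as, or more extreme than, the
    observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # probability of a table whose top-left cell equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))   # smallest feasible top-left cell
    hi = min(row1, col1)             # largest feasible top-left cell
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)
```

For example, `fisher_exact_2x2(8, 1, 1, 8)` gives about 0.0034, well under the 0.05 threshold used above, while a perfectly balanced table such as `(5, 5, 5, 5)` gives 1.0.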
Procedia PDF Downloads 276
738 Longitudinal Profile of Antibody Response to SARS-CoV-2 in Patients with Covid-19 in a Setting from Sub–Saharan Africa: A Prospective Longitudinal Study
Authors: Teklay Gebrecherkos
Abstract:
Background: Serological testing for SARS-CoV-2 plays an important role in epidemiological studies, in aiding the diagnosis of COVID-19 and in assessing vaccine responses. Little is known about the dynamics of SARS-CoV-2 serology in African settings. Here, we aimed to characterize the longitudinal antibody response profile to SARS-CoV-2 in Ethiopia. Methods: In this prospective study, a total of 102 PCR-confirmed COVID-19 patients were enrolled, from whom we obtained 802 serially collected plasma samples. SARS-CoV-2 antibodies were determined using four lateral flow immunoassays (LFIAs) and an electrochemiluminescent immunoassay. We determined the longitudinal antibody response to SARS-CoV-2 as well as seroconversion dynamics. Results: The serological positivity rate ranged between 12% and 91%, depending on timing after symptom onset. There was no difference in the positivity rate between severe and non-severe COVID-19 cases. Specificity ranged between 90% and 97%. Agreement between different assays ranged between 84% and 92%. The estimated positive predictive value (PPV) for IgM or IgG in a scenario with seroprevalence at 5% varies from 33% to 58%; when the population seroprevalence increases to 25% and 50%, there is a corresponding increase in the estimated PPVs. The estimated negative predictive value (NPV) in a low-seroprevalence scenario (5%) is high (>99%); however, in a high-seroprevalence scenario (50%) the estimated NPV for IgM or IgG is reduced significantly, to 80%–85%. Overall, 28/102 (27.5%) seroconverted by one or more assays tested, within a median time of 11 (IQR: 9–15) days post symptom onset. The median seroconversion time among symptomatic cases tended to be shorter than among asymptomatic patients [9 (IQR: 6–11) vs. 15 (IQR: 13–21) days; p = 0.002]. Overall, seroconversion reached 100% by 5.5 weeks after the onset of symptoms. 
Notably, of the remaining 74 COVID-19 patients included in the cohort, 64 (62.8%) were positive for antibodies at the time of enrollment, and 10 (9.8%) patients failed to mount a detectable antibody response by any of the assays tested during follow-up. Conclusions: Longitudinal assessment of the antibody response in African COVID-19 patients revealed heterogeneous responses. This underscores the need for a comprehensive evaluation of serological assays before implementation. Factors associated with failure to seroconvert need further research. Keywords: COVID-19, antibody, rapid diagnostic tests, Ethiopia
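The prevalence dependence of PPV and NPV reported above follows directly from Bayes' rule; a minimal sketch (the sensitivity and specificity values in the usage note are illustrative assumptions within the ranges reported, not the study's exact figures):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from Bayes' rule,
    given test sensitivity, specificity and population prevalence."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv
```

With an assumed sensitivity of 0.91 and specificity of 0.95, PPV rises as prevalence moves from 5% to 25% to 50%, while NPV falls from above 99% toward roughly 91%, mirroring the pattern the abstract describes.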
Procedia PDF Downloads 86
737 Role of Autophagic Lysosome Reformation for Cell Viability in an in vitro Infection Model
Authors: Muhammad Awais Afzal, Lorena Tuchscherr De Hauschopp, Christian Hübner
Abstract:
Introduction: Autophagy is an evolutionarily conserved, lysosome-dependent degradation pathway, which can be induced by extrinsic and intrinsic stressors in living systems to adapt to fluctuating environmental conditions. In the context of inflammatory stress, autophagy contributes to the elimination of invading pathogens, the regulation of innate and adaptive immune mechanisms, and the regulation of inflammasome activity, as well as tissue damage repair. Lysosomes can be recycled from autolysosomes by the process of autophagic lysosome reformation (ALR), which depends on the presence of several proteins, including Spatacsin. Thus, ALR contributes to the replenishment of lysosomes available for fusion with autophagosomes in situations of increased autophagic turnover, e.g., during bacterial infections, inflammatory stress or sepsis. Objectives: We aimed to assess whether ALR plays a role in cell survival in an in-vitro bacterial infection model. Methods: Mouse embryonic fibroblasts (MEFs) were isolated from wild-type mice and Spatacsin (Spg11-/-) knockout mice. Wild-type MEFs and Spg11-/- MEFs were infected with Staphylococcus aureus (multiplicity of infection (MOI) of 10). After 8 and 16 hours of infection, cell viability was assessed on a BD flow cytometer through propidium iodide uptake. Bacterial uptake by cells was also quantified by plating cell lysates on blood agar plates. Results: In-vitro infection of MEFs with Staphylococcus aureus showed a marked decrease in cell viability in ALR-deficient Spatacsin knockout (Spg11-/-) MEFs after 16 hours of infection as compared to wild-type MEFs (n=3 independent experiments; p < 0.0001), although no difference in bacterial uptake was observed between the genotypes. Conclusion: We observed a marked increase in cell death in cells with compromised ALR in an in-vitro infection model, suggesting that ALR is important for the defense against invading pathogens such as S. aureus. Keywords: autophagy, autophagic lysosome reformation, bacterial infections, Staphylococcus aureus
Procedia PDF Downloads 147
736 Assessment of Soil Quality Indicators in Rice Soil of Tamil Nadu
Authors: Kaleeswari R. K., Seevagan L .
Abstract:
Soil quality in an agroecosystem is influenced by the cropping system and by water and soil fertility management. A valid soil quality index would help to assess the soil and crop management practices needed for desired productivity and soil health. Soil quality indices also provide an early indication of soil degradation and of the need for remedial and rehabilitation measures. Imbalanced fertilization and inadequate organic carbon dynamics deteriorate soil quality in an intensive cropping system. The rice soil ecosystem is different from other arable systems, since rice is grown under submergence, which requires a different set of key soil attributes for enhancing soil quality and productivity. Assessment of a soil quality index involves indicator selection, indicator scoring and combination of the scores into one index. The most appropriate indicators to evaluate soil quality can be selected by establishing a minimum data set, which can be screened by linear and multiple regression, factor analysis and score functions. This investigation was carried out in intensive rice-cultivating regions (having >1.0 lakh hectares) of Tamil Nadu, viz., Thanjavur, Thiruvarur, Nagapattinam, Villupuram, Thiruvannamalai, Cuddalore and Ramanathapuram districts. In each district, an intensive rice-growing block was identified. In each block, two sampling grids (10 x 10 sq. km) were used with a sampling depth of 10–15 cm. Using GIS coordinates, soil sampling was carried out at various locations in the study area. The number of soil sampling points was 41, 28, 28, 32, 37, 29 and 29 in Thanjavur, Thiruvarur, Nagapattinam, Cuddalore, Villupuram, Thiruvannamalai and Ramanathapuram districts, respectively. Principal Component Analysis (PCA) is a data reduction tool used to select the potential indicators; a principal component is a linear combination of the variables that represents the maximum variance of the dataset. 
Principal components with eigenvalues equal to or higher than 1.0 were taken into the minimum data set. PCA was used to select the representative soil quality indicators in rice soils based on factor loading values and contribution percentages. Variables having significant differences within the production system were used for the preparation of the minimum data set. Each principal component explains a certain amount of variation (%) in the total dataset, and this percentage provides the weight for its variables. The final PCA-based soil quality equation is SQI = Σᵢ (Wᵢ × Sᵢ), where Sᵢ is the score for the subscripted variable and Wᵢ is the weighting factor derived from PCA. Higher index scores mean better soil quality. Soil respiration, soil available nitrogen and potentially mineralizable nitrogen were assessed as soil quality indicators in the rice soils of the Cauvery Delta zone covering Thanjavur, Thiruvarur and Nagapattinam districts. Soil available phosphorus could be used as a soil quality indicator of rice soils in the Cuddalore district. In rain-fed rice ecosystems on coastal sandy soil, DTPA-Zn could be used as an effective soil quality indicator. Among the soil parameters selected from PCA, microbial biomass nitrogen could be used as a quality indicator for rice soils of the Villupuram district. The Cauvery Delta zone has a better SQI as compared with the other intensive rice-growing zones of Tamil Nadu. Keywords: soil quality index, soil attributes, soil mapping, and rice soil
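The weighted-sum equation SQI = Σᵢ (Wᵢ × Sᵢ) described above can be sketched in a few lines; the indicator scores and variance fractions below are illustrative placeholders, not values from the study:

```python
def soil_quality_index(scores, variance_explained):
    """SQI = sum_i W_i * S_i, where S_i is the 0-1 score of indicator i
    and W_i is the normalized fraction of dataset variance explained by
    the principal component that selected that indicator (components
    retained by the eigenvalue >= 1.0 criterion)."""
    total = sum(variance_explained)
    return sum(s * v / total
               for s, v in zip(scores, variance_explained))
```

With equal weights the index reduces to the mean indicator score; with unequal weights, indicators selected by higher-variance components dominate, which is exactly the weighting scheme the abstract describes.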
Procedia PDF Downloads 90
735 Psychometric Properties of Several New Positive Psychology Measures
Authors: Lauren Benyo Linford, Jared Warren, Jeremy Bekker, Gus Salazar
Abstract:
In order to accurately identify areas needing improvement and to track growth, the availability of valid and reliable measures of different facets of well-being is vital. Because no specific measures currently exist for many facets of well-being, the purpose of this study was to construct and validate measures of the following constructs: Purpose, Values, Mindfulness, Savoring, Gratitude, Optimism, Supportive Relationships, Interconnectedness, Compassion, Community, Contribution, Engaged Living, Personal Growth, Flow Experiences, Self-Compassion, Exercise, Meditation, and an overall measure of subjective well-being, the Survey on Flourishing. To assess their psychometric properties, each measure was examined for internal consistency, and items with poor item-test correlations were dropped. Additionally, the convergent validity of the Survey on Flourishing (SURF) was assessed: total score correlations between SURF and other commonly used measures of well-being, such as the Positive and Negative Affect Schedule (PANAS), the Satisfaction with Life Scale (SWLS) and the PERMA Profiler (a measure of Positive Emotion, Engagement, Relationships, Meaning, and Achievement), were examined to establish convergent validity. The Kessler Psychological Distress Scale (K6) was also included to determine the divergent validity of the SURF measure. Three-week test-retest reliability was also assessed for SURF. Additionally, normative data from general population samples were collected for both the Self-Compassion and SURF measures. The purpose of this study is to introduce each of these measures, report the psychometric findings, and explore additional psychometric properties of the SURF measure in particular. This study will highlight how these measures can be used in future research exploring these positive psychology constructs. 
Additionally, this study will discuss the utility of these measures in guiding individuals in their use of the online, self-directed, self-administered My Best Self 101 positive psychology resources developed by the researchers. The goal of My Best Self 101 is to disseminate real, research-based measures and tools to individuals who are seeking to increase their well-being. Keywords: measurement, psychometrics, test validation, well-being
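The internal-consistency screening mentioned above is typically quantified with Cronbach's alpha; a minimal stdlib-only sketch (the formula is the standard one, not a procedure specific to this study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency. `items` is a list of
    equally long lists, one per scale item, each holding the scores all
    respondents gave that item. alpha = k/(k-1) * (1 - sum(item
    variances) / variance of respondents' total scores)."""
    k = len(items)

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly redundant items give alpha = 1, and dropping items with poor item-test correlations, as the abstract describes, raises alpha toward that ceiling.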
Procedia PDF Downloads 190
734 Immune Modulation and Cytomegalovirus Reactivation in Sepsis-Induced Immunosuppression
Authors: G. Lambe, D. Mansukhani, A. Shetty, S. Khodaiji, C. Rodrigues, F. Kapadia
Abstract:
Introduction: Sepsis is known to cause impairment of both innate and adaptive immunity. It involves an early, uncontrolled inflammatory response followed by a protracted immunosuppression phase, which includes decreased expression of cell receptors, T cell anergy and exhaustion, and impaired cytokine production, and which may pose a high risk of secondary infections due to reduced response to antigens. Although human cytomegalovirus (CMV) is widely recognized as a serious viral pathogen in sepsis and immunocompromised patients, the incidence of CMV reactivation in patients with sepsis lacking strong evidence of immunosuppression is not well defined. Therefore, it is important to determine an association between CMV reactivation and sepsis-induced immunosuppression. Aim: To determine the association between the incidence of CMV reactivation and immune modulation in sepsis-induced immunosuppression over time. Material and Methods: Ten CMV-seropositive adult patients with severe sepsis were included in this study. Blood samples were collected on Day 0 and then weekly up to 21 days. CMV load was quantified by real-time PCR using plasma. The expression of immunosuppression markers, namely HLA-DR, PD-1, and regulatory T cells, was determined by flow cytometry using whole blood. Results: At Day 0, no CMV reactivation was observed in 6/10 patients; in these patients, the median time to reactivation was 14 days (range, 7-14 days). The remaining four patients had, at Day 0, a mean viral load of 1802 ± 2599 copies/ml, which increased with time. At Day 21, the mean viral load for all 10 patients was 60949 ± 179700 copies/ml, indicating that viremia increased with the length of stay in the hospital. HLA-DR expression on monocytes significantly increased from Day 0 to Day 7 (p = 0.001), following which no significant change was observed until Day 21 for all patients except three. 
In these three patients, HLA-DR expression on monocytes showed a decrease at elevated viral load (>5000 copies/ml), indicating immune suppression. However, the other markers, PD-1 and regulatory T cells, did not show any significant changes. Conclusion: These preliminary findings suggest that CMV reactivation can occur in patients with severe sepsis. In fact, the viral load continued to increase with the length of stay in the hospital. Immune suppression, indicated by decreased expression of HLA-DR alone, was observed in three patients with elevated viral load. Keywords: CMV reactivation, immune suppression, sepsis immune modulation, CMV viral load
Procedia PDF Downloads 152
733 A New Approach for Preparation of Super Absorbent Polymers: In-Situ Surface Cross-Linking
Authors: Reyhan Özdoğan, Mithat Çelebi, Özgür Ceylan, Mehmet Arif Kaya
Abstract:
Super absorbent polymers (SAPs) are defined as materials that can absorb a huge amount of water or aqueous solution in comparison to their own mass and retain it in their lightly cross-linked structure. SAPs are produced from water-soluble monomers via polymerization followed by controlled crosslinking. SAPs are generally used in water-absorbing applications such as baby diapers, patient or elder pads and other hygienic products. The crosslinking density (CD) of the SAP structure is an essential factor for water absorption capacity (WAC): low internal CD leads to high WAC values and vice versa. However, SAPs with low CD and high swelling capacity tend to disintegrate when pressure is applied to them, so such SAPs cannot absorb liquids effectively under load. To prevent this undesired situation and to obtain SAP structures having both high swelling capacity and the ability to work under load, surface crosslinking can be the answer. In industry, these superabsorbent gels are mostly produced via solution polymerization and then need to be dried, ground, sized, post-polymerized and finally surface-crosslinked (which involves spraying a crosslinking solution onto dried and ground SAP particles, then curing by heat). These steps are time-consuming and must be handled carefully to obtain the desired final product; synthesizing the desired SAPs in fewer steps would reduce time and production costs, which is important for any industry. In this study, SAPs were successfully synthesized by inverse suspension (Pickering-type) polymerization with subsequent in-situ surface cross-linking, using proper surfactants in high-boiling-point solvents. 
Our one-pot synthesis of surface cross-linked SAPs involves only one preparation step; thus, this technique is more attractive for industry than conventional multi-step methods. The effects of different surface crosslinking agents on the properties of poly(acrylic acid-co-sodium acrylate)-based SAPs are investigated. Surface crosslink degrees are evaluated by the swelling-under-load (SUL) test. The water absorption capacities of the obtained SAPs were found to decrease with increasing surface crosslink density, while their mechanical properties improved. Keywords: inverse suspension polymerization, polyacrylic acid, super absorbent polymers (SAPs), surface crosslinking, sodium polyacrylate
Procedia PDF Downloads 326
732 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years
Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: Phonological processing is critical in learning to read alphabetic and non-alphabetic languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the role of phonological memory and phonological naming has not been addressed in the alphasyllabary Kannada. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada on word decoding during the early years of schooling. Aim and Objectives: The present study aimed to explore phonological processing abilities and their role in learning to decode pseudowords in children learning to read Kannada during the initial years of formal schooling, between 5.6 and 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content-validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping at initial, medial and final positions. Phonological memory was assessed using a pseudoword repetition task, and phonological naming was assessed using rapid automatized naming of objects. Both the phonological awareness and phonological memory measures were scored for accuracy of response, whereas Rapid Automatized Naming (RAN) was scored for total naming speed. 
Results: Comparison of mean scores using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade-wise comparison using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all tasks, except for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III (p ≥ 0.05). Pearson correlations revealed highly significant positive correlations (p < 0.001) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficients were higher for the phonological awareness measures than for the others. Hence, phonological awareness was chosen as the first independent variable to enter the hierarchical regression equation, followed by rapid automatized naming and, finally, pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%, and there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III. Among phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling. Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding
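The hierarchical regression step described above, entering one predictor, then adding another and inspecting the change in R², can be sketched as follows; the data here are synthetic and the variable names are illustrative, not the study's measures:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on the columns of X
    (an intercept column is added automatically)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Hierarchical entry on synthetic data: does a second predictor (here
# an unrelated "ran" variable) add explained variance beyond the first?
rng = np.random.default_rng(1)
awareness = rng.normal(size=60)
ran = rng.normal(size=60)                       # unrelated by construction
reading = 0.86 * awareness + rng.normal(scale=0.5, size=60)

r2_step1 = r_squared(awareness[:, None], reading)
r2_step2 = r_squared(np.column_stack([awareness, ran]), reading)
delta_r2 = r2_step2 - r2_step1                  # near zero in this setup
```

In-sample, adding a predictor can never lower R², so the question is whether `delta_r2` is large enough to be significant; the abstract reports it was not for RAN and pseudoword repetition.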
Procedia PDF Downloads 177
731 Mitigating Nitrous Oxide Production from Nitritation/Denitritation: Treatment of Centrate from Pig Manure Co-Digestion as a Model
Authors: Lai Peng, Cristina Pintucci, Dries Seuntjens, José Carvajal-Arroyo, Siegfried Vlaeminck
Abstract:
Economic incentives drive the implementation of short-cut nitrogen removal processes such as nitritation/denitritation (Nit/DNit) to manage nitrogen in waste streams devoid of biodegradable organic carbon. However, like any biological nitrogen removal process, Nit/DNit can emit the potent greenhouse gas nitrous oxide (N2O). Challenges remain in understanding the fundamental mechanisms and in developing engineered mitigation strategies for N2O production. To provide answers, this work focuses on manure as a model: the biggest wasted nitrogen mass flow through our economies. A sequencing batch reactor (SBR; 4.5 L) was used to treat the centrate (centrifuge supernatant; 2.0 ± 0.11 g N/L of ammonium) from an anaerobic digester processing mainly pig manure supplemented with a co-substrate. Glycerin, a by-product of vegetable oil production, was used as the external carbon source. Out-selection of nitrite-oxidizing bacteria (NOB) was targeted using a combination of low dissolved oxygen (DO) levels (down to 0.5 mg O2/L), high temperature (35 ºC) and relatively high free ammonia (FA) (initially 10 mg NH3-N/L). After reaching steady state, the process removed 100% of the ammonium with minimal nitrite and nitrate in the effluent, at a reasonably high nitrogen loading rate (0.4 g N/L/d). Substantial N2O emissions (over 15% of the nitrogen loading) were observed at the baseline operational condition, and these increased further under nitrite accumulation and at a low organic carbon-to-nitrogen ratio. Yet, higher DO (~2.2 mg O2/L) lowered aerobic N2O emissions and weakened the dependency of N2O on nitrite concentration, suggesting a shift in the N2O production pathway at elevated DO levels. Greenhouse gas emissions from such a system could be substantially reduced by increasing the external carbon dosage (a cost factor), but also through the implementation of an intermittent aeration and feeding strategy. 
Promising steps forward are presented in this abstract, and insights from ongoing experiments will also be shared at the conference. Keywords: mitigation, nitrous oxide, nitritation/denitritation, pig manure
Procedia PDF Downloads 252
730 Sorghum Polyphenols Encapsulated by Spray Drying, Using Modified Starches as Wall Materials
Authors: Adriana Garcia G., Alberto A. Escobar P., Amira D. Calvo L., Gabriel Lizama U., Alejandro Zepeda P., Fernando Martínez B., Susana Rincón A.
Abstract:
Different studies have recently focused on the use of antioxidants such as polyphenols because of their anticarcinogenic capacity. However, these compounds are highly sensitive to environmental factors such as light and heat, so they lose their long-term stability, and they also possess an astringent, bitter taste. Nevertheless, polyphenols can be protected by microcapsule formulation. A rich source of polyphenols is sorghum, which also presents a high starch content. Therefore, the aim of this work was to obtain modified starches from sorghum by extrusion, in order to encapsulate sorghum polyphenols by spray drying. Polyphenols were extracted from sorghum (Pajarero/red) with an ethanol solution and determined by the Folin-Ciocalteu method, obtaining 30 mg GAE/g. Moreover, starch was extracted from sorghum (Sinaloense/white) through wet milling (yield 32%). The hydrolyzed starch was modified by extrusion with three treatments: acetic anhydride (2.5 g/100 g), sodium tripolyphosphate (4 g/100 g), and sodium tripolyphosphate/acetic anhydride (2 g/1.25 g per 100 g). Extrusion processing conditions were as follows: barrel temperatures of 60, 130 and 170 °C at the feeding, transition and high-pressure extrusion zones, respectively. Fourier transform infrared (FTIR) spectroscopy showed bands of acetyl groups (1735 cm-1) and phosphates (1170 cm-1, 910 cm-1 and 525 cm-1), confirming the respective modification of the starch. Besides, none of the modified starches developed viscosity, a characteristic required for use in the encapsulation of polyphenols by the spray drying technique. As a result of the starch modification, a water solubility index (WSI) of 33.8 to 44.8% and a crystallinity of 8 to 11% were obtained, indicating the destruction of the starch granule. 
Afterwards, microencapsulation of the polyphenols was carried out by spray drying, with a blend of 10 g of modified starch, 60 ml of polyphenol extract and 30 ml of distilled water. Drying conditions were as follows: inlet air temperature 150 ± 1 °C, outlet air temperature 80 ± 5 °C. The microencapsulation gave yields of 56.8 to 77.4% and encapsulation efficiencies of 84.6 to 91.4%. FTIR analysis showed evidence of microcapsules loaded with polyphenols in the bands at 1042 cm-1, 1038 cm-1 and 1148 cm-1. Differential scanning calorimetry (DSC) analysis showed transition temperatures from 144.1 to 173.9 °C. On the other hand, scanning electron microscopy (SEM) showed rounded surfaces with concavities, a typical feature of microcapsules produced by spray drying, resulting from the rapid evaporation of water. Finally, the modified starches obtained by extrusion had good characteristics for use as wall materials in spray drying; the phosphorylated starch was the best treatment in this work, according to encapsulation yield, efficiency, and transition temperature. Keywords: encapsulation, extrusion, modified starch, polyphenols, spray drying
Procedia PDF Downloads 312
729 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Polya were the first significant systematic treatment of the subject; their work presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated via operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics.
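For reference, the one-dimensional classical Hardy inequality that this line of work builds on can be stated as follows; this is the standard textbook formulation (for p > 1 and a nonnegative measurable f on (0, ∞)), not a quotation from the paper:

```latex
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\, dx ,
\qquad p > 1,\; f \ge 0 .
```

The constant (p/(p-1))^p is known to be sharp; the dynamic (time-scale) versions discussed here generalize this integral form.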
There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of higher-dimensional time-scale inequalities of Hardy and Copson type. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases. Concepts of time-scale calculus are used, which allow many problems from the theories of differential and difference equations to be unified and extended; the chain rule, some properties of multiple integrals on time scales, theorems of Fubini type and Hölder's inequality are also used.
Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator
728 On the Lithology of Paleocene-Lower Eocene Deposits of the Achara-Trialeti Fold Zone: The Lesser Caucasus
Authors: Nino Kobakhidze, Endi Varsimashvili, Davit Makadze
Abstract:
The Caucasus is a link in the Alpine-Himalayan fold belt and comprises the Greater Caucasus and Lesser Caucasus fold systems and the intermountain area. The study object is located within the northernmost part of the Lesser Caucasus orogen, in the eastern part of the Achara-Trialeti fold-thrust belt. This area was rather well surveyed in the 1970s in terms of oil-and-gas potential, but to our best knowledge, detailed sedimentological studies have not been conducted so far. In order to fill this gap, the authors of the present study started research in this direction. One of the objects selected for the research was the deposits of the Kavtura river valley, situated on the northern slope of the Trialeti ridge. Paleocene-Lower Eocene deposits known in the scientific literature as 'Borjomi Flysch' (turbidites) are exposed in this area. During the research, the following methods were applied: selection of key cross sections, collection of rock samples, microscopic description of thin sections, mineralogical and petrological analysis of the material, and identification of trace fossils. The study of the Paleocene-Lower Eocene deposits starts with the Kavtura river valley in the east, where they are well characterized by microfauna. The cross section of the deposits starts with Danian variegated marlstone conformably overlain by an alternation of thick- and thin-bedded sandstones (thickness 40-50 cm). It continues with interbedded thin-bedded sandstones and shales (thickness 4-5 m). On the sole surfaces of the sandstones the ichnogenera 'Helmintopsis' and 'Scolicia' are recorded, and within the beds 'Chondrites' is found. Towards the riverhead, there is a 1-2 m gap in sedimentation; then the Paleocene-Lower Eocene sediments crop out again, starting with an alternation of grey-green medium-grained sandstones and shales enclosing dark-colored plant detritus.
They are overlain by interbedded calcareous sandstones and marls, where the thickness of the sandstones is variable (20-70 cm). The ichnogenus 'Scolicia' is found here. Upwards, the above-mentioned deposits pass into Middle Eocene volcanogenic-sedimentary suites. In the Kavtura river valley, the thickness of the Paleocene-Lower Eocene deposits is 300-400 m. In the course of the research, the following activities were conducted: facies analysis of the host rocks, correlation of the studied section with other cross sections, and interpretation of the depositional environment of the area. In the area, the authors have found and described ichnogenera; their preliminary determination has shown that they belong to pre-depositional ('Helmintopsis') and post-depositional ('Chondrites') forms. As is known, during Cretaceous-Paleogene time the extensional basin of the Achara-Trialeti fold-thrust belt was an accumulation area of great thicknesses (from shallow- to deep-marine sediments). This is confirmed once more by the authors' investigations, including the preliminary results of the paleoichnological studies.
Keywords: flysch deposits, lithology, the Lesser Caucasus, trace fossils
727 Seamounts and Submarine Landslides: Study Case of Island Arcs Area in North of Sulawesi
Authors: Muhammad Arif Rahman, Gamma Abdul Jabbar, Enggar Handra Pangestu, Alfi Syahrin Qadri, Iryan Anugrah Putra, Rizqi Ramadhandi
Abstract:
Indonesia lies above three major tectonic plates: the Indo-Australian, Eurasian, and Pacific plates. Interactions between these plates result in high tectonic and volcanic activity, which translates into a high risk of geological hazards in adjacent areas; one of these areas is around the islands north of Sulawesi. This raises a problem for infrastructure, both for mitigating risks to existing infrastructure and for various future infrastructure plans. One piece of infrastructure essential to telecommunications is the submarine fiber optic cable, which is exposed to geological hazards. These cables are essential, acting as the backbone of telecommunications. Damaged fiber optic cables can pose serious problems, causing loss of signal, with negative impacts on social and economic life and degraded performance of various government services. Submarine cables face challenges from geological hazards, for instance seamount activity. Previous studies show that, up to 2023, five seamounts have been identified north of Sulawesi. Seamounts can damage cables and trigger processes that put them at risk, one example being submarine landslides. The main focuses of this study are to identify possible new seamounts and submarine landslide paths in the area north of the Sulawesi islands, to help minimize the risks posed by those hazards to existing or planned submarine cables. Using bathymetry data, this study conducts slope analysis and uses distinctive morphological features to interpret possible seamounts. We then map out the valleys between seamounts, determine where sediments might flow in the event of a landslide, and finally assess how this affects submarine cables in the area.
Keywords: bathymetry, geological hazard, mitigation, seamount, submarine cable, submarine landslide, volcanic activity
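The slope-analysis step described in this abstract can be sketched as follows. This is a minimal illustration on a synthetic grid, not the authors' actual workflow; the function and variable names are my own, and the cell size is a hypothetical stand-in for the survey resolution:

```python
import numpy as np

def slope_degrees(depth_grid, cell_size):
    """Per-cell slope (in degrees) from a 2-D gridded bathymetry/elevation array.

    depth_grid: 2-D array of depths or elevations (same vertical unit as cell_size)
    cell_size:  horizontal grid spacing
    """
    # Finite-difference gradients along rows (y) and columns (x)
    gy, gx = np.gradient(depth_grid.astype(float), cell_size)
    # Slope angle from the magnitude of the gradient vector
    return np.degrees(np.arctan(np.hypot(gx, gy)))
```

Cells whose slope exceeds a chosen threshold could then be flagged as candidate unstable flanks, and the valleys between seamounts traced downslope as possible sediment-flow paths.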
726 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electrical), indoor environment, inhabitants' comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap for an existing residential case study under deep retrofit. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
725 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL
Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara
Abstract:
PostgreSQL is an object-relational database management system (ORDBMS) that has been in existence for a while. Despite the superior features it packages for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on providing a better development environment for PostgreSQL, in order to encourage its use and elucidate its importance. PostgreSQL is also known as the world's most advanced SQL-compliant open source ORDBMS. However, users have not yet turned to PostgreSQL because it remains under-exposed and because of the complexity of its persistently textual environment for an introductory user. Simply stated, there is a dire need for an easy way of making users comprehend the procedures and standards with which, in PostgreSQL, databases are created, tables and the relationships among them are defined, and queries and their flow are manipulated based on conditions, to help the community adopt PostgreSQL at an increased rate. Hence, this research initially identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community hesitates to migrate to PostgreSQL's environment is carried out. These findings are modulated and tailored based on the scope and the constraints discovered. The research proposes a system that serves both as a design platform and as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on devising viable solutions that analyze a user's cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements.
By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the system is expected to highlight the elementary features offered by PostgreSQL over existing systems, in order to convey the importance and simplicity it offers to a hesitant user.
Keywords: cognition, database, PostgreSQL, text editor, visual editor
724 Analysis of Road Network Vulnerability Due to Merapi Volcano Eruption
Authors: Imam Muthohar, Budi Hartono, Sigit Priyanto, Hardiansyah Hardiansyah
Abstract:
The eruption of Merapi Volcano in Yogyakarta, Indonesia in 2010 caused many casualties due to minimal preparedness in facing the disaster. Increasing population capacity and evacuating to safe places become very important to minimize casualties. The regional government, through the Regional Disaster Management Agency, has divided the disaster-prone area into three parts: ring 1 at a distance of 10 km, ring 2 at a distance of 15 km, and ring 3 at a distance of 20 km from the center of Mount Merapi. The success of an evacuation is fully supported by the road network infrastructure as the rescue route in an emergency. This research attempts to model the evacuation process based on the rise of refugees in ring 1, expanded to ring 2 and finally to ring 3. The model was developed using the SATURN (Simulation and Assignment of Traffic to Urban Road Networks) program, version 11.3.12W, involving 140 centroids, 449 buffer nodes, and 851 links across the Yogyakarta Special Region, and was aimed at making a preliminary identification of road networks considered vulnerable to disaster. The indicator used to identify vulnerability was the change in road network performance, in the form of flow and travel times, over the coverage of ring 1, ring 2, ring 3, Sleman outside the rings, Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul. The results indicated a performance increase in the road networks in ring 2, ring 3, and Sleman outside the rings. Performance in ring 1 started to increase when the evacuation was expanded to ring 2 and ring 3. Meanwhile, the performance of the road networks in Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul simultaneously decreased as the evacuation areas were expanded.
The preliminary identification determined that the road networks in ring 1, ring 2, ring 3 and Sleman outside the rings should be considered vulnerable during an evacuation following a Mount Merapi eruption. Therefore, it is necessary to pay a great deal of attention to them in order to face the disasters that may potentially occur at any time.
Keywords: model, evacuation, SATURN, vulnerability
723 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia
Authors: Zeinu Ahmed Rabba, Derek D Stretch
Abstract:
Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is directly measured through ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits the management of water resources and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring such information. This paper discusses the application of remotely sensed rainfall data to streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical tools (Bias, R², NS and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high Bias and high RMSE) the extreme precipitation events and the corresponding simulated streamflow outputs, particularly during the wet months (June-September), and to underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the simulated streamflow using the gauged rainfall is superior to that obtained from remotely sensed rainfall products, including the bias-adjusted ones.
Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase
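As a minimal sketch, the four comparison statistics named in this abstract (Bias, R², NS as Nash-Sutcliffe efficiency, and RMSE) can be computed from paired observed and simulated daily flows as below. The function name and the exact Bias convention (mean error) are my assumptions, since the abstract does not define them:

```python
import numpy as np

def gof_stats(obs, sim):
    """Goodness-of-fit statistics for simulated vs. observed streamflow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = float(np.mean(sim - obs))                  # mean error; sign shows over/underestimation
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))  # root mean square error
    r2 = float(np.corrcoef(obs, sim)[0, 1] ** 2)      # squared Pearson correlation
    # Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is no better than the mean
    nse = 1.0 - float(np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))
    return {"Bias": bias, "RMSE": rmse, "R2": r2, "NS": nse}
```

A perfectly shifted simulation illustrates why all four are needed: R² can equal 1 while Bias and RMSE reveal a systematic offset, which is exactly the overestimation pattern the abstract reports for unadjusted products.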
722 Muslims in Diaspora Negotiating Islam through Muslim Public Sphere and the Role of Media
Authors: Sabah Khan
Abstract:
The idea of a universal Islam tends to exaggerate the extent of homogeneity in Islamic beliefs and practices across Muslim communities. In the age of migration, various Muslim communities are in diaspora. The immediate implication of this is: what happens to Islam in diaspora? How does Islam get represented in new forms? Such pertinent questions need to be dealt with. This paper draws on the idea of religious transnationalism, primarily transnational Islam. There are multiple ways to conceptualize transnational phenomena with reference to Islam: in terms of flows of people, transnational organizations and networks, Ummah-oriented solidarity, and the new Muslim public sphere. This paper specifically deals with the new Muslim public sphere. It primarily refers to the space and networks enabled by new media and communication technologies, whereby Muslim identity and Islamic normativity are rehearsed and debated by people in different locales. A new sense of the public is emerging across Muslim communities, which needs to be contextualized. This paper uses both primary and secondary data: primary data elicited through content analysis of audio-visuals on social media, and secondary sources of information ranging from books to articles and journals. The basic aim of the paper is to focus on the emerging Muslim public sphere and the role of media in expanding the public spheres of Islam. It also explores how Muslims in diaspora negotiate Islam and Islamic practices through media and the new Muslim public sphere. The paper weaves in discussions, firstly, of the re-intellectualization of Islamic discourse in the public sphere, that is, how Muslims have come to reimagine their collective identity and critically examine fundamental principles and authoritative tradition; and secondly, of the alternative forms of Islam emerging among young Muslims in diaspora.
In other words, it examines how young Muslims search for unorthodox ways and media for religious articulation, including music, clothing and TV. This includes the transmission and distribution of Islam in diaspora in terms of the emerging 'media Islam' or 'soundbite Islam'. The new Muslim public sphere has offered an arena for a large number of participants to critically engage with Islam, which leads not only to a critical engagement with traditional forms of Islamic authority but also to emerging alternative forms of Islam and Islamic practices.
Keywords: Islam, media, Muslims, public sphere
721 Nature of Forest Fragmentation Owing to Human Population along Elevation Gradient in Different Countries in Hindu Kush Himalaya Mountains
Authors: Pulakesh Das, Mukunda Dev Behera, Manchiraju Sri Ramachandra Murthy
Abstract:
Large numbers of people living in and around the Hindu Kush Himalaya (HKH) region depend on this diverse mountainous region for ecosystem services. Following the global trend, this region is also experiencing rapid population growth and demand for timber and agricultural land. The eight countries sharing the HKH region have different forest resource utilization and conservation policies that exert varying forces on the forest ecosystem. This has created variable spatial as well as altitudinal gradients in the rate of deforestation and the corresponding forest patch fragmentation. The quantitative relationship between fragmentation and demography along the elevation gradient has not been established before for the HKH. The current study was carried out to attribute the overall and country-specific nature of landscape fragmentation along the altitudinal gradient to the demography of each sharing country. We used tree canopy cover data derived from Landsat data to analyze the deforestation and afforestation rates, and the corresponding landscape fragmentation, observed during 2000-2010. The area-weighted mean radius of gyration (AMN radius of gyration) was computed owing to its advantage as a spatial indicator of fragmentation over non-spatial fragmentation indices. Using the subtraction method, the change in fragmentation during 2000-2010 was computed. Using tree canopy cover as a surrogate for forest cover, the highest forest loss was observed in Myanmar, followed by China, India, Bangladesh, Nepal, Pakistan, Bhutan, and Afghanistan. However, the sequence for fragmentation was different: the maximum fragmentation was observed in Myanmar, followed by India, China, Bangladesh, and Bhutan, whereas an increase in fragmentation was seen in the sequence Nepal, Pakistan, and Afghanistan. Using the SRTM-derived DEM, we observed a higher rate of fragmentation up to 2400 m, which corroborated the high human population there in the years 2000 and 2010.
To derive the nature of fragmentation along the altitudinal gradient, the Statistica software was used; a user-defined function was fitted by regression using the Gauss-Newton estimation method with 50 iterations. We observed an overall logarithmic decrease in fragmentation change (area-weighted mean radius of gyration), forest cover loss and population growth during 2000-2010 along the elevation gradient, with very high R² values (0.889, 0.895 and 0.944, respectively). The observed negative logarithmic function, with the major contribution in the initial elevation range, suggests gap-filling afforestation in the lower altitudes to enhance forest patch connectivity. Our findings on the pattern of forest fragmentation and human population across the elevation gradient in the HKH region will have policy-level implications for the different nations and will help in characterizing hotspots of change. The availability of free satellite-derived data products on forest cover and DEMs, gridded data on demography, and the utility of geospatial tools enabled a quick evaluation of the forest fragmentation vis-a-vis the human impact pattern along the elevation gradient in the HKH.
Keywords: area-weighted mean radius of gyration, fragmentation, human impact, tree canopy cover
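The negative logarithmic trend described above, of the form y = a + b·ln(elevation) with b < 0, is linear in its parameters, so an ordinary least-squares fit recovers the same coefficients a Gauss-Newton routine would converge to for this model. The sketch below uses synthetic illustrative numbers, not the study's data:

```python
import numpy as np

def fit_log_trend(elevation, response):
    """Least-squares fit of response = a + b * ln(elevation); returns (a, b)."""
    # Linear in (a, b) after the substitution u = ln(elevation)
    b, a = np.polyfit(np.log(elevation), response, 1)
    return float(a), float(b)

# Synthetic illustration of a decreasing logarithmic trend along elevation
elev = np.arange(100.0, 2500.0, 100.0)   # hypothetical elevation bins (m)
frag = 5.0 - 1.2 * np.log(elev)          # hypothetical fragmentation-change proxy
a, b = fit_log_trend(elev, frag)
```

A negative fitted b reproduces the reported pattern: the largest change is concentrated in the lowest elevation bins and flattens out with altitude.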
720 Humanity's Still Sub-Quantum Core-Self Intelligence
Authors: Andrew Shugyo Daijo Bonnici
Abstract:
Core-Self Intelligence (CSI) is an absolutely still, non-verbal, non-cerebral intelligence. Our still core-Self intelligence is felt at our body's center point of gravity, just an inch below the navel, deep within the lower abdomen. The still sub-quantum depth of core-Self remains untouched by the conditioning influences of family, society, culture, religion, and spiritual views that shape our personalities and ego-self identities. As core-Self intelligence is inborn and unconditioned, it exists within all human beings regardless of age, race, color, creed, mental acuity, or national origin. Our core-Self intelligence functions as a wise and compassionate guide that advances our health and well-being, our mental clarity and emotional resiliency, our fearless peace and behavioral wisdom, and our ever-deepening compassion for self and others. Although our core-Self, with its absolutely still non-judgmental intelligence, operates far beneath the functioning of our ego-self identity and our thinking mind, it effectively coexists with our passing thoughts, all of our figuring and thinking, our logical and rational way of knowing, the ebb and flow of our feelings, and the natural or triggered emergence of our emotions. When we allow our whole inner somatic awareness to gently sink into the intelligent center point of gravity within our lower abdomen, the felt arising of our core-Self's inborn stillness has a serene and relaxing effect on our ego-self and thinking mind. It naturally slows down the speedy passage of our involuntary thoughts, diminishes our ego-self's defensive and reactive functioning, and decreases narcissistic reflections on I, me, and mine. All of these healthy cognitive benefits advance our innate wisdom and compassion, facilitate our personal and interpersonal growth, and liberate the ever-fresh wonder and curiosity of our beginner's heartmind.
In conclusion, by studying, exploring, and researching our core-Self intelligence, psychologists and psychotherapists can unlock new avenues for advancing the farther reaches of our mental, emotional, and spiritual health and well-being, our innate behavioral wisdom and boundless empathy, our lucid compassion for self and others, and our unwavering confidence in the still guiding light of the core-Self that exists at the abdominal center point of all human beings.
Keywords: intelligence, transpersonal, beginner's heartmind, compassionate wisdom
719 Effect of Fresh Concrete Curing Methods on Its Compressive Strength
Authors: Xianghe Dai, Dennis Lam, Therese Sheehan, Naveed Rehman, Jie Yang
Abstract:
Concrete is one of the most widely used construction materials; it may be mixed on site as fresh concrete and then placed in formwork to produce the desired shapes of structures. It is recognized that the raw materials and mix proportions dominate the mechanical characteristics of hardened concrete, and that the curing method and environment applied to the concrete in the early stages of hardening significantly influence concrete properties such as compressive strength, durability and permeability. In construction practice, there are various curing methods to maintain the presence of mixing water throughout the early stages of concrete hardening. They are also beneficial in hot weather conditions, as they provide cooling and prevent the evaporation of water. Such methods include ponding or immersion, spraying or fogging, and saturated wet covering. There are also curing methods that reduce the loss of water from the concrete surface, such as covering the concrete with a layer of impervious paper, plastic sheeting or a membrane. In the concrete materials laboratory, accelerated strength gain methods supply the concrete with heat and additional moisture by applying live steam, heating coils or electrically warmed pads. Currently, when determining the mechanical parameters of a concrete, the concrete is usually sampled from fresh concrete on site and then cured and tested in laboratories where standardized curing procedures are adopted. However, in engineering practice, curing procedures on construction sites after the placing of concrete might be very different from the laboratory criteria, including some standard laboratory curing procedures that cannot be applied on site. Sometimes the contractor compromises the curing methods in order to reduce construction costs.
Obviously, the difference between curing procedures adopted in the laboratory and those used on construction sites might lead to over- or under-estimation of the real concrete quality. This paper presents the effect of three typical curing methods (air curing, water immersion curing, plastic film curing), and of maintaining concrete in steel moulds, on the compressive strength development of normal concrete. In this study, Portland cement with 30% fly ash was used, and different curing periods of 7, 28 and 60 days were applied. The highest compressive strengths were observed in samples to which 7-day water immersion curing was applied and in samples maintained in steel moulds up to the testing date. The results imply that concrete used as infill in steel tubular members might develop a higher strength than predicted by design assumptions based on air curing methods. Wrapping concrete with plastic film as a curing method might delay concrete strength development in the early stages, whereas water immersion curing for 7 days might significantly increase the compressive strength.
Keywords: compressive strength, air curing, water immersion curing, plastic film curing, maintaining in steel mould, comparison
718 Earthquake Preparedness of School Community and E-PreS Project
Authors: A. Kourou, A. Ioakeimidou, S. Hadjiefthymiades, V. Abramea
Abstract:
During the last decades, the task of engaging governments, communities and citizens to reduce risk and vulnerability of the populations has made variable progress. Experience has demonstrated that lack of awareness, education and preparedness may result in significant material and other losses both on the onset of the disaster. Schools play a vital role in the community and are important elements of values and culture of the society. A proper school education not only teaches children, but also is a key factor in the promotion of a safety culture into the wider community. In Greece School Earthquake Safety Initiative has been undertaken by Earthquake Planning and Protection Ogranization with specific actions (seminars, lectures, guidelines, educational material, campaigns, national or EU projects, drills etc.). The objective of this initiative is to develop disaster-resilient school communities through awareness, self-help, cooperation and education. School preparedness requires the participation of Principals, teachers, students, parents, and competent authorities. Preparation and earthquake readiness involves: a) learning what should be done before, during, and after earthquake; b) doing or preparing to do these things now, before the next earthquake; and c) developing teachers’ and students’ skills to cope efficiently in case of an earthquake. In the above given framework this paper presents the results of a survey aimed to identify the level of education and preparedness of school community in Greece. More specifically, the survey questionnaire investigates issues regarding earthquake protection actions, appropriate attitudes and behaviors during an earthquake and existence of contingency plans at elementary and secondary schools. The questionnaires were administered to Principals and teachers from different regions of the country that attend the EPPO national training project 'Earthquake Safety at Schools'. 
A closed-ended questionnaire was developed for the survey, containing questions regarding the following: a) knowledge of self-protective actions, b) existence of emergency planning at home, and c) existence of emergency planning at school (hazard mitigation actions, evacuation plan, and performance of drills). Survey results revealed that a high percentage of teachers have taken the appropriate preparedness measures concerning non-structural hazards at schools, the school emergency plan and yearly simulation drills. In order to improve action-planning for ongoing school disaster risk reduction, the implementation of earthquake drills, the involvement of students with disabilities and the evaluation of school emergency plans, EPPO participates in the E-PreS project. The main objective of this project is to create smart tools which define, simulate and evaluate all-hazards emergency steps customized to each unique district and school. The project develops a holistic methodology using real-time evaluation involving different categories of actors, districts, steps and metrics. The project is supported by the EU Civil Protection Financial Instrument with a duration of two years. The coordinator is the Kapodistrian University of Athens, and the partners are from four countries: Greece, Italy, Romania and Bulgaria. Keywords: drills, earthquake, emergency plans, E-PreS project
Procedia PDF Downloads 238
717 Index of Suitability for Culex pipiens sl. Mosquitoes in Portugal Mainland
Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, REVIVE team
Abstract:
The environment of the mosquito complex Culex pipiens sl. in Portugal mainland is evaluated based on its abundance, using a georeferenced data set collected during seven years (2006-2012) from May to October. The suitability of the different regions can be delineated using the relative abundance areas; the suitability index is directly proportional to disease transmission risk and allows focusing mitigation measures in order to avoid outbreaks of vector-borne diseases. The interest in the Culex pipiens complex is justified by its medical importance: the females bite all warm-blooded vertebrates and are involved in the circulation of several arboviruses of concern to human health, like West Nile virus, iridoviruses, reoviruses and parvoviruses. The abundance of Culex pipiens mosquitoes was documented systematically all over the territory by the local health services, in a long-duration program running since 2006. The environmental factors used to characterize the vector habitat are land use/land cover, distance to cartographed water bodies, altitude and latitude. The focus is on the mosquito females, whose gonotrophic cycle of mate-bloodmeal-oviposition is responsible for the virus transmission; their abundance is the key for the planning of non-aggressive prophylactic countermeasures that may eradicate the transmission risk and simultaneously avoid chemical degradation of the environment. Meteorological parameters such as air relative humidity, air temperature (minimum, maximum and mean daily temperatures) and daily total rainfall were gathered from the weather station network for the same dates and crossed with the standardized females’ abundance in a geographic information system (GIS). The mean capture and the percentage of above-average captures related to each variable are used as criteria to compute a threshold for each meteorological parameter; the difference of the mean capture above/below the threshold was statistically assessed. 
The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the meaningful thresholds for each parameter. The intersection of the maps of all the parameters obtained for each month shows the evolution of the suitable meteorological conditions through the mosquito season, considered as May to October, although the first and last months are less relevant. In parallel, mean and above-average captures were related to the physiographic parameters: the land use/land cover classes most relevant in each month, the preferred altitudes and the most frequent distance to water bodies, a factor closely related to mosquito biology. The maps produced with these results were crossed with the previously segmented meteorological maps, in order to get an index of suitability for the Culex pipiens complex evaluated all over the country, and its evolution from the beginning to the end of the mosquito season. Keywords: suitability index, Culex pipiens, habitat evolution, GIS model
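The map-intersection step described above can be sketched numerically: each monthly raster is segmented at its parameter's threshold range, and the resulting boolean layers are intersected so that a cell is flagged suitable only when every parameter is favourable. The thresholds and the tiny example grids below are hypothetical placeholders, not the study's fitted values.

```python
import numpy as np

def suitability_mask(rasters, thresholds):
    """Intersect per-parameter rasters segmented at their thresholds.

    rasters    -- dict name -> 2D array of monthly-averaged values
    thresholds -- dict name -> (low, high) range judged favourable
    Returns a boolean grid: True where every parameter is in range.
    """
    mask = None
    for name, grid in rasters.items():
        low, high = thresholds[name]
        layer = (grid >= low) & (grid <= high)
        mask = layer if mask is None else (mask & layer)
    return mask

# Hypothetical 2x3 grids for one month (values are illustrative only).
rasters = {
    "mean_temp":    np.array([[18.0, 22.0, 26.0], [15.0, 21.0, 30.0]]),
    "rel_humidity": np.array([[60.0, 70.0, 80.0], [40.0, 65.0, 85.0]]),
}
thresholds = {"mean_temp": (17.0, 28.0), "rel_humidity": (50.0, 82.0)}

mask = suitability_mask(rasters, thresholds)
```

Repeating this intersection for each month of the May-October season yields the temporal evolution of suitable areas the abstract describes.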
Procedia PDF Downloads 579
716 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
The study of the effect of high temperatures on polymer coatings is an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently the most widely used modern building materials, among others in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this state includes methods of modifying the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reproduce the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring at the time of temperature fluctuations. For this reason, alternative methods are being developed that allow rapid modeling and simulation of the actual operating conditions of polymeric coating materials in real conditions. Temperature influence in the environment is typically of a long duration. Studies therefore typically involve the measurement of the variation of one or more physical and mechanical properties of such a coating over time. Based on these results, it is possible to determine the effects of temperature loading and to develop methods for improving the coatings' properties. This paper contains a description of the stability studies of silicone coatings deposited on the surface of a ceramic brick. 
The brick’s surface was hydrophobized by two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous silicone solution (series 6-10). In order to enhance the stability of the film formed on the brick’s surface and make it resistant to variable temperature and humidity loading, nano silica was added to the polymer. The right combination of the polymer liquid phase and the solid phase of nano silica was obtained by disintegration of the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined, as these are the basic rheological parameters affecting the state and the durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (at a specified number of cycles) the changes in surface hardness (using the Vickers method) and the absorption of individual samples. As a result, on the basis of the obtained results, the degradation process of the polymer coatings, related to changes in their durability over time, was determined. Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
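The absorption measurement described above reduces to a simple mass ratio per cycle. A minimal sketch, with invented sample masses (the abstract reports no raw data, so the numbers below are purely illustrative of the calculation):

```python
def water_absorption_pct(dry_mass_g, wet_mass_g):
    """Water absorption as a percentage of the dry mass."""
    return 100.0 * (wet_mass_g - dry_mass_g) / dry_mass_g

# Hypothetical brick samples after one immersion cycle:
uncoated = water_absorption_pct(512.0, 538.9)  # unprotected reference brick
coated = water_absorption_pct(515.0, 518.1)    # brick with siloxane coating
```

A rising absorption value over successive temperature/immersion cycles would indicate progressive damage to the polymer film, which is the degradation signal the study tracks alongside Vickers hardness.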
Procedia PDF Downloads 245
715 Inferring Influenza Epidemics in the Presence of Stratified Immunity
Authors: Hsiang-Yu Yuan, Marc Baguelin, Kin O. Kwok, Nimalan Arinaminpathy, Edwin Leeuwen, Steven Riley
Abstract:
Traditional syndromic surveillance for influenza has substantial public health value in characterizing epidemics. Because the relationship between syndromic incidence and the true infection events can vary from one population to another and from one year to another, recent studies rely on combining serological test results with syndromic data from traditional surveillance into epidemic models to make inference on epidemiological processes of influenza. However, despite the widespread availability of serological data, epidemic models have thus far not explicitly represented antibody titre levels and their correspondence with immunity. Most studies use dichotomized data with a threshold (typically, a titre of 1:40) to define individuals as likely recently infected or likely immune, and further estimate the cumulative incidence. Dichotomized data can result in underestimation of the influenza attack rate. In order to improve the use of serosurveillance data, a refinement of the concept of stratified immunity within an epidemic model for influenza transmission is proposed here, such that all individual antibody titre levels are enumerated explicitly and mapped onto a variable scale of susceptibility in different age groups. Haemagglutination inhibition titres were collected from 523 individuals during the pre-pandemic phase and 465 individuals during the post-pandemic phase of the 2009 pandemic in Hong Kong. The model was fitted to the serological data in an age-structured population using a Bayesian framework and was able to reproduce key features of the epidemics. The effects of age-specific antibody boosting and protection were explored in greater detail. RB was defined as the effective reproductive number in the presence of stratified immunity, and its temporal dynamics was compared to that of the traditional epidemic model using dichotomized seropositivity data. 
The Deviance Information Criterion (DIC) was used to measure the fit of the model to the serological data under different mechanisms of the serological response. The results demonstrated that a differential antibody response with age was present (ΔDIC = -7.0). Age-specific mixing patterns with children-specific transmissibility, rather than pre-existing immunity, were most likely to explain the high serological attack rates in children and the low serological attack rates in the elderly (ΔDIC = -38.5). Our results suggested that the disease dynamics and herd immunity of a population could be described more accurately for influenza when the distribution of immunity was explicitly represented, rather than relying only on the dichotomous states 'susceptible' and 'immune' defined by the threshold titre (1:40) (ΔDIC = -11.5). During the outbreak, RB declined slowly from 1.22 [1.16-1.28] in the first four months after 1st May. RB dropped rapidly below 1 during September and October, which was consistent with the observed epidemic peak in late September. One of the most important challenges for infectious disease control is to monitor disease transmissibility in real time with statistics such as the effective reproduction number. Once early estimates of antibody boosting and protection are obtained, disease dynamics can be reconstructed, which is valuable for infectious disease prevention and control. Keywords: effective reproductive number, epidemic model, influenza epidemic dynamics, stratified immunity
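The core idea of stratified immunity can be illustrated with a toy calculation: instead of a hard seropositive/seronegative cut at 1:40, each titre level contributes a partial susceptibility, and the effective reproduction number is the basic reproduction number weighted by the population-average susceptibility. The titre-to-susceptibility curve, the titre distributions and the R0 below are invented for illustration, not the study's fitted values.

```python
# Toy illustration of "stratified immunity": each 2-fold HI titre level
# maps to a susceptibility in [0, 1] instead of a hard 1:40 cut-off.
def effective_R(R0, titre_freq, susceptibility):
    """R_B = R0 * population-average susceptibility over titre strata."""
    assert abs(sum(titre_freq) - 1.0) < 1e-9
    return R0 * sum(f * s for f, s in zip(titre_freq, susceptibility))

# Assumed susceptibility by titre level (<1:10, 1:10, 1:20, 1:40, 1:80):
susc = [1.0, 0.8, 0.5, 0.2, 0.05]
pre  = [0.70, 0.15, 0.10, 0.04, 0.01]   # hypothetical pre-pandemic titres
post = [0.30, 0.20, 0.20, 0.20, 0.10]   # hypothetical post-pandemic titres

R0 = 1.4
r_pre = effective_R(R0, pre, susc)
r_post = effective_R(R0, post, susc)
```

As antibody boosting shifts the titre distribution upward, the weighted susceptibility and hence the effective reproduction number fall, mirroring the decline of RB below 1 that the study reports.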
Procedia PDF Downloads 262
714 The Relationship between 21st Century Digital Skills and the Intention to Start Digital Entrepreneurship
Authors: Kathrin F. Schneider, Luis Xavier Unda Galarza
Abstract:
In our modern world, few areas are not permeated by digitalization: we use digital tools for work, study, entertainment, and daily life. Since technology changes rapidly, skills must adapt to the new reality, which gives a dynamic dimension to the set of skills necessary for people's academic, professional, and personal success. The concept of 21st-century digital skills, which includes skills such as collaboration, communication, digital literacy, citizenship, problem-solving, critical thinking, interpersonal skills, creativity, and productivity, has been widely discussed in the literature. Digital transformation has opened many economic opportunities for entrepreneurs in the development of their products, financing possibilities, and product distribution. One of the biggest advantages is the reduction in costs for the entrepreneur, which has opened doors not only for the entrepreneur or the entrepreneurial team but also for corporations through intrapreneurship. The development of students' general literacy and digital competencies is crucial for improving the effectiveness and efficiency of the learning process, as well as for students' adaptation to the constantly changing labor market. The digital economy allows a substantial increase in the supply of conventional and also innovative products; this is mainly achieved through five cost reductions commonly attributed to the digital economy: lower costs of search, replication, transport, tracking, and verification. Digital entrepreneurship worldwide benefits from such achievements. There is an expansion and democratization of entrepreneurship thanks to the use of digital technologies. The digital transformation that has been taking place in recent years is more challenging for developing countries, as they have fewer resources available to carry out this transformation while offering all the necessary support in terms of cybersecurity and educating their people. 
The degree of digitization (use of digital technology) in a country and the levels of digital literacy of its people often depend on the economic level and situation of the country. Telefónica's Digital Life Index (TIDL) scores are strongly correlated with country wealth, reflecting the greater resources that richer countries can devote to promoting "Digital Life". According to the Digitization Index, Ecuador is in the group of "emerging countries", while Chile, Colombia, Brazil, Argentina, and Uruguay are in the group of "countries in transition". According to Herrera Espinoza et al. (2022), there are startups or digital ventures in Ecuador, especially in certain niches, but many of the ventures do not survive beyond six months of their creation because they arise out of necessity and not out of opportunity. However, there is a lack of relevant research, especially empirical research, to have a clearer vision. Through a self-report questionnaire, the digital skills of students at a private Ecuadorian university will be measured against the six identified 21st-century digital skills. The results will be tested against the variable of the intention to start a digital venture, measured using the theory of planned behavior (TPB). The main hypothesis is that high digital competence is positively correlated with the intention to start digital entrepreneurship. Keywords: new literacies, digital transformation, 21st century skills, theory of planned behavior, digital entrepreneurship
Procedia PDF Downloads 108
713 Determinants of Corporate Social Responsibility Adoption: Evidence from China
Authors: Jing (Claire) LI
Abstract:
Two decades of economic reforms, from 2000 to 2020, have brought China unprecedented economic growth. There is an urgent call for research on corporate social responsibility (CSR) in the context of China because, while China continues to develop into a global trading market, it suffers from various serious problems relating to CSR. This study analyses the factors affecting the adoption of CSR practices by Chinese listed companies. The author proposes a new framework of factors of CSR adoption. Following common organisational factors and external factors in the literature (including organisational support, company size, shareholder pressures, and government support), this study introduces two additional factors, dynamic capability and regional culture. A survey questionnaire on the CSR adoption of Chinese companies listed on the Shenzhen and Shanghai indices was conducted from December 2019 to March 2020 to collect data on the factors that affect the adoption of CSR. After data collection, this study performed factor analysis to reduce the number of measurement items to several main factors. This procedure confirms the proposed framework and identifies the significant factors. Through the analysis, this study identifies four grouped factors as determinants of CSR adoption. The first factor includes dynamic capability and organisational support. The study finds that they are positively related to the first factor, so the first factor mainly reflects the capabilities of companies, which is one component of the internal factors. In the second factor, the measurement items of stakeholder pressures mainly come from regulatory bodies, customers and suppliers, employees and the community, and shareholders. In sum, they are positively related to the second factor, and they reflect stakeholder pressures, which is one component of the external factors. The third factor reflects organisational characteristics. 
Its variables include company size and cultural score. Among these variables, company size is negatively related to the third factor. The resulting factor loading of the third factor implies that the organisational factor is an important determinant of CSR adoption. Cultural consistency, the variable in the fourth factor, is positively related to that factor. It represents the difference between the perception of managers and the actual culture of the organisations in terms of cultural dimensions, which is one component of the internal factors. It implies that regional culture is an important factor in CSR adoption. Overall, the results are consistent with previous literature. This study is of significance from both theoretical and empirical perspectives. First, from a theoretical perspective, this research combines stakeholder theory, the dynamic capability view of the firm, and neo-institutional theory in CSR research. Based on the association of these three theories, this study introduces two new factors (dynamic capability and regional culture) to provide a better framework for CSR adoption. Second, this study contributes to the empirical literature on CSR in the context of China. Many Chinese companies still lack recognition of the importance of adopting CSR practices. This study builds a framework that may help companies to design resource allocation strategies and evaluate future CSR and management practices at an early stage. Keywords: China, corporate social responsibility, CSR adoption, dynamic capability, regional culture
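The reduction step the study describes — collapsing many questionnaire items into a few grouped factors — can be sketched with a principal-axis style decomposition of the item correlation matrix, retaining factors with eigenvalue > 1 (the Kaiser criterion). The synthetic data below stand in for survey responses; the construct names in the comments are assumptions for illustration, not the study's items.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey: 200 respondents, 6 measurement items driven by two
# latent constructs (say, "capability" and "stakeholder pressure").
n = 200
latent = rng.normal(size=(n, 2))
items = np.hstack([
    latent[:, :1] + 0.3 * rng.normal(size=(n, 3)),  # items 1-3 load on factor 1
    latent[:, 1:] + 0.3 * rng.normal(size=(n, 3)),  # items 4-6 load on factor 2
])

# Eigen-decompose the item correlation matrix and keep factors with
# eigenvalue > 1 (Kaiser criterion), mirroring the reduction step.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_factors = int((eigvals > 1.0).sum())
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
```

Items sharing a latent construct end up with large loadings on the same retained factor, which is how the study groups measurement items into its four determinant factors.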
Procedia PDF Downloads 136
712 An Exploration of Policy-Related Documents on District Heating and Cooling in Flanders: A Slow and Bottom-Up Process
Authors: Isaura Bonneux
Abstract:
District heating and cooling (DHC) is increasingly recognized as a viable path towards sustainable heating and cooling. While some countries like Sweden and Denmark have a longstanding tradition of DHC, Belgium is lagging behind. The northern part of Belgium, Flanders, had a total of only 95 heating networks in July 2023. Nevertheless, it is increasingly exploring possibilities to enhance the scope of DHC. DHC is a complex energy system, requiring a lot of collaboration between various stakeholders at various levels. Therefore, it is of interest to look closer at policy-related documents at the Flemish (regional) level, as these policies set the scene for DHC development in the Flemish region. This kind of analysis has not been undertaken so far. This paper addresses the following research question: "Who talks about DHC, and in which way and context is DHC discussed in Flemish policy-related documents?" To answer this question, the Overton policy database was used to search and retrieve relevant policy-related documents. Overton retrieves data from governments, think tanks, NGOs, and IGOs. In total, out of the 244 original results, 117 documents between 2009 and 2023 were analyzed. Every selected document included theme keywords, the policymaking department(s), date, and document type. These elements were used for quantitative data description and visualization. Further, qualitative content analysis revealed patterns and main themes regarding DHC in Flanders. Four main conclusions can be drawn. First, it is obvious from the timeframe that DHC is a new topic in Flanders that still receives limited attention; 2014, 2016 and 2017 were the years with the most documents, yet even these peaks amount to only 12 documents. In addition, many documents mentioned DHC, but not in much depth, and painted it as a future scenario with a lot of uncertainty around it. The largest part of the issuing government departments had a link to either energy and climate (e.g. 
the Flemish Environmental Agency) or policy (e.g. the Socio-Economic Council of Flanders). Second, DHC is mentioned most within an 'Environment and Sustainability' context, followed by 'General Policy and Regulation'. This is intuitive, as DHC is perceived as a sustainable heating and cooling technique and this analysis comprises policy-related documents. Third, Flanders seems mostly interested in using waste or residual heat as a heat source for DHC. Harbors and waste incineration plants are identified as potential and promising supply sources. This approach tries to reconcile environmental and economic incentives. Last, local councils are assigned a central role, and the initiative is mostly taken by them. The policy documents and policy advice demonstrate that Flanders opts for a bottom-up organization. As DHC is very dependent on local conditions, this seems a logical step. Nevertheless, it can impede smaller councils from creating DHC networks and slow down the systematic and fast implementation of DHC throughout Flanders. Keywords: district heating and cooling, Flanders, Overton database, policy analysis
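The quantitative description step (documents per year and per theme) amounts to frequency counts over the document metadata retrieved from Overton. A minimal sketch with invented records mimicking the fields the study keeps per document:

```python
from collections import Counter

# Hypothetical records mimicking the metadata kept per document:
docs = [
    {"year": 2014, "theme": "Environment and Sustainability"},
    {"year": 2014, "theme": "General Policy and Regulation"},
    {"year": 2016, "theme": "Environment and Sustainability"},
    {"year": 2017, "theme": "Environment and Sustainability"},
    {"year": 2023, "theme": "General Policy and Regulation"},
]

per_year = Counter(d["year"] for d in docs)        # documents per year
per_theme = Counter(d["theme"] for d in docs)      # documents per theme
top_theme, top_count = per_theme.most_common(1)[0]
```

These tallies are what the paper visualizes before moving on to the qualitative content analysis.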
Procedia PDF Downloads 49
711 Design, Construction, Validation and Use of a Novel Portable Fire Effluent Sampling Analyser
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity is not considered, despite being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation, and these interferents must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to: be easily portable and able to run on battery or mains electricity; be calibratable at the test site; be capable of quantifying CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; be capable of independent data logging; be capable of automated switchover of 7 bubblers; withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; and be capable of being controlled remotely. To test the analyser's functionality, it was used alongside the ISO/TS 19700 steady state tube furnace (SSTF). A series of tests using PMMA and PA 6.6 were conducted to assess the validity of the box analyser measurements and the data logging abilities of the apparatus. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. 
The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. The analyser operated without manual interference and successfully recorded data for all 12 tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. The analyser is a cheaper, more accessible option for assessing smoke toxicity, mitigating the need for expensive equipment and specialist operators. Keywords: smoke toxicity, large-scale tests, ISO 9705, analyser, novel equipment
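Because each run produces a plain .csv of timestamped gas concentrations, post-processing needs nothing beyond a standard library. A sketch assuming a hypothetical column layout (time plus one column per gas; the abstract does not specify the actual file format or units):

```python
import csv
import io

# Hypothetical excerpt of an analyser log: time in s, gas levels in ppm.
raw = """time_s,CO,CO2,HCN
0,0,400,0.0
30,120,2100,1.5
60,450,5200,6.0
90,300,4100,4.2
"""

rows = list(csv.DictReader(io.StringIO(raw)))
peak_co = max(float(r["CO"]) for r in rows)    # peak CO concentration
peak_hcn = max(float(r["HCN"]) for r in rows)  # peak HCN concentration
```

In practice the file would be read with `open(path)` rather than an in-memory string; the point is that peak and time-resolved concentrations fall out of the log with a few lines of code and no specialist software.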
Procedia PDF Downloads 82