Search results for: courts of accounts
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 752

62 Influence of Interpersonal Communication on Family Planning Practices among Rural Women in South East Nigeria

Authors: Chinwe Okpoko, Vivian Atasie

Abstract:

One of the leading causes of death amongst women of child-bearing age in southeast Nigeria is pregnancy-related complications. Women in the reproductive age group die at a higher rate than men of the same age bracket. Furthermore, most maternal deaths occur among poor women who live in rural communities and who generally fall within the low socio-economic group in society. The failure of policy makers and the media to create strategic awareness and communication that conform with the sensibilities of this group accounts, in part, for the persistence of this malaise. Family planning (FP) is an essential component of safe motherhood, designed to ensure that women receive high-quality care to achieve an optimum level of health for mother and infant. The aim is to control the number of children a woman gives birth to and to prevent maternal and child mortality and morbidity. This is also what the Sustainable Development Goal (SDG) health targets of the World Health Organization (WHO) strive to achieve. FP programmes reduce exposure to the risks of child-bearing. Indeed, most maternal deaths in the developing world can be prevented by fully investing simultaneously in FP and in maternal and new-born care. Given the intrinsic value of communication in health care delivery, it is vital to adopt the most efficacious means of awareness creation and communication amongst rural women in FP. In a country where over 50% of the population resides in rural areas with an attendant low standard of living, the need to communicate health information such as FP through indigenous channels becomes pertinent. Interpersonal communication amongst family, friends, religious groups and other associations is an efficacious means of communicating social issues in rural Africa. Communication in informal settings identifies with the values and social context of the recipients. This study therefore sought to determine the place of interpersonal communication in rural women's knowledge of FP and how it influences the uptake of FP. A descriptive survey design was used in the study, with an interviewer-administered questionnaire constituting the instrument for data collection. The questionnaire was administered to 385 women from rural communities in southeast Nigeria. The results show that a majority (58.5%) of the respondents agreed that interpersonal communication helps women understand how to plan their family size. Many rural women (82%) prefer the short-term natural method to the more effective modern contraceptive methods (38.1%). Husbands' approval of FP, as indicated by the mean response of 2.56, is a major factor that accounts for the adoption of FP messages among rural women. Socio-demographic data also reveal that educational attainment and/or exposure influenced women's acceptance or otherwise of FP messages. The study therefore recommends, among other measures, targeting husbands in subsequent FP communication interventions, since they play a major role in contraceptive usage.

Keywords: family planning, interpersonal communication, interpersonal interaction, traditional communication

Procedia PDF Downloads 100
61 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks

Authors: Mazarine Roquet, Pierre Dewallef

Abstract:

The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, but above all because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to demonstrate, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one in order to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. Primary energy consumption and CO2 emissions are compared between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared in terms of global energy savings. The developed model can be used to assess the potential for energy and CO2 emission savings when retrofitting an existing network or when designing a new one.
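
As a back-of-envelope illustration of the relation the abstract relies on (lower supply temperature means lower distribution losses), a minimal Python sketch follows; the pipe length, linear loss coefficient and temperatures are assumed placeholder values, not parameters of the Sart Tilman model.

```python
# Illustrative steady-state distribution losses of a buried supply pipe at
# temperatures typical of different network generations. All values below are
# placeholders, not data from the Sart Tilman campus model.

def pipe_heat_loss_kw(t_supply_c, t_ground_c=10.0, u_w_per_m_k=0.5, length_m=5000.0):
    """Steady-state loss Q = U' * L * (T_supply - T_ground), returned in kW."""
    return u_w_per_m_k * length_m * (t_supply_c - t_ground_c) / 1000.0

for label, t_supply in [("3rd generation (~90 degC)", 90.0),
                        ("4th generation (~45 degC)", 45.0),
                        ("5th generation (~25 degC)", 25.0)]:
    print(f"{label}: {pipe_heat_loss_kw(t_supply):.0f} kW lost over 5 km of pipe")
```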

Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating

Procedia PDF Downloads 44
60 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enable end users to use cloud services on a 'pay per usage' basis. This technology is growing at a fast pace, and so are its security threats. Storage is one of the various services provided by the cloud. In this service, security is a vital factor both for authenticating legitimate users and for protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness make user-behaviour-based biometrics more reliable than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here include not a single trait but multiple traits, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM): after each individual SVM of the ensemble is trained, the weights of the base SVMs are assembled and optimized by the Artificial Fish Swarm Algorithm (AFSA) to form the SVM ensemble. This helps in generating a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the cipher text from being broken back into the original text. This improves authentication performance: the proposed double cryptographic key scheme provides better user authentication and better security, distinguishing between genuine and fake users. Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The feature and texture properties are first extracted from the respective fingerprint and iris images. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if the data have already been stolen. The results show that the authentication process is optimal and the stored information is secured.
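
To make the ensemble step concrete, here is a hedged Python sketch of score-level fusion of two modality-specific SVMs on synthetic iris and fingerprint features; the fixed fusion weights stand in for the AFSA-optimized weights described above, and the sketch makes no claim to mirror the authors' implementation.

```python
# Minimal sketch of score-level fusion of two modality-specific SVMs.
# Feature vectors are random placeholders; the ensemble weights that the paper
# tunes with AFSA are fixed constants here for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 16
y = rng.integers(0, 2, n)                       # 0 = impostor, 1 = genuine
X_iris = rng.normal(y[:, None], 1.0, (n, d))    # synthetic iris features
X_finger = rng.normal(y[:, None], 1.2, (n, d))  # synthetic fingerprint features

svm_iris = SVC(probability=True).fit(X_iris, y)
svm_finger = SVC(probability=True).fit(X_finger, y)

w_iris, w_finger = 0.6, 0.4                     # placeholder ensemble weights
p_genuine = (w_iris * svm_iris.predict_proba(X_iris)[:, 1]
             + w_finger * svm_finger.predict_proba(X_finger)[:, 1])
decision = (p_genuine > 0.5).astype(int)
print("training-set agreement with labels:", (decision == y).mean())
```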

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 225
59 Disrupting Traditional Industries: A Scenario-Based Experiment on How Blockchain-Enabled Trust and Transparency Transform Nonprofit Organizations

Authors: Michael Mertel, Lars Friedrich, Kai-Ingo Voigt

Abstract:

Based on principal-agent theory, an information asymmetry exists in the traditional donation process. Consumers cannot verify whether nonprofit organizations (NPOs) use raised funds according to the designated cause after the transaction has taken place (hidden action). Therefore, charity organizations have tried to appear transparent and gain trust by using the same marketing instruments for decades (e.g., releasing project success reports). However, none of these measures can guarantee consumers that charities will use their donations for the intended purpose. With awareness of the misuse of donations rising due to the Ukraine conflict (e.g., funding crime), consumers are increasingly concerned about the destination of their charitable contributions. Therefore, innovative charities like the Human Rights Foundation have started to offer donations via blockchain. Blockchain technology has the potential to establish profound trust and transparency in the donation process: consumers can publicly track the progress of their donation at any time after deciding to donate. This ensures that the charity is not using donations against their original intent. Hence, the aim is to investigate the effect of blockchain-enabled transactions on the willingness to donate. Sample and Design: To investigate consumers' behavior, we use a scenario-based experiment. After removing participants (e.g., due to failed attention checks), 3192 potential donors participated (47.9% female, 62.4% bachelor's degree or above). Procedure: We randomly assigned the participants to one of two scenarios. In all conditions, the participants read a scenario about a fictive charity organization called "Helper NPO." Afterward, the participants answered questions regarding their perception of the charity. Manipulation: The first scenario (n = 1405) represents a typical donation process, where consumers donate money without any option to track and trace. The second scenario (n = 1787) represents a donation process via blockchain, where consumers can track and trace their donations. Using t-statistics, the findings demonstrate a positive effect of donating via blockchain on participants’ willingness to donate (mean difference = 0.667, p < .001, Cohen’s d effect size = 0.482). A mediation analysis shows significant effects for the mediation of transparency (Estimate = 0.199, p < .001), trust (Estimate = 0.144, p < .001), and transparency and trust (Estimate = 0.158, p < .001). The total effect of blockchain usage on participants’ willingness to donate (Estimate = 0.690, p < .001) consists of the direct effect (Estimate = 0.189, p < .001) and the indirect effects of transparency and trust (Estimate = 0.501, p < .001). Furthermore, consumers' affinity for technology moderates the direct effect of blockchain usage on participants' willingness to donate (Estimate = 0.150, p < .001). Donating via blockchain is a promising way for charities to engage consumers for several reasons: (1) Charities can emphasize trust and transparency in their advertising campaigns. (2) Established charities can target new customer segments by specifically engaging technology-affine consumers in the future. (3) Charities can raise international funds without previous barriers (e.g., setting up bank accounts). Nevertheless, increased transparency can also backfire (e.g., disclosure of costs). Such cases require further research.
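
For readers who want to reproduce the style of group comparison reported here, the sketch below runs Welch's t-test and computes Cohen's d in Python on synthetic data; the generated scores are placeholders loosely calibrated to the reported group sizes and mean difference, not the study data.

```python
# Welch's t-test plus Cohen's d for willingness to donate, control vs. blockchain
# scenario, on synthetic placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(4.00, 1.4, 1405)     # control scenario (placeholder scores)
blockchain = rng.normal(4.67, 1.4, 1787)  # blockchain scenario (placeholder scores)

t, p = stats.ttest_ind(blockchain, control, equal_var=False)

def cohens_d(a, b):
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3g}, d = {cohens_d(blockchain, control):.2f}")
```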

Keywords: blockchain, social sector, transparency, trust

Procedia PDF Downloads 63
58 Achieving Sustainable Agriculture with Treated Municipal Wastewater

Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi

Abstract:

Fresh water is a scarce resource that is essential for humans and ecosystems, but its distribution is uneven. Agricultural production accounts for 70% of all surface water supplies. It is projected that, against an expansion of the area equipped for irrigation of 0.6% per year, global potential irrigation water demand will rise by 9.5% during 2021-25. On the one hand, this demand will have to compete against sharply rising urban water demand. On the other, it will also face the threat of climate change, as temperatures rise and crop yields could drop by 10-30% in many large areas. The huge demand for irrigation combined with freshwater scarcity encourages the exploration of wastewater reuse as a resource. However, the use of such wastewater is often linked to safety issues when it is used non-judiciously or with poor safeguards while irrigating food crops. Paddy is one of the major crops globally and amongst the most important in South Asia and Africa. In many parts of the world, the use of municipal wastewater has been promoted as a viable option in this regard. In developing and fast-growing countries like India, steadily increasing wastewater generation rates may allow this option to be considered quite seriously. In view of this, a pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in the town of Haridwar, Uttarakhand state, India. The objectives of the present study were to examine the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121 and PB-1509) and on the emission of greenhouse gases (CO2, CH4 and N2O), as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which covers the enumeration of the various types of water footprints of an agricultural entity from production to processing. Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint of 2966.538 m3/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed reductions in production (6% and 3%, respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml) and potassium (9.78 mg/ml), thus rejuvenating soil quality and requiring no external nutritional supplements. The percentage increase of greenhouse gas emissions under irrigation with treated municipal wastewater, as compared to control plots, was 0.4% - 8.6% (CH4), 1.1% - 9.2% (CO2), and 0.07% - 5.8% (N2O). The variety Sharbati displayed the maximum production (5.5 ton/ha) and emerged as the variety most resistant to pests and insects. The emission values of CH4, CO2 and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d and 400.21 mg/m2/d under water-stagnant conditions. This study highlights the possibility of successfully reusing wastewater for non-potable purposes, offering the potential to exploit this resource to replace or reduce existing use of freshwater sources in the agricultural sector.
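
The green water footprint figure quoted above (2966.538 m3/ton) presumably follows the standard crop water footprint definition; a hedged LaTeX statement of that definition is reproduced below for reference.

```latex
% Standard definition of the green water footprint of a crop: green crop water
% use divided by yield, with the factor 10 converting mm of evapotranspiration
% over one hectare into m^3.
\[
  WF_{\mathrm{green}} \;=\; \frac{CWU_{\mathrm{green}}}{Y}
  \quad\left[\tfrac{\mathrm{m^{3}}}{\mathrm{ton}}\right],
  \qquad
  CWU_{\mathrm{green}} \;=\; 10 \sum_{d=1}^{\mathrm{harvest}} ET_{\mathrm{green},d}
  \quad\left[\tfrac{\mathrm{m^{3}}}{\mathrm{ha}}\right],
\]
% so a footprint of 2966.5 m^3/ton means roughly 2967 m^3 of rainwater consumed
% per ton of paddy harvested.
```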

Keywords: greenhouse gases, nutrients, water footprint, wastewater irrigation

Procedia PDF Downloads 291
57 The Efficiency of Mechanization in Weed Control in Artificial Regeneration of Oriental Beech (Fagus orientalis Lipsky.)

Authors: Tuğrul Varol, Halil Barış Özel

Abstract:

In this study, which was conducted in the Akçasu Forest Range District of the Devrek Forest Directorate, three methods (cover removal with human force, cover removal with a Hitachi F20 excavator, and cover removal with agricultural equipment mounted on a Ferguson 240S agricultural tractor) utilized in weed control efforts in the regeneration of degraded oriental beech forests were compared. In this respect, the three methods were compared by determining work hours and standard durations for unit areas (1 hectare). For this purpose, by evaluating the tasks performed with human and machine force in terms of duration, productivity and costs, it was aimed to determine the most productive method under the actual ecological conditions of the research field. Within the scope of the study, time studies were conducted for the three methods used in weed control efforts. While carrying out those studies, the performed operations were evaluated by dividing them into work stages. Also, actual data were used in calculating the costs. In those calculations, the latest formulas and equations, which are also used in developed countries, were utilized. Analysis of variance (ANOVA) was used to determine whether there is any statistically significant difference among the obtained results, and the Duncan test was used for grouping where significant differences existed. According to the measurements and findings of this study, it was found during living cover removal in the regeneration of degraded oriental beech forests that the removal of the weed layer in 1 hectare of field took 920 hours with human force, 15.1 hours with the excavator and 60 hours with the equipment mounted on a tractor. On the other hand, the cost of removing the living cover in the unit area (1 hectare) was determined to be 3220.00 TL for manpower, 788.70 TL for the excavator and 2227.20 TL for the equipment mounted on a tractor. According to the obtained results, the utilization of the excavator in weed control in the regeneration of degraded oriental beech areas under the actual ecological conditions of the research field was found to be more productive in terms of both duration and cost. These determinations should be repeated in weed control efforts in degraded forest areas with different ecological conditions, as this is necessary for finding the most efficient weed control method. The findings will guide the technical staff of forestry directorates in determining the most effective and economical weed control method. Thus, more realistic data will be used while preparing weed control budgets, and there will be significant contributions to the national economy. The results of this and similar studies are also very important for developing short- and long-term policies for our forestry.
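
A minimal Python sketch of the statistical treatment described (one-way ANOVA across the three methods) is given below; the per-plot time samples are invented placeholders scaled to the reported per-hectare totals, and only the cost figures are taken from the abstract.

```python
# One-way ANOVA over per-hectare working times for the three weed-control methods,
# using synthetic samples centred on the reported totals (920 h manual,
# 15.1 h excavator, 60 h tractor-mounted equipment).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
manual = rng.normal(920.0, 60.0, 10)     # hours per hectare, human force
excavator = rng.normal(15.1, 2.0, 10)    # hours per hectare, excavator
tractor = rng.normal(60.0, 6.0, 10)      # hours per hectare, tractor equipment

f, p = stats.f_oneway(manual, excavator, tractor)
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.2e}")

# Unit costs reported in the study (TL per hectare)
costs = {"manpower": 3220.00, "excavator": 788.70, "tractor equipment": 2227.20}
print("lowest-cost method:", min(costs, key=costs.get))
```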

Keywords: artificial regeneration, weed control, oriental beech, productivity, mechanization, man power, cost analysis

Procedia PDF Downloads 381
56 Forced Migration and Access to Maternal Healthcare in Internally Displaced Persons Camps in North-Central Nigeria

Authors: Faith O. Olanrewaju

Abstract:

Internal displacement and the vulnerability of women are two critical aspects of forced migration that have dominated both global and local discourses. Statistics show that in November 2021, there were over 2.1 million internally displaced persons (IDPs) in Nigeria. The literature also states that displaced women and girls are more vulnerable than displaced men. They are susceptible to adverse experiences, including various forms of sexual violence and rape. As a result, displaced women and girls face psychological and physical traumas, including HIV/AIDS as well as unexpected or poorly spaced pregnancies. In addition, the poor living conditions of internally displaced women in IDP camps affect their reproductive health, pregnancy outcomes, and maternal mortality levels. Incontrovertibly, internally displaced women are a major contributor to the ills of Nigeria's maternal health status, which is the second worst globally and the worst in Africa. World Health Organisation statistics show that approximately 536,000 girls and women die from pregnancy-related causes globally, and Nigeria accounts for 14% of global maternal deaths. Undeniably, this supports the claim that maternal mortality remains a challenge in Nigeria and can be exacerbated by internal displacement crises. Maternal mortality therefore remains a critical impediment to the actualisation of SDG target 3.1. Owing to this, concerns arise about the quality of policy in Nigeria's health sector. More specifically, this study is concerned with the maternal health care services displaced women receive in IDP camps in the three states affected by internal displacement in north-central Nigeria, an understudied area. The novelty of the study also lies in its comparative investigation of maternal healthcare service delivery in three different camp structures (faith-based, government, and informal IDP camps), a pattern that is absent in the literature. Therefore, this study will investigate how the camp structures affect access to maternal health services in the study areas; analyse the successes and challenges in the delivery of maternal health care services to displaced women in the various camps; and recommend strategies for reducing maternal healthcare disparities/gaps across IDP camps in Nigeria (should they exist). It will adopt a mixed-method approach and a multi-stage sampling technique. A total of 1,152 copies of the study questionnaire will be distributed to displaced pregnant and nursing mothers (PNM); nine focus group discussions will also be held with the displaced PNM; in-depth interviews will be conducted with humanitarian actors, policymakers, and health professionals. The quantitative and qualitative data will be analysed using the Statistical Package for the Social Sciences (SPSS) 21.0 and thematic analysis, respectively. The findings of the study will be used to develop a model of care that will address the fragmentations in Nigeria's healthcare system. The findings will also inform the development of best policies and practices in the maternal health of displaced women.

Keywords: forced displacement, internally displaced women, maternal healthcare, maternal mortality

Procedia PDF Downloads 138
55 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case

Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht

Abstract:

Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL is classically characterized as a solitary papulonodule that often enlarges, ulcerates, and can be locally destructive, but overall exhibits an indolent course with overall 5-year survival estimated to be 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential as sALCL confers a poorer prognosis with average 5-year survival being 40-50%. Although extremely rare, there have been several cases of ALK-positive ALCL diagnosed on skin biopsy without evidence of systemic involvement, which poses several challenges in the classification, prognostication, treatment, and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement, and a narrative review of the literature to further characterize that ALK-positive ALCL limited to the skin is a distinct variant with a unique presentation, history, and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule present on her right chest for two months. With the development of multifocal disease and persistent lymphadenopathy, a bone marrow biopsy and lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. The patient was counseled on pursuing systemic therapy consisting of Brentuximab, Cyclophosphamide, Doxorubicin, and Prednisone given the concern for sALCL. Apropos of the patient we searched for clinically evident, cutaneous ALK-positive ALCL cases, with and without systemic involvement, in the English literature. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations, and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease. The majority of cases that progressed to systemic disease in adults had recurring skin lesions and cytoplasmic localization of ALK. ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed in those cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric cases, a subset of cutaneous-limited ALK-positive ALCL were treated with chemotherapy. All cases treated with chemotherapy did not progress to systemic disease. Apropos of an ALK-positive ALCL patient with clinical cutaneous limited disease in the histologic presence of systemic markers, we discussed the literature data, highlighting the crucial issues related to developing a clinical strategy to approach this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence.

Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL

Procedia PDF Downloads 53
54 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safety and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used, simplified version of Bernardi's equation for estimation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static tests (when no current is flowing through the cell) and dynamic tests (making current flow through the cell) are conducted in which the HFS is used to measure the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the generated heat predicted by Bernardi's equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi's equation total heat generation) and compared against experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and the possible reasons for mismatch is reported. The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi's simplified equation. On the one hand, when using Bernardi's simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open-circuit voltage calculation (as it is SoC-dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28°C in the temperature prediction, in contrast with 1.38°C for Bernardi's simplified equation. This illustrates the limitations of Bernardi's simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi's equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi's equation accounts for no losses after the charging or discharging current is cut. However, the HFS measurement shows that after cutting the current, the cell continues generating heat for some time, increasing the error of Bernardi's equation.
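
For reference, one commonly cited form of Bernardi's simplified heat-generation equation is reproduced below with the sign convention of current positive during discharge; the abstract does not state which variant the authors used, so this is offered only as context.

```latex
% One common statement of Bernardi's simplified heat-generation equation
% (current I taken positive during discharge):
\[
  \dot{Q} \;=\;
  \underbrace{I\,\bigl(U_{\mathrm{OCV}} - V\bigr)}_{\text{irreversible (polarisation) heat}}
  \;-\;
  \underbrace{I\,T\,\frac{\partial U_{\mathrm{OCV}}}{\partial T}}_{\text{reversible (entropic) heat}}
\]
% where U_OCV is the open-circuit voltage, V the terminal voltage and T the cell
% temperature. When entropy data are unavailable, the fully simplified form keeps
% only the first term, i.e. \dot{Q} \approx I (U_OCV - V).
```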

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 331
53 Biomass Waste-To-Energy Technical Feasibility Analysis: A Case Study for Processing of Wood Waste in Malta

Authors: G. A. Asciak, C. Camilleri, A. Rizzo

Abstract:

Waste management in Malta is a national challenge. Coupled with Malta's recent economic boom, which has seen massive growth in several sectors, especially the construction industry, drastic actions need to be taken. Wood waste, currently being dumped in landfills, is one type of waste which has increased astronomically. This research study aims to carry out a thorough examination of the possibility of using this waste as a biomass resource and adopting a waste-to-energy technology in order to generate electrical energy. The study is composed of three distinct yet interdependent phases, namely, data collection from local SMEs, thermal analysis using a bomb calorimeter, and generation of energy from wood waste using a micro biomass plant. Data collection from SMEs specializing in wood works was carried out to obtain information regarding the available types of wood waste and the annual weight of imported wood, and to analyse the manner in which wood shavings are used after wood is manufactured. From this analysis, it emerged that the five most common types of wood available in Malta which would be suitable for generating energy are Oak (hardwood), Beech (hardwood), Red Beech (softwood), African Walnut (softwood) and Iroko (hardwood). Subsequently, based on the information collected, a thermal analysis using a 6200 Isoperibol calorimeter was performed on the five most common types of wood. This analysis was done to give a clear indication with regard to the burning potential, which is valuable when testing the wood in the biomass plant. The experiments carried out in this phase provided a clear indication that African Walnut generated the highest gross calorific value. This means that this type of wood released the highest amount of heat during combustion in the calorimeter. This is due to the high presence of extractives and lignin, which accounts for a slightly higher gross calorific value. It is followed by Red Beech and Oak. Moreover, based on the findings of the first phase, both African Walnut and Red Beech are highly imported into the Maltese Islands for use in various purposes. Oak, which has the third highest gross calorific value, is the most imported and commonly used wood. From the five types of wood, three were chosen for use in the power plant on the basis of their popularity and their heating values. The PP20 biomass plant was used to burn the three types of shavings in order to compare results related to the estimated feedstock consumed by the plant, the high temperatures generated, the time taken by the plant to reach gasification temperatures, and the projected electrical power attributed to each wood type. From the experiments, it emerged that while all three types reached the required gasification temperature and are thus feasible for electrical energy generation, African Walnut was deemed to be the most suitable fast-burning fuel. It is followed by Red Beech and Oak, which required a longer period of time to reach the required gasification temperatures. The results obtained provide a clear indication that wood waste can not only be treated instead of being dumped in landfill, but can also be coupled with energy generation.
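
As a rough illustration of how a measured gross calorific value translates into expected electrical output from a small gasifier such as the PP20, here is a hedged Python sketch; the feed rate, GCV figures and conversion efficiency are assumed placeholder values, not the measured results of this study.

```python
# Back-of-envelope link between gross calorific value (GCV) and electrical output.
# Feed rate, GCV values and overall conversion efficiency are illustrative only.

def electrical_power_kw(feed_rate_kg_per_h, gcv_mj_per_kg, efficiency=0.18):
    """P_el = m_dot * GCV * eta, converted from MJ/h to kW (1 kWh = 3.6 MJ)."""
    return feed_rate_kg_per_h * gcv_mj_per_kg * efficiency / 3.6

for wood, gcv in [("African Walnut", 19.5), ("Red Beech", 18.8), ("Oak", 18.2)]:
    print(f"{wood}: ~{electrical_power_kw(12.0, gcv):.1f} kWe at a 12 kg/h feed rate")
```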

Keywords: biomass, isoperibol calorimeter, waste-to-energy technology, wood

Procedia PDF Downloads 210
52 The Disease That 'Has a Woman Face': Feminization of HIV/AIDS in Nagaland, North-East India

Authors: Kitoholi V. Zhimo

Abstract:

Unlike the cases of homosexuals, haemophiliacs and/or drug users in the USA, France, Africa and other countries, in India the first case of HIV/AIDS was detected among heterosexual female sex workers (FSW) in Chennai in 1986. This image played an important role in understanding the HIV/AIDS scenario in the country. Similar to popular and dominant metaphors for HIV/AIDS around the world, such as 'gay plague', 'new cancer', 'lethal disease', 'slim disease', 'foreign disease', 'junkie disease', etc., the social construction of the virus in India was largely attributed to women. It was established that women, particularly sex workers, were 'carriers' and 'transmitters' of the virus, and they were categorised as High Risk Groups (HRGs) alongside homosexuals, transgender people and injecting drug users. Recent literature reveals a growing rate of HIV infection among housewives since 1997, which has revolutionised the public health scenario in India. This means a shift from high-risk groups to the general public through the 'bridge population', encompassing long-distance truckers and migrant labourers who, owing to the nature of their work and mobility, come in contact with HRGs and transmit the virus to the general public, especially women who are confined to the domestic space. As the HIV epidemic expands, married women in monogamous relationships/marriages stand highly susceptible to infection, with limited control, rights and access over their sexual and reproductive health and planning. In the context of Nagaland, a small state in the north-eastern part of India, HIV/AIDS transmission through injecting drug use dominated the early scene of the epidemic. However, a paradigm shift occurred with the declining trend of HIV prevalence among injecting drug users (IDUs) over the past years, following the introduction of Opioid Substitution Therapy (OST) and easy access to and availability of syringes and injecting needles. Reflection on statistical data reveals that, out of 36 states and union territories in India, the position of Nagaland in HIV prevalence among IDUs has dropped significantly, from 6th position in 2003 to 16th position in 2017. The present face of the virus in Nagaland is defined by the (hetero)sexual mode of transmission, which accounts for about 91% of cases as reported by the Nagaland State AIDS Control Society (NSACS) in 2016, wherein young and married women were found to be the most affected, leading to the feminization of the HIV/AIDS epidemic in the state. Thus, not only is the HIV epidemic feminised, but affected women also emerge as victims of domestic violence, which is more often accepted as a normal part of heterosexual relationships. Against the backdrop of these understandings, the present paper, based on ethnographic fieldwork, explores the plight, lived experiences and images of HIV-positive women with regard to sexual and reproductive rights within the patriarchal system in Nagaland.

Keywords: HIV/AIDS, monogamy, Nagaland, sex worker disease, women

Procedia PDF Downloads 136
51 Scenarios of Digitalization and Energy Efficiency in the Building Sector in Brazil: 2050 Horizon

Authors: Maria Fatima Almeida, Rodrigo Calili, George Soares, João Krause, Myrthes Marcele Dos Santos, Anna Carolina Suzano E. Silva, Marcos Alexandre Da

Abstract:

In Brazil, the building sector accounts for 1/6 of energy consumption and 50% of electricity consumption. A complex sector with several driving actors plays an essential role in the country's economy. Currently, the digitalization readiness in this sector is still low, mainly due to the high investment costs and the difficulty of estimating the benefits of digital technologies in buildings. Nevertheless, the potential contribution of digitalization for increasing energy efficiency in the building sector in Brazil has been pointed out as relevant in the political and sectoral contexts, both in the medium and long-term horizons. To contribute to the debate on the possible evolving trajectories of digitalization in the building sector in Brazil and to subsidize the formulation or revision of current public policies and managerial decisions, three future scenarios were created to anticipate the potential energy efficiency in the building sector in Brazil due to digitalization by 2050. This work aims to present these scenarios as a basis to foresight the potential energy efficiency in this sector, according to different digitalization paces - slow, moderate, or fast in the 2050 horizon. A methodological approach was proposed to create alternative prospective scenarios, combining the Global Business Network (GBN) and the Laboratory for Investigation in Prospective Strategy and Organisation (LIPSOR) methods. This approach consists of seven steps: (i) definition of the question to be foresighted and time horizon to be considered (2050); (ii) definition and classification of a set of key variables, using the prospective structural analysis; (iii) identification of the main actors with an active role in the digital and energy spheres; (iv) characterization of the current situation (2021) and identification of main uncertainties that were considered critical in the development of alternative future scenarios; (v) scanning possible futures using morphological analysis; (vi) selection and description of the most likely scenarios; (vii) foresighting the potential energy efficiency in each of the three scenarios, namely slow digitalization; moderate digitalization, and fast digitalization. Each scenario begins with a core logic and then encompasses potentially related elements, including potential energy efficiency. Then, the first scenario refers to digitalization at a slow pace, with induction by the government limited to public buildings. In the second scenario, digitalization is implemented at a moderate pace, induced by the government in public, commercial, and service buildings, through regulation integrating digitalization and energy efficiency mechanisms. Finally, in the third scenario, digitalization in the building sector is implemented at a fast pace in the country and is strongly induced by the government, but with broad participation of private investments and accelerated adoption of digital technologies. As a result of the slow pace of digitalization in the sector, the potential for energy efficiency stands at levels below 10% of the total of 161TWh by 2050. In the moderate digitalization scenario, the potential reaches 20 to 30% of the total 161TWh by 2050. Furthermore, in the rapid digitalization scenario, it will reach 30 to 40% of the total 161TWh by 2050.
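
The scenario outcomes can be restated in absolute terms with a few lines of Python, using only the figures quoted above (the 161 TWh total and the percentage bands for each digitalization pace).

```python
# Potential electricity savings per scenario, from the percentages quoted in the
# abstract applied to the 161 TWh consumption figure.
TOTAL_TWH = 161.0
scenarios = {
    "slow digitalization": (0.00, 0.10),
    "moderate digitalization": (0.20, 0.30),
    "fast digitalization": (0.30, 0.40),
}
for name, (lo, hi) in scenarios.items():
    print(f"{name}: {lo * TOTAL_TWH:.0f}-{hi * TOTAL_TWH:.0f} TWh by 2050")
```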

Keywords: building digitalization, energy efficiency, scenario building, prospective structural analysis, morphological analysis

Procedia PDF Downloads 82
50 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, and the application of reverse-time propagation to the acquired seismic data and the original wavelet used in the acquisition. The methodology has matured from 2D imaging in simple media to present-day full 3D imaging in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the 'bend-ray' method; this process is known as migration. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. This flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, using PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images will produce a full image of the organ, with a much-reduced noise level compared with the individual partial images.
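
To make the extrapolation step concrete, below is a minimal constant-velocity phase-shift step in the frequency-wavenumber domain, the building block that PSPI evaluates for several reference velocities and then interpolates; the grid sizes, sampling and velocity are illustrative placeholders, not parameters of the proposed implementation.

```python
# Constant-velocity phase-shift extrapolation of a 2D wavefield p(x, t) by one
# depth step dz. PSPI repeats this for several reference velocities and
# interpolates; here a single velocity is used for illustration.
import numpy as np

def phase_shift_step(wavefield_xt, dx, dt, dz, velocity):
    nx, nt = wavefield_xt.shape
    P = np.fft.fft2(wavefield_xt)                      # to (kx, omega) domain
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)
    KX, W = np.meshgrid(kx, omega, indexing="ij")
    kz2 = (W / velocity) ** 2 - KX ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    P = np.where(kz2 > 0.0, P * np.exp(1j * kz * dz), 0.0)  # keep propagating part
    return np.real(np.fft.ifft2(P))

p = np.zeros((128, 256))
p[64, 10] = 1.0                                        # toy impulsive wavefield
p_next = phase_shift_step(p, dx=1e-3, dt=1e-7, dz=1e-3, velocity=1500.0)
print(p_next.shape)
```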

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 46
49 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland

Authors: A. Sgobba, C. Meskell

Abstract:

The feasibility of on-site electricity production from solar and wind and the resulting load management for a specific manufacturing plant in Ireland are assessed. The industry sector accounts directly and indirectly for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas since they require open spaces for production machinery, parking facilities for the employees, appropriate routes for supply and delivery, special connections to the national grid and other environmental impacts. Since they have larger spaces compared to commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low power density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be efficiently used to produce a discrete quantity of energy, instantaneously and locally consumed. Therefore, transmission and distribution losses can be reduced. The usage of storage is not required due to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills. These data are not often recorded and available to third parties since manufacturing companies usually keep track only of the overall energy expenditures. The solar potential is modelled for a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using PVWatts and SAM tools. Wind speed data are available for the same period within one-hour step at a height of 10m. Since the hub of a typical wind turbine reaches a higher altitude, complementary data for a different location at 50m have been compared, and a model for the estimate of wind speed at the required height in the right location is defined. Weibull Statistical Distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied EENS) are calculated. The economic viability of the project is assessed through Net Present Value, and the influence the main technical and economic parameters have on NPV is presented. Since the results show that the analysed renewable sources can not provide enough electricity, the integration with a cogeneration technology is studied. Finally, the benefit to energy system integration of wind, solar and a cogeneration technology is evaluated and discussed.
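
A hedged Python sketch of the wind-resource part of such an assessment is shown below: power-law extrapolation of the 10 m record to hub height and an expected-yield integral over an assumed Weibull distribution with a generic simplified power curve; the shape and scale parameters, hub height and turbine ratings are placeholders rather than values from the study.

```python
# Wind-resource sketch: height extrapolation, Weibull distribution and a simple
# power curve. All parameter values are illustrative placeholders.
import numpy as np

def hub_height_speed(v10, z_hub=50.0, z_ref=10.0, alpha=0.14):
    """Power-law vertical extrapolation v(z) = v_ref * (z / z_ref)**alpha."""
    return v10 * (z_hub / z_ref) ** alpha

def weibull_pdf(v, k=2.0, c=7.0):
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def turbine_power_kw(v, rated_kw=100.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    ramp = rated_kw * ((v - v_in) / (v_rated - v_in)) ** 3
    p = np.where((v >= v_in) & (v < v_rated), ramp, 0.0)
    return np.where((v >= v_rated) & (v < v_out), rated_kw, p)

print("hub-height speeds:", np.round(hub_height_speed(np.array([5.0, 7.0, 9.0])), 2))

v = np.linspace(0.0, 30.0, 601)
dv = v[1] - v[0]
expected_kw = np.sum(turbine_power_kw(v) * weibull_pdf(v)) * dv
print(f"expected output ~{expected_kw:.0f} kW, ~{expected_kw * 8760 / 1000:.0f} MWh/year")
```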

Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources

Procedia PDF Downloads 101
48 Modern Architecture and the Scientific World Conception

Authors: Sean Griffiths

Abstract:

Introduction: This paper examines the expression of ‘objectivity’ in architecture in the context of the post-war rejection of this concept. It aims to re-examine the question in light of the assault on truth characterizing contemporary culture and of the unassailable truth of the climate emergency. The paper analyses the search for objective truth as it was prosecuted in the Modern Movement in the early 20th century, looking at the extent to which this quest was successful in contributing to the development of a radically new, politically-informed architecture and the extent to which its particular interpretation of objectivity, limited that development. The paper studies the influence of the Vienna Circle philosophers Rudolph Carnap and Otto Neurath on the pedagogy of the Bauhaus and the architecture of the Neue Sachlichkeit in Germany. Their logical positivism sought to determine objective truths through empirical analysis, expressed in an austere formal language as part of a ‘scientific world conception’ which would overcome metaphysics and unverifiable mystification. These ideas, and the concurrent prioritizing of measurement as the determinant of environmental quality, became key influences in the socially-driven architecture constructed in the 1920s and 30s by Bauhaus architects in numerous German Cities. Methodology: The paper reviews the history of the early Modern Movement and summarizes accounts of the relationship between the Vienna Circle and the Bauhaus. It looks at key differences in the approaches Neurath and Carnap took to the achievement of their shared philosophical and political aims. It analyses how the adoption of Carnap’s foundationalism influenced the architectural language of modern architecture and compares, through a close reading of the structure of Neurath’s ‘protocol sentences,’ the latter’s alternative approach, speculating on the possibility that its adoption offered a different direction of travel for Modern Architecture. Findings: The paper finds that the adoption of Carnap’s foundationalism, while helping Modern Architecture forge a new visual language, ultimately limited its development and is implicated in its failure to escape the very metaphysics against which it had set itself. It speculates that Neurath’s relational language-based approach to the issue of establishing objectivity has its architectural corollary in the process of revision and renovation that offers new ways an ‘objective’ language of architecture might be developed in a manner that is more responsive to our present-day crisis. Conclusion: The philosophical principles of the Vienna Circle and the architects of the Modern Movement had much in common. Both contributed to radical historical departures which sought to instantiate a world scientific conception in their respective fields, which would attempt to banish mystification and metaphysics and would align itself with socialism. However, in adopting Carnap’s foundationalism as the theoretical basis for the new architecture, Modern Architecture not only failed to escape metaphysics but arguably closed off new avenues of development to itself. The adoption of Neurath’s more open-ended and interactive approach to objectivity offers possibilities for new conceptions of the expression of objectivity in architecture that might be more tailored to the multiple crises we face today.

Keywords: Bauhaus, logical positivism, Neue Sachlichkeit, rationalism, Vienna Circle

Procedia PDF Downloads 48
47 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability. Visual obstruction of motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as the offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to the distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors. Hard iron distortion is the larger contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used here to track the motion of a multi-link system. Conventional calibration by collecting rotation data at a static point, real-time estimation of the calibration parameters at each time step, and the use of two magnetometers for determining local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than being assumed constant regardless of positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to induce impulsive rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
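
For context, the conventional static calibration used here as a baseline can be sketched in a few lines of Python: rotate the sensor, collect samples, and take the per-axis midpoint of the observed extremes as the hard-iron offset estimate; the simulated field, bias and noise level are placeholders.

```python
# Conventional hard-iron calibration sketch: the offset of the sample cloud from
# the origin is estimated as the midpoint of the per-axis extremes while the
# sensor is rotated through many orientations. Simulated values are placeholders.
import numpy as np

rng = np.random.default_rng(3)
true_field = 50.0                                  # ambient field magnitude (uT)
hard_iron = np.array([12.0, -7.5, 3.0])            # constant bias from nearby iron

yaw = rng.uniform(0.0, 2 * np.pi, 2000)
pitch = rng.uniform(-np.pi / 2, np.pi / 2, 2000)
field = true_field * np.stack([np.cos(yaw) * np.cos(pitch),
                               np.sin(yaw) * np.cos(pitch),
                               np.sin(pitch)], axis=1)
samples = field + hard_iron + rng.normal(0.0, 0.5, field.shape)

offset_estimate = 0.5 * (samples.max(axis=0) + samples.min(axis=0))
print("estimated hard-iron offset:", np.round(offset_estimate, 2))
```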

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 50
46 Mandate of Heaven and Serving the People in Chinese Political Rhetoric: An Evolving Discourse System across Three Thousand Years

Authors: Weixiao Wei, Chris Shei

Abstract:

This paper describes the Mandate of Heaven as a source of justification for the ruling regime, originating in ancient China approximately three thousand years ago. Initially, the kings of the Shang dynasty simply nominated themselves as the sons of Heaven sent to Earth to rule the common people. As the last generation of kings became corrupt and ruled with brutal force and cruelty, which directly caused their destruction, the succeeding kings of the Zhou dynasty realised the importance of virtue and of providing goods to the people. The legitimacy of ruling regimes came to rest not entirely on the random allocation of the throne by an unknown supernatural force but on a foundation comprising morality and the ability to provide goods. The latter composite was picked up by the current ruling regime, the Chinese Communist Party, and became the cornerstone of its political legitimacy, also known as 'performance legitimacy', where economic development accounts for the satisfaction of the people in place of elections and other democratic means of providing legal-rational legitimacy. Under these circumstances, it also becomes important for the ruling party to use political rhetoric to convince people of the good performance of the government in the economy, morality, and foreign policy. Thus, we see a great deal of propaganda material in both government policy statements and international press conference announcements. The former consists mainly of important speeches made by prominent figures at Party conferences, which are not only made publicly available on government websites but also become obligatory reading material for university entrance examinations. The latter consists of announcements about foreign policies and about strategies and actions taken by the government regarding foreign affairs, made at international conferences and offered in Chinese-English bilingual versions on official websites. This documentation strategy creates an impressive image of a Chinese Communist Party that is domestically competent and internationally strong, taking care of the people it governs in terms of economic needs and defending the country against foreign interference and global adversities. This political discourse system, comprising reading materials fully extractable from government websites, also becomes an excellent repertoire for teaching and research in contemporary Chinese language, discourse and rhetoric, Chinese culture and tradition, Chinese political ideology, and Chinese-English translation. This paper aims to provide a detailed and comprehensive description of the current Chinese political discourse system, arguing for its lineage from the rhetorical convention of the Mandate of Heaven in ancient China and its current concentration on serving the people in place of elections, human rights, and freedom of speech. The paper will also provide guidelines as to how this discourse system, and the official documents created under it, can become excellent research and teaching materials in applied linguistics.

Keywords: mandate of heaven, Chinese communist party, performance legitimacy, serving the people, political discourse

Procedia PDF Downloads 79
45 Nonequilibrium Effects in Photoinduced Ultrafast Charge Transfer Reactions

Authors: Valentina A. Mikhailova, Serguei V. Feskov, Anatoly I. Ivanov

Abstract:

In the last decade, nonequilibrium charge transfer has attracted considerable interest from the scientific community. Examples of such processes are charge recombination in excited donor-acceptor complexes and intramolecular electron transfer from the second excited electronic state. In these reactions the charge transfer proceeds predominantly in the nonequilibrium mode. In excited donor-acceptor complexes the nuclear nonequilibrium is created by the pump pulse; intramolecular electron transfer from the second excited electronic state is an example where the nuclear nonequilibrium is created by the forward electron transfer. The kinetics of these nonequilibrium reactions demonstrate a number of peculiar properties, the most important of which are: (i) the absence of the Marcus normal region in the free energy gap law for charge recombination in excited donor-acceptor complexes, (ii) the extremely low quantum yield of the thermalized charge-separated state in ultrafast charge transfer from the second excited state, (iii) the nonexponential charge recombination dynamics in excited donor-acceptor complexes, and (iv) the dependence of the charge transfer rate constant on the excitation pulse frequency. This report shows that most of these kinetic features can be well reproduced within a stochastic point-transition multichannel model. The model involves an explicit description of the formation of the nonequilibrium excited state by the pump pulse and accounts for the reorganization of intramolecular high-frequency vibrational modes, for their relaxation, and for solvent relaxation. The model is able to quantitatively reproduce the complex nonequilibrium charge transfer kinetics observed in modern experiments. Interpreting the nonequilibrium effects from a unified point of view in terms of the multichannel point-transition stochastic model makes it possible to see the similarities and differences of the electron transfer mechanism in various molecular donor-acceptor systems and to formulate general regularities inherent in these phenomena. The nonequilibrium effects in photoinduced ultrafast charge transfer studied over the last ten years are analyzed, and methods of suppressing ultrafast charge recombination, as well as the similarities and dissimilarities of the electron transfer mechanism in different molecular donor-acceptor systems, are discussed. The extremely low quantum yield of the thermalized charge-separated state observed in ultrafast charge transfer from the second excited state in the complex consisting of 1,2,4-trimethoxybenzene and tetracyanoethylene in acetonitrile solution directly demonstrates that the effectiveness of the nonequilibrium pathway can be close to unity. This experimental finding supports the idea that nonequilibrium charge recombination in excited donor-acceptor complexes can also be very effective, so that the fraction of thermalized complexes is negligible. The regularities inherent in equilibrium and nonequilibrium reactions and their fundamental differences are analyzed, namely the opposite dependencies of the charge transfer rates on the dynamical properties of the solvent: an increase in solvent viscosity decreases the thermal rate but increases the nonequilibrium rate. The dependencies of the rates on the solvent reorganization energy and the free energy gap can also differ considerably. This work was supported by the Russian Science Foundation (Grant No. 16-13-10122).
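For reference, the equilibrium (thermal) nonadiabatic rate against which these nonequilibrium kinetics are contrasted is the standard Marcus expression, whose normal and inverted regions are what item (i) above refers to; the notation below is generic rather than the authors':

```latex
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert V\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{\mathrm{B}}T}}\,
\exp\!\left[-\frac{(\Delta G + \lambda)^{2}}{4\lambda k_{\mathrm{B}}T}\right]
```

Here V is the electronic coupling, lambda the reorganization energy, and Delta G the reaction free energy; in the nonequilibrium regime described above the rate additionally depends on the initially prepared nuclear configuration and on the excitation pulse frequency.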

Keywords: charge recombination, higher excited states, free energy gap law, nonequilibrium

Procedia PDF Downloads 287
44 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces, and vortex-induced oscillation is one such excitation that can lead to failure. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past, and extensive research has taken place on the subject over the decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects, whereas comparatively few prototype measurement data have been recorded to verify these models. For this reason, theoretical models developed with the help of laboratory data are used for analyzing chimneys for vortex-induced forces, which calls for a reliability analysis of the predicted responses of chimneys to vortex shedding. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach; for this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement is determined, and the reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to correspond to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
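As a rough illustration of the final step described above, the sketch below integrates a conditional threshold-crossing probability over a Gumbel type-I distribution of the annual mean wind velocity to obtain one ordinate of a fragility curve; the conditional-probability callable, the velocity grid, and the distribution parameters are placeholders, not values from the study.

```python
import numpy as np
from scipy.stats import gumbel_r

def annual_crossing_probability(threshold, crossing_prob_given_v,
                                gumbel_loc, gumbel_scale,
                                v_grid=np.linspace(5.0, 60.0, 500)):
    """One fragility-curve ordinate: the annual probability that the tip
    displacement exceeds `threshold`, obtained by weighting the conditional
    crossing probability (from the spectral/Vanmarcke analysis, supplied
    here as a callable) by the Gumbel density of the annual mean wind speed."""
    pdf = gumbel_r.pdf(v_grid, loc=gumbel_loc, scale=gumbel_scale)
    conditional = np.array([crossing_prob_given_v(threshold, v) for v in v_grid])
    return np.trapz(conditional * pdf, v_grid)
```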

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 128
43 The Effects of Lithofacies on Oil Enrichment in Lucaogou Formation Fine-Grained Sedimentary Rocks in Santanghu Basin, China

Authors: Guoheng Liu, Zhilong Huang

Abstract:

For more than ten years, oil and gas have been produced from marine shales such as the Barnett shale. In addition, in recent years, major breakthroughs have also been made in lacustrine shale gas exploration, for example in the Yanchang Formation of the Ordos Basin in China. Lucaogou Formation shale, also a lacustrine shale, has likewise yielded high production in recent years in wells such as M1, M6, and ML2, with daily oil production of 5.6 tons, 37.4 tons, and 13.56 tons, respectively. Lithologic identification and classification of reservoirs are the basis of and key to oil and gas exploration. Lithology and lithofacies clearly control the distribution of oil and gas in lithological reservoirs, so it is of great significance to describe the lithology and lithofacies of reservoirs in detail. Lithofacies is an intrinsic property of a rock formed under certain sedimentary conditions, and fine-grained sedimentary rocks such as shale formed under different sedimentary conditions display great particularity and distinctiveness. Hence, to the best of our knowledge, no constant and unified criteria or methods exist for defining and classifying the lithofacies of fine-grained sedimentary rocks; consequently, multiple parameters and multiple disciplines are necessary. A series of qualitative descriptions and quantitative analyses were used to characterize the lithofacies of the Lucaogou Formation fine-grained sedimentary rocks in the Santanghu basin and their effect on oil accumulation. The qualitative descriptions include core description, petrographic thin-section observation, fluorescent thin-section observation, cathodoluminescence observation, and scanning electron microscope observation. The quantitative analyses include X-ray diffraction, total organic content analysis, the Rock-Eval II methodology, Soxhlet extraction, porosity and permeability analysis, and oil saturation analysis. Three types of lithofacies are well developed in the study area: organic-rich massive shale lithofacies, organic-rich laminated and cloddy hybrid sedimentary lithofacies, and organic-lean massive carbonate lithofacies. The organic-rich massive shale lithofacies mainly includes massive shale and tuffaceous shale, of which quartz and clay minerals are the major components. The organic-rich laminated and cloddy hybrid sedimentary lithofacies contains laminae and cloddy structures, and rocks of this lithofacies chiefly consist of dolomite and quartz. The organic-lean massive carbonate lithofacies mainly contains massive-bedded fine-grained carbonate rocks, of which fine-grained dolomite accounts for the main part. The organic-rich massive shale lithofacies contains the highest content of free hydrocarbons and solid organic matter, and the most pores are developed in it. The organic-lean massive carbonate lithofacies contains the lowest content of solid organic matter and develops the fewest pores. The organic-rich laminated and cloddy hybrid sedimentary lithofacies develops the largest number of cracks and fractures. In summary, the organic-rich massive shale lithofacies is the most favorable type of lithofacies, whereas the organic-lean massive carbonate lithofacies cannot support large-scale oil accumulation.

Keywords: lithofacies classification, tuffaceous shale, oil enrichment, Lucaogou formation

Procedia PDF Downloads 177
42 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter

Authors: Jiri Cecrdle

Abstract:

This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the rotating parts of the engine. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter is a serious problem that may cause unstable vibration of the propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter requires experimental validation of analytically obtained results. The W-WING aeroelastic demonstrator has been designed and developed at the Czech Aerospace Research Centre (VZLU) in Prague, Czechia. The demonstrator represents the wing and engine of a twin-turboprop commuter aircraft. Unlike most past demonstrators, it includes a powered motor and a thrusting propeller, and it allows changes to the main structural parameters influencing the whirl flutter stability characteristics. The propeller blades are adjustable at standstill. The demonstrator is instrumented with strain gauges, accelerometers, a revolution-counting impulse sensor, an airflow velocity sensor, and a thrust measurement unit. Measurement is supported by an in-house program providing data storage and real-time display in the time domain as well as pre-processing into power spectral densities. The engine is linked with a servo-drive unit, which enables the propeller revolutions to be maintained (constant or on a controlled ramp) and the instantaneous revolutions and power to be monitored. Furthermore, the program manages the aerodynamic excitation of the demonstrator by aileron flapping (constant, sweep, impulse) and provides a safety guard to prevent structural failure of the demonstrator hardware. In addition, the LMS TestLab system is used for measuring the structural response and for assessing the data by means of FFT- and OMA-based methods. The demonstrator is intended for experimental investigations in the VZLU 3 m diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and the angle of attack of the propeller blade at the 75% section. The excitation is provided either by airflow turbulence or by aerodynamic excitation from aileron flapping using a harmonic frequency sweep. The experimental results are planned to be used for validating analytical methods and software tools within the development of a new complex multi-blade twin-rotor propulsion system for a new generation of regional aircraft. The experimental campaigns will include measurements of aerodynamic derivatives and of stability boundaries for various configurations of the demonstrator.
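As a sketch of the pre-processing mentioned above, raw time-domain channels can be converted into power spectral densities with a Welch estimate as shown below; the sampling rate, segment length, and window are assumptions, not the demonstrator's actual settings.

```python
from scipy.signal import welch

def response_psd(channel, fs):
    """Power spectral density of one measured response channel
    (e.g. an accelerometer signal from the demonstrator)."""
    freqs, pxx = welch(channel, fs=fs, window='hann',
                       nperseg=4096, detrend='constant')
    return freqs, pxx
```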

Keywords: aeroelasticity, flutter, whirl flutter, W-WING demonstrator

Procedia PDF Downloads 53
41 Service Quality, Skier Satisfaction, and Behavioral Intentions in Leisure Skiing: The Case of Beijing

Authors: Shunhong Qi, Hui Tian

Abstract:

Triggered by the forthcoming 2022 Winter Olympics, ski centers are blossoming in China, numbering 742 in 2018. Although skier visits to ski resorts soared to 19.7 million in 2018, one-time skiers account for a considerable portion of them. In light of the extremely low return rates and skiing penetration level (0.5%) of leisure skiing in China, this study proposes and tests a leisure ski service performance framework that assesses ski resorts' service quality and skier satisfaction, as well as their impact on skiers' behavioral intentions, with the aim of assessing the success of ski resorts and providing suggestions for improvement. Three self-administered surveys and 16 interviews were conducted on a convenience sample of leisure skiers in two major ski destinations within two hours' drive from Beijing, the Nanshan and Jundushan ski resorts. Of the 680 questionnaires distributed, 416 usable copies were returned, a response rate of 61.2%. The questionnaire was developed from the existing literature on skiers' 'push' factors (intrinsic desire) and 'pull' factors (attractiveness of a destination), as well as leisure sport satisfaction. The scale comprises four parts: skiers' demographic profiles; their perceived service quality (including the ski resorts' infrastructure, expense, safety and comfort, convenience, daily needs support, skill development support, and accessibility); their overall levels of satisfaction (satisfaction with the service and with the experience); and their behavioral intentions (including loyalty, future visitation, and greater tolerance of price increases). The demographic profiles show that among the 220 males and 196 females in the survey, a vast majority of the skiers are aged 17-39 (87.2%), 64.7% are not married, nearly half (48.3%) have a monthly family income exceeding 10,000 yuan (USD 1,424), and 80% are beginners or intermediate skiers. The regression examining the influence of service quality on skier satisfaction reveals that service quality accounts for 44.4% of the variance in skier satisfaction, with safety and comfort, expense, skill development support, and accessibility contributing significantly, in descending order. Another regression analyzing the influence of service quality and skier satisfaction on behavioral intentions shows that the two together account for 39.1% of the variance in skiers' behavioral intentions; the significant predictors are skier satisfaction, safety and comfort, expense, and accessibility, in descending order, though a comparison between groups indicates that for expert skiers the significant variables are skier satisfaction, skill development support, and safety and comfort. Suggestions are thus made for ski resorts and other stakeholders to improve skier satisfaction and increase visitation: developing diversified ski courses to meet the demands of skiers of different skill levels and to reduce crowding; providing enough chairlifts and magic carpets; reinforcing safety measures and medical staff; further exploiting their various resources to lower the expense of ski passes, equipment rental, accommodation, and dining; adding more bus lines and/or developing platforms for skiers' car-pooling; and offering diversified skiing activities with local flavor for better entertainment.
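A minimal sketch of the kind of regression reported above is given below, assuming the survey items have already been aggregated into construct scores; the column names are illustrative, not the questionnaire's actual items.

```python
import pandas as pd
import statsmodels.api as sm

def fit_satisfaction_model(df: pd.DataFrame):
    """OLS regression of overall skier satisfaction on perceived
    service-quality dimensions (illustrative column names)."""
    predictors = ['safety_comfort', 'expense', 'skill_support', 'accessibility',
                  'infrastructure', 'convenience', 'daily_needs']
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df['satisfaction'], X).fit()
    # model.rsquared gives the share of variance explained (44.4% for
    # satisfaction in the study); model.params ranks the predictors.
    return model
```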

Keywords: behavioral intentions, leisure skiing, service quality, skier satisfaction

Procedia PDF Downloads 65
40 Concepts of Modern Design: A Study of Art and Architecture Synergies in Early 20ᵗʰ Century Europe

Authors: Stanley Russell

Abstract:

Until the end of the 19th century, European painting dealt almost exclusively with the realistic representation of objects and landscapes, as can be seen in the work of realist artists like Gustave Courbet. Architects of the day typically made reference to, and recreated, historical precedents in their designs. The curriculum of the first architecture school in Europe, the École des Beaux-Arts, based on the study of classical buildings, had a profound effect on the profession. Painting exhibited an increasing level of abstraction from the late 19th century, with Impressionism, and the trend continued into the early 20th century, when Cubism had an explosive effect, sending shock waves through the art world that also extended into the realm of architectural design. The architect and painter Le Corbusier, with 'Purism', was one of the first to integrate abstract painting and building design theory in works that were equally shocking to the architecture world. The interrelationship of the arts, including architecture, was institutionalized in the Bauhaus curriculum, which sought to find commonality between diverse art disciplines. The renowned painter and Bauhaus instructor Vassily Kandinsky was one of the first artists to make a semi-scientific analysis of the elements of 'non-objective' painting, while also drawing parallels between painting and architecture, in his book Point and Line to Plane. The Russian Constructivists made abstract compositions with simple geometric forms, and like the De Stijl group of the Netherlands, they also experimented with full-scale constructions and spatial explorations. Based on the study of historical accounts and original artworks of Impressionism, Cubism, the Bauhaus, De Stijl, and Russian Constructivism, this paper begins with a thorough explanation of the art theory and several key works from these important art movements of the late 19th and early 20th century. Similarly, based on written histories and first-hand experience of built and drawn works, the author continues with an analysis of the theories and architectural works generated by the same groups, all of which actively pursued continuity between their art and architectural concepts. With images of specific works, the author shows how the trend toward abstraction and geometric purity in painting coincided with a similar trend in architecture that favored simple, unornamented geometries. Using examples like the Villa Savoye, the Schroeder House, the Dessau Bauhaus, and unbuilt designs by the Russian architect Chernikov, the author gives detailed examples of how the intersection of trends in art and architecture led to a unique and fruitful period of creative synergy, when the same concepts that artists used to generate paintings were also used by architects in the making of objects, space, and buildings. In conclusion, this article examines the pivotal period in art and architecture history from the late 19th to the early 20th century, when the confluence of art and architectural theory led to many painted, drawn, and built works that continue to inspire architects and artists to this day.

Keywords: modern art, architecture, design methodologies, modern architecture

Procedia PDF Downloads 96
39 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex, and it is therefore difficult to devise a general mathematical model that covers all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based on simplifying assumptions regarding material behavior. Such simplifications result in some idealization of the material; one of the simplest idealizations, for example, is to assume that the material behaves elastically. Soils, however, are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models possessing different levels of sophistication have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed, such as bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface; thus, unlike classical yield surface elastoplasticity, plastic states are not restricted to those lying on a surface. Elastoplastic bounding surface models have been improved over time; however, there is still a need to improve their capability to simulate the response of anisotropically consolidated cohesive soils, especially the response in extension tests. Thus, in this work an improved constitutive model was developed that can more accurately predict the diverse stress-strain phenomena exhibited by cohesive soils, in particular through an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The numerical implementation of the model in a computer code follows an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule is used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. After testing the model through extensive comparisons of model simulations to experimental data on cohesive soils, it has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils, for example Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
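The mention of radial return above refers to the standard elastic-predictor/plastic-corrector stress update. The toy one-dimensional version below, with linear isotropic hardening, only illustrates that update; it is in no way the paper's three-dimensional anisotropic bounding-surface formulation.

```python
def radial_return_1d(eps_total, eps_plastic, E, sigma_y, H):
    """Toy 1D return mapping: elastic predictor followed by a plastic
    corrector that returns the stress to the (hardened) yield surface."""
    sigma_trial = E * (eps_total - eps_plastic)      # elastic predictor
    overstress = abs(sigma_trial) - sigma_y          # yield function value
    if overstress <= 0.0:
        return sigma_trial, eps_plastic, sigma_y     # purely elastic step
    dgamma = overstress / (E + H)                    # plastic multiplier
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign          # corrected stress
    return sigma, eps_plastic + dgamma * sign, sigma_y + H * dgamma
```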

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 290
38 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is still based on the design of timber structures for standard fire exposure, while modern principles of performance-based design enable the use of advanced, non-standard fire curves. In Europe, the standard for the fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by the so-called zero strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, namely 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Designers therefore often apply the 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero strength layer and the charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model, which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire loads is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, material non-linearity and temperature-dependent behaviour are considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to have a fixed temperature of around 300 °C. Based on the performed study and observations, improved charring rates and new thicknesses of the zero strength layer in the case of parametric fires are determined. The reduced cross-section method is thus substantially improved, offering practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and the key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
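For context, the reduced cross-section method referred to above defines the effective charring depth under standard fire exposure (EN 1995-1-2) as

```latex
d_{\mathrm{ef}} \;=\; d_{\mathrm{char},n} \;+\; k_{0}\, d_{0},
\qquad d_{0} = 7~\mathrm{mm},
```

where d_char,n is the notional charring depth and, for unprotected surfaces, k_0 grows linearly from 0 to 1 over the first 20 minutes of exposure. The study investigates which values of d_0 and of the charring rate remain appropriate once the standard curve is replaced by parametric fire curves.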

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 131
37 Analysing the Influence of COVID-19 on Major Agricultural Commodity Prices in South Africa

Authors: D. Mokatsanyane, J. Jansen Van Rensburg

Abstract:

This paper analyses the influence and impact of COVID-19 on major agricultural commodity prices in South Africa. According to a World Bank report, the agricultural sector in South Africa has been unable to reduce the domestic food crisis of recent years, hence the increased rate of poverty, which stood at 55.5 percent as of April 2020. Despite the significance of this sector, empirical findings show that the agricultural sector now accounts for only 1.88 percent of South Africa's gross domestic product (GDP), suggesting that its contribution to the economy has diminished. Despite this low contribution to GDP, the primary sector continues to play an essential role in the economy. Over the past years, multiple factors have contributed to soaring commodity prices, namely climate shocks, biofuel demand, demand and supply shocks, the exchange rate, speculation in commodity derivative markets, trade restrictions, and economic growth. The COVID-19 outbreak has disturbed the supply of and demand for staple crops. To address the disruption, the government exempted the agricultural sector from closure and restrictions on movement. The spread of COVID-19 has caused turmoil all around the world, but mostly in developing countries. According to Statistics South Africa, South Africa's economy contracted by seven percent in 2020. Consequently, this has arguably made agriculture the most affected sector, since slumped economic growth negatively impacts food security, trade, farm livelihoods, and greenhouse gas emissions. South Africa is also sensitive to the functioning of global food chains: trade restrictions, reinforced sanitary control systems, and border controls have influenced food availability and prices internationally. The main objective of this study is to evaluate the behaviour of agricultural commodity prices pre- and during COVID-19 to determine the impact of volatility drivers on these crops. Historical secondary data on spot prices for the top five major commodities, namely white maize, yellow maize, wheat, soybeans, and sunflower seeds, are analysed from 1 January 2017 to 1 September 2021. The timeframe was chosen to capture price fluctuations between the pre-COVID-19 period (1 January 2017 to 23 March 2020) and the during-COVID-19 period (24 March 2020 to 1 September 2021). The Generalised Autoregressive Conditional Heteroscedasticity (GARCH) model is used to measure the price volatility. The results reveal that the commodity market experienced volatility at different points in time, with extremely high volatility during the first quarter of 2020, a period of high uncertainty in which grain prices were very volatile. Despite the influence of COVID-19 on agricultural prices, demand for these commodities remained decent. The analysis also indicates that, over the COVID-19 period, prices were lower and less volatile; the prices and returns of these commodities were low during COVID-19 because of the government's actions in response to the spread of the virus, which collapsed market demand for food commodities.
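A minimal sketch of the volatility model used in the study is shown below, based on the Python arch package; the return definition and model orders are common defaults and may differ from the authors' exact specification.

```python
import pandas as pd
from arch import arch_model

def fit_garch(spot_prices: pd.Series):
    """Fit a GARCH(1,1) model to daily percentage returns of a SAFEX
    spot-price series and return the fitted result."""
    returns = 100.0 * spot_prices.pct_change().dropna()
    model = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1)
    result = model.fit(disp='off')
    # result.conditional_volatility traces how volatility evolves over time,
    # e.g. a spike around the first quarter of 2020.
    return result
```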

Keywords: commodities market, commodity prices, generalised autoregressive conditional heteroscedasticity (GARCH), price volatility, SAFEX

Procedia PDF Downloads 138
36 Numerical Modeling of Timber Structures under Varying Humidity Conditions

Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan

Abstract:

Timber structures may be exposed to a variety of environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The material response of wood to this load case is seen as increasing deformation of the timber structure. Variations in relative humidity cause moisture changes in timber and consequently shrinkage and swelling of the material. Moisture changes and loads acting together produce mechano-sorptive creep, while a sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain, even at low stress levels. Analyzing mechano-sorptive creep and its influence on the long-term behaviour of timber structures is therefore of high importance. Relatively many one-dimensional rheological models for the behaviour of wood can be found in the literature, while the number of models coupling the creep response in the different material directions is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. The variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapour diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the unique, non-uniform moisture content field within the timber member over time. The calculated moisture content in the timber members is used as an input to the mechanical analysis. In the mechanical analysis, the total strain is assumed to be the sum of the elastic strain, the viscoelastic strain, the mechano-sorptive strain, and the strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model, which has proved suitable for describing the creep of wood; the mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated to experimental results. The calibration is made against experiments carried out on wooden blocks subjected to a uniaxial compressive load in the tangential direction under varying humidity conditions. The moisture and mechanical models are implemented in a finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model makes it possible to observe the real temporal and spatial distribution of the moisture-induced strains and stresses in timber members. Since the model's suitability for predicting mechano-sorptive strains has been shown and the required material parameters have been obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections subjected to constant load and varying humidity, is possible.
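Schematically, the strain decomposition described above can be written as follows; the symbols are generic, not the authors' notation:

```latex
\boldsymbol{\varepsilon}
  \;=\; \boldsymbol{\varepsilon}_{\mathrm{e}}
  \;+\; \boldsymbol{\varepsilon}_{\mathrm{ve}}
  \;+\; \boldsymbol{\varepsilon}_{\mathrm{ms}}
  \;+\; \boldsymbol{\varepsilon}_{\mathrm{s}},
\qquad
\dot{\boldsymbol{\varepsilon}}_{\mathrm{ms}}
  \;=\; f\!\left(\boldsymbol{\sigma},\,\lvert\dot{u}\rvert\right),
```

where the elastic, viscoelastic, mechano-sorptive, and shrinkage/swelling contributions are summed, and the mechano-sorptive strain rate is driven jointly by the stress state and the magnitude of the moisture-content change.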

Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber

Procedia PDF Downloads 218
35 Investigating the Association between Escherichia Coli Infection and Breast Cancer Incidence: A Retrospective Analysis and Literature Review

Authors: Nadia Obaed, Lexi Frankel, Amalia Ardeljan, Denis Nigel, Anniki Witter, Omar Rashid

Abstract:

Breast cancer is the most common cancer among women, with a lifetime risk of one in eight for women in the United States. Although breast cancer is prevalent throughout the world, the uneven distribution in incidence and mortality rates is shaped by variations in population structure, environment, genetics, and known lifestyle risk factors. Furthermore, the bacterial profile in healthy and cancerous breast tissue differs, with a higher relative abundance of bacteria capable of causing DNA damage in breast cancer patients. Previous bacterial infections may change the composition of the microbiome and partially account for the environmental factors promoting breast cancer. One study found that higher amounts of Staphylococcus, Bacillus, and Enterobacteriaceae, of which Escherichia coli (E. coli) is a member, were present in breast tumor tissue. Based on E. coli's ability to damage DNA, it was hypothesized that previous E. coli infection is associated with an increased risk of breast cancer. The purpose of this study was therefore to evaluate the correlation between E. coli infection and the incidence of breast cancer. Holy Cross Health, Fort Lauderdale, provided access to a Health Insurance Portability and Accountability Act (HIPAA) compliant national database for the purpose of academic research. International Classification of Diseases 9th and 10th revision codes (ICD-9, ICD-10) were then used to conduct a retrospective analysis of data from January 2010 to December 2019. All breast cancer diagnoses and all patients infected versus not infected with E. coli who underwent typical E. coli treatment were investigated. The obtained data were matched for age, Charlson Comorbidity Index (CCI) score, and antibiotic treatment. Standard statistical methods were applied to determine statistical significance, and an odds ratio was used to estimate the relative risk. A total of 81,286 patients were identified and analyzed from the initial query, which was then reduced to 31,894 antibiotic-specific treated patients in the infected and control groups, respectively. The incidence of breast cancer was 2.51% (2,043 patients) in the E. coli group compared to 5.996% (4,874 patients) in the control group. In the treated groups, the incidence of breast cancer was 3.84% (1,223 patients) in the treated E. coli group compared to 6.38% (2,034 patients) in the treated control group. The decreased incidence of breast cancer in the E. coli and treated E. coli groups was statistically significant, with p-values of 2.2x10^-16 and 2.264x10^-16, respectively. The odds ratios in the E. coli and treated E. coli groups were 0.784 (95% CI 0.756-0.813) and 0.787 (95% CI 0.743-0.833), respectively. The current study thus shows a statistically significant decrease in breast cancer incidence in association with previous Escherichia coli infection. Researching the relationship between single bacterial species and cancer risk is important, as only up to 10% of breast cancer risk is attributable to genetics, while the contribution of environmental factors, including previous infections, potentially accounts for a majority of the preventable risk. Further evaluation is recommended to assess the potential of E. coli to decrease the risk of breast cancer and the mechanism by which it might do so.
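As a generic sketch of the kind of 2x2 comparison reported above, the snippet below computes an odds ratio with a Wald confidence interval and a Fisher exact p-value; the study's exact matching and testing procedure may differ, and the function is not claimed to reproduce the reported figures.

```python
import numpy as np
from scipy.stats import fisher_exact

def odds_ratio_ci(exposed_cases, exposed_total, control_cases, control_total, z=1.96):
    """Odds ratio, Wald 95% CI, and Fisher exact p-value for a retrospective
    2x2 comparison (exposed = prior E. coli infection, outcome = breast cancer)."""
    a = exposed_cases
    b = exposed_total - exposed_cases
    c = control_cases
    d = control_total - control_cases
    odds_ratio = (a / b) / (c / d)
    se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci_low = np.exp(np.log(odds_ratio) - z * se_log_or)
    ci_high = np.exp(np.log(odds_ratio) + z * se_log_or)
    _, p_value = fisher_exact([[a, b], [c, d]])
    return odds_ratio, (ci_low, ci_high), p_value
```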

Keywords: breast cancer, escherichia coli, incidence, infection, microbiome, risk

Procedia PDF Downloads 217
34 Establishing Ministerial Social Media Handles for Public Grievances Redressal and Reciprocation System

Authors: Ashish Kumar Dwivedi

Abstract:

Uttar Pradesh is the largest part of the Indian federal system, with a population of twenty-two million and huge cultural, economic, and religious diversity. The newly elected, eighteen-month-old state leadership of Uttar Pradesh has envisaged and initiated various proactive strides in public grievance redressal and inclusive development schemes for all sections of the population from the very day Hon'ble Chief Minister Shri Yogi Adityanath assumed office. These initiatives include departmental responses via social media handles such as Twitter, Facebook pages, and web interaction. In the same vein, every department of the state government has been guided in the correct usage of its verified social media handle, both separately and in coordination with other departments. These guidelines included creating new WhatsApp groups to connect technocrats and politicians on a common platform. The Minister for the Department of Infrastructure and Industrial Development, Shri Satish Mahana, a popular leader and intuitive statesman, has thousands of followers on social media, and his accounts receive almost three hundred individually mentioned notifications from various parts of Uttar Pradesh. These notifications primarily concern problems related to livelihood and grievances connected to the department. To address these communications, a body of five experts has been set up, which actively responds at various levels and increases bureaucratic engagement with marginalized sections of society. Against this background, this research was conducted to analyze and categorize these communications and to derive effective implementation of public policies via social media platforms. This responsiveness has brought a positive change in the population's perception of the government, which was missing earlier. The Department of Industrial Development is also keen to attract investors, as the state aims to become the first trillion-dollar economy of India, and the department organized two major successful events in the last year. These events were also framed on social media platforms to update the 2.5 million people of the state who actively use social media in many ways. To analyze this change scientifically, the study collected big data from October 2017 to September 2018 from the departmental social media handles on Twitter and Facebook and from emails. A statistical study of these data was conducted to analyze sentiments and expectations, the specific and common requirements of communities, and the nature of grievances and their effective resolution within government policies. A control sample was also taken from previous government activities to analyze the change. The statistical study used tools such as correlation analysis and principal component analysis. This research communication also discusses the modus operandi of grievance redressal, the proliferation of government policies, connections to their beneficiaries, and the quick-response procedure.
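Purely as an illustration of how grievance texts could be reduced to a few interpretable components before a correlation study, the sketch below vectorizes the messages and applies principal component analysis; the feature settings and component count are assumptions, not the study's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

def grievance_components(texts, n_components=5):
    """TF-IDF vectorization of grievance messages followed by PCA,
    returning component scores and the variance explained by each."""
    tfidf = TfidfVectorizer(max_features=2000, stop_words='english')
    X = tfidf.fit_transform(texts).toarray()   # dense matrix for plain PCA
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)
    return scores, pca.explained_variance_ratio_
```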

Keywords: correlation study, principal component analysis, bureaucratic engagements, social media

Procedia PDF Downloads 96
33 Problem-Based Learning for Hospitality Students. The Case of Madrid Luxury Hotels and the Recovery after the Covid Pandemic

Authors: Caridad Maylin-Aguilar, Beatriz Duarte-Monedero

Abstract:

Problem-based learning (PBL) is a useful tool for adult and practice-oriented audiences such as university students. As a consequence of the huge disruption caused by the COVID pandemic in the hospitality industry, hotels of all categories in Spain closed down from March 2020. Until that moment, the luxury segment had been blooming, with optimistic prospects for new openings, so Hospitality students were expecting a positive situation in terms of employment and career development. By the beginning of the 2020-21 academic year, these expectations had been seriously harmed: by October 2020, only 9 of the 32 hotels in the luxury segment were open, with an occupancy rate of 9%. Shortly after, evidence of a second wave, affecting especially Spain and the home countries of incoming visitors, bitterly smashed all forecasts. In response to the situation, a team of four professors and practitioners from four different subject areas developed a real case, inspired by one of these hotels, the 5-star Emperatriz by Barceló. Students in their 2nd year were provided with real information such as marketing plans, profit and loss and operational accounts, employee profiles, and employment costs. The challenge for them was to act as consultants, identifying potential courses of action for best, base, and worst cases. To do so, they were organized in teams and supported by 4th-year students. Each professor deployed the problem in their own subject; thus, research on customers' behaviour and feelings was necessary to review, as part of the marketing plan, whether the hotel's current offering was clear enough to guarantee and communicate a safe environment, as well as the ranking of other basic, supporting, and facilitating services. Continuous monitoring of competitors' activity was also necessary to understand the behaviour of the outlets that remained open. The actions designed after the diagnosis were ranked according to their impact and their feasibility in terms of time and resources; they also had to be actionable by the current staff of the hotel and its managers, and a vision of internal marketing was appreciated. After a process of refinement, seven teams presented their conclusions to the Emperatriz general manager and the rest of the professors. Four main ideas were chosen, and all the teams, irrespective of authorship, were asked to develop them to the state of a minimum viable product, with estimations of impacts and costs. As the process continues, students are now accompanying the hotel and its staff in the prudent reopening of facilities, almost one year after the closure. From a professor's point of view, the key learnings were: 1. When facing a real problem, a holistic view is needed, so the vision of subjects as silos collapses. 2. When educating new professionals, providing them with the resilience and resistance necessary to deal with a problem is always mandatory, but now seems more relevant than ever. 3. Collaborative work and contact with real practitioners in such an uncertain and changing environment is a challenge, but it is worthwhile when considering the learning result and its potential.

Keywords: problem-based learning, hospitality recovery, collaborative learning, resilience

Procedia PDF Downloads 161