599 Intended Use of Genetically Modified Organisms, Advantages and Disadvantages
Authors: Pakize Ozlem Kurt Polat
Abstract:
GMO (genetically modified organism) is the result of a laboratory process in which genes from the DNA of one species are extracted and artificially inserted into the genome of an unrelated plant or animal. The technology includes nucleic acid hybridization, recombinant DNA and RNA methods, PCR, cell culture, and gene cloning techniques. Studies are divided into three groups according to the trait transferred to the transgenic plant: up to 59% of transfers concern herbicide resistance, 28% concern resistance to insects and viruses, and 13% relate to quality characteristics. Not every crop is in commercial transgenic production; the main commercial crops are soybean, maize, canola, and cotton. The steadily growing interest in GMOs can be grouped as follows: use in the health area (organ transplantation, gene therapy, vaccines, and drugs); use in the industrial area (vitamins, monoclonal antibodies, vaccines, anti-cancer compounds, antioxidants, plastics, fibers, polyethers, human blood proteins, and carotenoids, as well as emulsifiers, sweeteners, enzymes, and food preservatives used as flavor enhancers or colorants); and use in agriculture (herbicide resistance; resistance to insects and to viral, bacterial, and fungal diseases; extended shelf life; improved quality; tolerance of extreme conditions such as drought, salinity, and frost; and improved nutritional value and quality). We explain all of these methods step by step in this research. GMOs have advantages and disadvantages, all of which we explain clearly in the full text; on this topic, researchers worldwide are divided into two camps: some hold that GMOs have many disadvantages and should not be used, while others hold the opposite view. To assess national GMO law, one must know the biosafety legislation of each country and union.
For these biosafety reasons, to minimize the problems caused by transgenic plants, 130 countries, including Turkey, signed the United Nations Biosafety Protocol on 24 May 2000. This protocol, also known as the Cartagena Biosafety Protocol, entered into force on 11 September 2003. By addressing the risks that GMOs pose to human health, biodiversity, and sustainability, the protocol covers the prevention, transit, handling, and use of all transboundary movements of GMOs. Under this protocol, one must know the US GMO regulations, the European Union GMO regulations, and the Turkish GMO regulations; these three regulatory frameworks have different applications and rules. The world population is increasing day by day while agricultural land is shrinking; therefore, to feed humans and animals, we must improve agricultural yield and quality. Scientists are trying to solve this problem, and one approach is molecular biotechnology, which includes the methods of GMO production. Before deciding to support or oppose GMOs, one should know the GMO protocols and their effects.
Keywords: biotechnology, GMO (genetically modified organism), molecular marker
Procedia PDF Downloads 233
598 Social Economic Factors Associated with the Nutritional Status of Children in Western Uganda
Authors: Baguma Daniel Kajura
Abstract:
The study explores socio-economic, health-related, and individual factors that influence the breastfeeding habits of mothers and their effect on the nutritional status of their infants in the Rwenzori region of Western Uganda. A cross-sectional research design was adopted, using self-administered questionnaires, interview guides, and focused group discussion guides to assess the extent to which socio-demographic factors associated with breastfeeding practices influence child malnutrition. Using this design, data were collected from 276 of the 318 selected mother-infant pairs over a period of ten days. The sample size was estimated with Kish Leslie's formula for cross-sectional studies, N = Zα² P(1 − P)/δ², where N is the estimated number of mother-infant pairs; P = 29.3% is the assumed true population prevalence of malnutrition among mother-infant pairs; 1 − P = 70.7% is the probability of a pair not having malnutrition; Zα = 1.96 is the standard normal deviate at the 95% confidence interval; and δ = 5% is the absolute error between the estimated and true population prevalence of malnutrition. The calculated sample size is N = 1.96² × (0.293 × 0.707)/0.05² = 318 mother-infant pairs. Demographic and socio-economic data for all mothers were entered into Microsoft Excel and then exported to STATA 14 (StataCorp, 2015). Anthropometric measurements were taken for all children by the researcher and trained assistants, who physically weighed the children. Immunization cards were used to ascertain each child's age. Bivariate logistic regression analysis was used to assess the relationship between socio-demographic factors associated with breastfeeding practices and child malnutrition.
Multivariable regression analysis was used to determine whether there are any true relationships between the socio-demographic factors associated with breastfeeding practices, as independent variables, and child stunting and underweight, as dependent variables. Descriptive statistics on the mothers' background characteristics were generated and presented in frequency distribution tables. Frequencies and means were computed and presented in tables; we then determined the distribution of stunting and underweight among infants by socio-economic and demographic factors. Findings reveal that children of mothers who used milk substitutes besides breastfeeding are over two times more likely to be stunted than those whose mothers exclusively breastfed them. Feeding children milk substitutes instead of breastmilk predisposes them to both stunting and underweight. Children of mothers between 18 and 34 years of age are less likely to be underweight, as were those breastfed over ten times a day. The study further reveals that 55% of the children were underweight and 49% were stunted. Of the underweight children, equal numbers were mildly and moderately underweight (58/151, or 38%, each), and 23% (35/151) were severely underweight. Empowering community outreach programs by increasing knowledge of, and access to, services for the integrated management of child malnutrition is crucial to curbing child malnutrition in rural areas.
Keywords: infant and young child feeding, breastfeeding, child malnutrition, maternal health
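As a check on the arithmetic, the Kish Leslie sample-size calculation described in the abstract can be reproduced in a few lines; this is an illustrative sketch, not code from the study.

```python
def kish_leslie_n(p: float, delta: float, z: float = 1.96) -> int:
    """Kish Leslie sample size for a cross-sectional study:
    N = z^2 * p * (1 - p) / delta^2, rounded to the nearest whole pair."""
    return round(z ** 2 * p * (1 - p) / delta ** 2)

# Values from the study: prevalence 29.3%, absolute error 5%, 95% CI (z = 1.96)
n = kish_leslie_n(p=0.293, delta=0.05)
print(n)  # 318 mother-infant pairs, matching the study's estimate
```

With the conventional worst-case prevalence of 50%, the same formula gives 384 pairs, which shows how sensitive the estimate is to the assumed prevalence.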
Procedia PDF Downloads 21
597 Multilingual Students Acting as Language Brokers in Italy: Their Points of View and Feelings towards This Activity
Authors: Federica Ceccoli
Abstract:
Italy is undergoing one of its largest migratory waves, and Italian schools are reporting the highest numbers of multilingual students coming from immigrant families and speaking minority languages. For these pupils, who have not yet fully acquired their mother tongue, learning a second language may represent a burden on their linguistic development and may have repercussions on their school performance and relational skills. These are some of the reasons why they have turned out to have the worst grades and the highest school drop-out rates. However, despite these negative outcomes, it has been demonstrated that multilingual immigrant students frequently act as translators or language brokers for peers or family members who do not speak Italian fluently. This activity has been defined as Child Language Brokering (hereinafter CLB), and it has become a common practice, especially in minority communities, as immigrants' children often learn the host language much more quickly than their parents, thus contributing to family life by acting as language and cultural mediators. This presentation analyses data collected in a study carried out during the 2014-2015 school year in the province of Ravenna, in the Northern Italian region of Emilia-Romagna, among 126 immigrant students attending junior high schools. The purpose of the study was to analyse, by means of a structured questionnaire, whether multilingualism coincided with language brokering experience, and to examine the perspectives of those students who reported having acted as translators, using their linguistic knowledge to help people understand each other. The questionnaire consisted of 34 items roughly divided into two sections. The first section required multilingual students to provide personal details such as their date and place of birth, as well as details about their families (number of siblings, parents' jobs).
In the second section, they were asked about the languages spoken in their families as well as their language brokering experience. The in-depth questionnaire sought to investigate a wide variety of brokering issues, such as the frequency and purpose of the activity; where, when, and which documents young language brokers translate; and how they feel about this practice. The results demonstrate that CLB is a very common practice among immigrants' children living in Ravenna, and almost all students reported positive feelings when asked about their brokering experience with their families and also at school. In line with previous studies, responses to the questionnaire item regarding the people they brokered for revealed that the category ranking first is parents. Similarly, language brokering activities tend to occur most often at home, and the documents translated most often (either orally or in writing) are notes from teachers. Such positive feelings towards this activity, together with the evidence that it occurs very often in schools, have laid the foundation for further projects on how this common practice may be valued and used to strengthen the linguistic skills of these multilingual immigrant students and thus their school performance.
Keywords: immigration, language brokering, multilingualism, students' points of view
Procedia PDF Downloads 179
596 Characteristics of the Mortars Obtained by Radioactive Recycled Sand
Authors: Claudiu Mazilu, Ion Robu, Radu Deju
Abstract:
At the end of 2011, there were 124 power reactors shut down worldwide: 16 fully decommissioned, 50 in the decommissioning process, 49 in "safe enclosure" mode, and 3 "entombed"; for the remaining 6 reactors, the decommissioning strategy had not yet been specified. The radioactive concrete waste that will be generated from the dismantled structures of the VVR-S nuclear research reactor at Magurele (e.g., the biological shield of the reactor core and the hot cells) amounts to an estimated 70 tons. Until now, solid low-level radioactive waste (LLW) has been pre-placed in containers and cemented with mortar made from cement and natural fine aggregates, providing a container fill ratio of approximately 50 vol.% for concrete. This paper presents an innovative technology in which the radioactive concrete is crushed, and a mortar made from recycled radioactive sand, cement, water, and a superplasticizer agent is poured over the radioactive rubble pre-placed in the container for cementation. The result is a radioactive waste package in which the degree of filling with radioactive waste increases substantially. The tests were carried out on non-radioactive material because radioactive concrete was not available in time. Waste concrete with a maximum size of 350 mm was crushed in the first stage with a Liebherr jaw crusher adjusted to a nominal size of 50 mm. Crushed concrete smaller than 50 mm was sieved to obtain the useful fraction for pre-placement, 10 to 50 mm.
The screening residue > 50 mm from the primary crushing was crushed in a second stage, with crushers of different working principles, to a size < 2.5 mm in order to produce recycled fine aggregate (sand) for the filler mortar that fulfills the proposed technical specifications: a jaw crusher (Retsch model BB 100) and a hammer crusher (Buffalo Shuttle model WA-12-H). We present a series of characteristics of the recycled concrete aggregates by predefined class (granulosity, granule shape, water absorption, behavior in the Los Angeles test, content of attached mortar, etc.), mostly in comparison with the characteristics of natural aggregates. Various mortar recipes were tested in order to identify those that meet the proposed specification (flow rate: 16-50 s; no bleeding; minimum compressive strength of 30 N/mm² after 28 days; proportion of recycled sand in the mortar: minimum 900 kg/m³) and allow the highest fill ratio for the mortar. To optimize the mortars, the following compositional factors were varied: aggregate nature, water/cement (W/C) ratio, sand/cement (S/C) ratio, and the nature and proportion of the additive. To confirm the results obtained on a small scale, an attempt was made to pour the mortar into a container that simulates the final storage drums. The measured mortar fill ratio (98.9%) was compared with the laboratory results and the targets set out in the proposed specification. Although the fill ratio obtained on the mock-up is 0.8 vol.% lower than that obtained in the laboratory tests (99.7%), the result meets the specification criteria.
Keywords: characteristics, radioactive recycled concrete aggregate, mortars, fill ratio
Procedia PDF Downloads 194
595 Long-Term Conservation Tillage Impact on Soil Properties and Crop Productivity
Authors: Danute Karcauskiene, Dalia Ambrazaitiene, Regina Skuodiene, Monika Vilkiene, Regina Repsiene, Ieva Jokubauskaite
Abstract:
The main ambition of modern agriculture is to obtain an economically effective yield while securing the soil's ecological sustainability. According to their effect on the main soil quality indexes, tillage systems may be separated into two types: conventional and conservation tillage. The goal of this study was to determine the impact of conservation and conventional primary soil tillage methods, and of soil fertility improvement measures, on soil properties and crop productivity. Methods: The soil of the experimental site is a Dystric Glossic Retisol (WRB 2014) with a sandy loam texture. The trial was established in 2003 in the experimental crop rotation field of the Vėžaičiai Branch of the Lithuanian Research Centre for Agriculture and Forestry. Trial factors and treatments: factor A, primary soil tillage (in autumn): deep ploughing (20-25 cm), shallow ploughing (10-12 cm), shallow ploughless tillage (8-10 cm); factor B, soil fertility improvement measures: plant residues, plant residues + straw, green manure (1st cut) + straw, farmyard manure 40 t ha⁻¹ + straw. The four-course crop rotation consisted of red clover, winter wheat, spring rape, and spring barley with undersown. Results: Tillage had no statistically significant effect on the topsoil (0-10 cm) pHKCl level, which was 5.5-5.7. Throughout the experiment, the highest soil pHKCl level (5.65) was under shallow ploughless tillage. The organic fertilizers, particularly the grass biomass and farmyard manure, tended to increase soil pHKCl. The content of plant-available phosphorus and potassium increased significantly under shallow ploughing compared with the other tillage systems. Farmyard manure increased those elements throughout the arable layer. The dissolved organic carbon concentration was significantly higher in the 0-10 cm soil layer under shallow ploughless tillage than under deep ploughing.
After the incorporation of clover biomass and farmyard manure, the concentration of dissolved organic carbon increased in the top soil layer. Throughout the experiment, the largest amount of water-stable aggregates was determined in the soil under shallow ploughless tillage; it was 12% higher than under deep ploughing. Throughout the experiment, soil moisture was higher under shallow ploughing and shallow ploughless tillage (by 9-27%) than under deep ploughing. The lowest CO2 emission was determined in the deep-ploughed soil; the highest rate of CO2 emission was under shallow ploughless tillage. The addition of organic fertilisers tended to increase CO2 emission, but there was no statistically significant difference between the different types of organic fertilisers. The crop yield was larger in the deep-ploughed soil than under shallow and shallow ploughless tillage.
Keywords: reduced tillage, soil structure, soil pH, biological activity, crop productivity
Procedia PDF Downloads 267
594 TRAC: A Software Based New Track Circuit for Traffic Regulation
Authors: Jérôme de Reffye, Marc Antoni
Abstract:
Following the development of the ERTMS system, we think it is interesting to develop another software-based track circuit system which would fit secondary railway lines, with an easy implementation and a low sensitivity to rail-wheel impedance variations. We call this track circuit "Track Railway by Automatic Circuits." To be internationally implementable, the system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French "Joints Isolants Collés" that isolate track sections from one another, and it is equally independent of the axle counters used in Germany ("Counting Axles," in French "compteur d'essieux"). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies assigned to the track sections is a set of orthogonal functions in a Hilbert space. Thus, the failure probability of track-section separation is precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails. This is why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and with SIL 4. SIL 4 is thus reachable by adjusting the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) Train localization in space is precisely defined by a calibration system. The operation bypasses the GSM-R radio system of ERTMS; moreover, the track circuit is naturally protected against radio-type jammers. After the calibration operation, the track circuit is autonomous.
ii) A mathematical topology adapted to train localization, following the train through linear time filtering of the received signal. Track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter. Sensitivity analyses of the rail-ground and rail-wheel impedances gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit is already at Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track-section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling
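To illustrate the principle of frequency-orthogonal track sections (this is not the authors' actual algorithm), the sketch below assigns each section a sinusoid whose frequency is a multiple of the inverse window length, so the reference signals are orthogonal over the analysis window, and identifies the occupied section by correlation even in the presence of additive noise. The frequencies, sampling rate, and noise level are assumed for the example.

```python
import math
import random

FS = 8000.0          # sampling rate (Hz), assumed for illustration
N = 800              # analysis window: 0.1 s, so multiples of 10 Hz are orthogonal
SECTION_FREQS = {"S1": 100.0, "S2": 200.0, "S3": 300.0}  # illustrative section carriers

def correlation_magnitude(signal, freq):
    """Project the received signal onto the cos/sin pair at `freq`
    (a matched filter for that section's carrier) and return the magnitude."""
    c = sum(s * math.cos(2 * math.pi * freq * k / FS) for k, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * freq * k / FS) for k, s in enumerate(signal))
    return math.hypot(c, q) / len(signal)

def detect_section(signal):
    """Return the section whose reference frequency correlates best."""
    scores = {sec: correlation_magnitude(signal, f) for sec, f in SECTION_FREQS.items()}
    return max(scores, key=scores.get)

# Simulate a train occupying section S2: its carrier plus additive Gaussian noise.
random.seed(0)
received = [math.cos(2 * math.pi * SECTION_FREQS["S2"] * k / FS)
            + random.gauss(0.0, 0.3) for k in range(N)]
print(detect_section(received))  # S2
```

Because the carriers are orthogonal over the window, the cross-correlations between sections vanish in the noise-free case, which is what lets the separation failure probability be expressed purely as a function of SNR.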
Procedia PDF Downloads 332
593 Persuading ICT Consumers to Disconnect from Work: An Experimental Study on the Influence of Message Frame, Regulatory Focus, Ad Believability and Attitude toward the Ad on Message Effectiveness
Authors: Katharina Ninaus, Ralf Terlutter, Sandra Diehl
Abstract:
Information and communication technologies (ICT) have become pervasive in all areas of modern life, both in work and leisure. Technological developments and particularly the ubiquity of smartphones have made it possible for ICT consumers to be constantly connected to work, fostering an always-on mentality and increasing the pressure to be accessible at all times. However, performing work tasks outside of working hours using ICT results in a lack of mental detachment and recovery from work. It is, therefore, necessary to develop effective behavioral interventions to increase risk awareness of a constant connection to the workplace in the employed population. Drawing on regulatory focus theory, this study aims to investigate the persuasiveness of tailoring messages to individuals’ chronic regulatory focus in order to encourage ICT consumers to set boundaries by defining fixed times for professional accessibility outside of working hours in order to contribute to the well-being of ICT consumers with high ICT involvement in their work life. The experimental study examines the interaction effect between consumers’ chronic regulatory focus (i.e. promotion focus versus prevention focus) and positive or negative message framing (i.e. gain frame versus loss frame) on consumers’ intention to perform the advocated behavior. Based on the assumption that congruent messages create regulatory fit and increase message effectiveness, it is hypothesized that behavioral intention will be higher in the condition of regulatory fit compared to regulatory non-fit. It is further hypothesized that ad believability and attitude toward the ad will mediate the effect of regulatory fit on behavioral intention given that ad believability and ad attitude both determine consumer behavioral responses. 
Results confirm that the interaction between regulatory focus and message frame emerged as a predictor of behavioral intention, such that consumers' intentions to set boundaries by defining fixed times for professional accessibility outside of working hours increased as congruency with their regulatory focus increased. The loss-framed ad was more effective for consumers with a predominant prevention focus, while the gain-framed ad was more effective for consumers with a predominant promotion focus. Ad believability and attitude toward the ad both emerged as predictors of behavioral intention. Mediation analysis revealed that the direct effect of the interaction between regulatory focus and message frame on behavioral intention was no longer significant when ad believability and ad attitude were included as mediators, indicating full mediation. However, while the indirect effect through ad believability was significant, the indirect effect through attitude toward the ad was not. Hence, regulatory fit increased ad believability, which in turn increased behavioral intention. Ad believability appears to have a superior effect, indicating that behavioral intention depends not on attitude toward the ad but on whether the ad is perceived as believable. The study shows that the principle of regulatory fit holds true in the context of ICT consumption and responds to calls for more research on mediators of health message framing effects.
Keywords: always-on mentality, information and communication technologies (ICT) consumption, message framing, regulatory focus
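The congruency logic tested above (gain frames matching a promotion focus, loss frames matching a prevention focus) can be sketched as a simple coding step, for instance when preparing the fit/non-fit indicator for a regression; the function and labels are illustrative, not taken from the study's materials.

```python
def regulatory_fit(focus: str, frame: str) -> bool:
    """True when the message frame is congruent with the consumer's
    chronic regulatory focus (the regulatory-fit condition)."""
    return (focus, frame) in {("promotion", "gain"), ("prevention", "loss")}

# The 2 x 2 design: two cells produce fit, two produce non-fit.
for focus in ("promotion", "prevention"):
    for frame in ("gain", "loss"):
        print(focus, frame, "fit" if regulatory_fit(focus, frame) else "non-fit")
```

Coding each participant's cell this way yields the binary fit indicator whose effect on behavioral intention the mediation analysis then decomposes through ad believability and ad attitude.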
Procedia PDF Downloads 210
592 Scaling up Small and Sick Newborn Care Through the Establishment of the First Human Milk Bank in Nepal
Authors: Prajwal Paudel, Shreeprasad Adhikari, Shailendra Bir Karmacharya, Kalpana Upadhyaya
Abstract:
Background: Human milk banks have been recommended by the World Health Organization (WHO) for newborn and child nourishment, providing optimum nutrition as an alternative when direct breastfeeding is inaccessible. Vulnerable babies, mainly preterm, low-birth-weight, and sick newborns, are at greater risk of mortality and can benefit from the safe use of donated human milk through milk banks. In this study, we aimed to shed light on the process of setting up the nation's first milk bank and its vital role in the nutrition and care of small and sick newborns. Methods: The study was conducted in Paropakar Maternity and Women's Hospital, where the first human milk bank (HMB) was established. The establishment involved a stepwise process: a needs assessment meeting; formation of the HMB committee; a learning visit to an HMB in India; study of the strengths and weaknesses of promoting breastfeeding and of HMB system integration; procurement, installation, and setting up of the infrastructure; development of technical competency; and launch of the HMB. After the initiation of HMB services, information on the recruited donor mothers and on the volume of milk pasteurized and consumed by the recipient babies was recorded. Descriptive statistics with frequencies and percentages were used to describe the utilization of HMB services. Results: During the study period, a total of 506113 ml of milk was collected, while 49930 ml of milk was pasteurized. Of the pasteurized milk, 381248 ml was dispensed. The milk was received from a total of 883 donor mothers after proper routine screening tests. The total number of babies who received donated human milk (DHM) was 912, with various neonatal conditions. Among the babies who received DHM, 527 (57.7%) were born via caesarean section (CS) and 385 (42.21%) were delivered normally.
In the birth-weight categories, 9 (1%) of the babies weighed less than 1000 grams, 75 (8.2%) less than 1500 grams, and 405 (44.4%) between 1500 and 2500 grams, whereas 423 (46.4%) of the babies who received DHM were of normal weight. Among the sick newborns, perinatal asphyxia accounted for 166 (18.2%), preterm with other complications 372 (40.7%), preterm 23 (2.02%), respiratory distress 140 (15.35%), neonatal jaundice 150 (16.44%), sepsis 94 (10.30%), meconium aspiration syndrome 9 (1%), seizure disorder 28 (3.07%), congenital anomalies 13 (1.42%), and others 33 (3.61%). The neonatal mortality rate dropped from 7.5/1000 live births in the previous year to 6.2/1000 live births in the first year after establishment. Conclusion: The establishment of the first HMB in Nepal involved a comprehensive approach to integrating a new system with existing newborn care in the provision of safe DHM. Premature babies with complications, babies born via CS, and babies with perinatal asphyxia or sepsis consumed the greater proportion of DHM. Rigorous research is warranted to assess the impact of DHM on small and sick newborns who would otherwise be fed formula milk.
Keywords: human milk bank, sick newborn, mortality, neonatal nutrition
Procedia PDF Downloads 11
591 Soil Matric Potential Based Irrigation in Rice: A Solution to Water Scarcity
Authors: S. N. C. M. Dias, Niels Schuetze, Franz Lennartz
Abstract:
The focus of irrigated agriculture will shift from maximizing crop production per unit area towards maximizing crop production per unit amount of water used (water productivity). At the same time, inadequate water supply, or deficit irrigation, will be the only way to cope with water scarcity in the near future. Soil matric potential based irrigation plays an important role in such deficit-irrigated agriculture for any crop, including rice. Rice, the staple food for more than half of the world population, grows mainly under flooded conditions and requires more water than other upland cereals. A major share of this water is used in land preparation and is lost at the field level through evaporation, deep percolation, and seepage. A field experiment was conducted at the rice research and development institute of Sri Lanka in Kurunegala district to estimate the water productivity of rice under deficit irrigation. This paper presents the feasibility of improving current irrigation management in rice cultivation under water-scarce conditions. The experiment was laid out in a randomized complete block design with four irrigation treatments and three replicates. The irrigation treatments were based on soil matric potential threshold values: treatment W0 was maintained between 60-80 mbar, W1 between 80-100 mbar, and the two drier treatments, W2 and W3, at 100-120 mbar and 120-140 mbar, respectively. A sprinkler system was used to irrigate each plot individually upon reaching the maximum threshold value of the respective treatment. The treatments were imposed two weeks after seed establishment and continued until two weeks before physiological maturity. Fertilizer applications, weed management, and other management practices were carried out as per local recommendations.
Weekly plant growth measurements, daily climate parameters, soil parameters, soil tension values, and water content were measured throughout the growing period. The highest plant growth and grain yield (5.61 t/ha) were observed in treatment W2, followed by W0, W1, and W3, compared with the reference yield (5.23 t/ha) of flooded rice grown in the study area. Water productivity was highest in W3. Considering irrigation water savings, grain yield, and water productivity together, W2 showed the best performance: rice grown under unsaturated conditions (W2) performed better than rice under continuously saturated conditions (W0). In conclusion, soil matric potential based irrigation is a promising irrigation management practice in rice, achieving substantial irrigation water savings. The strategy can be applied to a wide range of locations under different climates and soils. In future studies, higher soil matric potential thresholds can be tested to evaluate the maximum possible values for rice, seeking greater water savings at minimum yield loss.
Keywords: irrigation, matric potential, rice, water scarcity
Procedia PDF Downloads 198
590 Emphasizing Sumak Kawsay in Peace Ethics
Authors: Lisa Tragbar
Abstract:
Since the Rio Declaration, the agreement resulting from the 1992 Earth Summit, the UN member states have acknowledged that peace and environmental protection are deeply linked. Contemporary peace research since the early 2000s has also made clear that a lack of natural resources increases conflicts, including potential war conflicts (the general environmental conflict thesis). I argue that peace ethics needs to reconsider the role of the environment, from conflict prevention to peacebuilding. Sumak kawsay is a concept that offers a non-anthropocentric perspective on the subject. Several contemporary peace ethicists do not take environmental peace sufficiently into account. 1. The peace theorist Johan Galtung famously argues that positive peace depends mostly on social, economic, and political factors, as institutional structures establish peace. Galtung's approach to peace is relational, yet only between human interactors. 2. Michael Fox claims in his anti-war argument that nonhuman entities must be considered in conflicts: because of the interrelation of species, humans cannot decide the fate of other species. 3. Although Mark Woods considers himself a peace ecologist, following Reichberg and Syse, and argues from a duty-based perspective towards nature, he focuses mostly on the protection of the environment during war conflicts. I adopt a non-anthropocentric view to argue that the environment is an entity of human concern in constructing peace. Based on the premises that a lack of natural resources creates tensions that play a significant part in international conflicts, and that these conflicts are potential war conflicts, I argue that a non-anthropocentric account of peace ethics is an indispensable perspective for the recovery of these resources and therefore for the reduction of war conflicts.
Sumak kawsay is an approach that contributes to a peaceful environment and can play a crucial role in international peacekeeping operations. To emphasize sumak kawsay in peace ethics, it is necessary to explain what this principle includes and how it renews contemporary peace ethics. The indigenous philosophy of life of the Andean Quechua in Ecuador, and its varieties in other countries of the Global South, includes a holistic real-world vision containing concepts such as the de-hierarchization of humans and nature as well as the principle of reciprocity towards nature. Sumak kawsay represents the idea of the intrinsic value of nature and an egalitarian way of life and interconnectedness between human and nonhuman entities, which has been widely neglected in traditional ethics of war and peace. If sumak kawsay is transferred to peacekeeping practices, peacekeepers have restorative duties not only towards humans but also towards nature. Resource conservation and environmental protection are the first steps towards a positive peace. By recognising that healthy natural resources contribute to peacebuilding, by restoring balance through compensatory justice practices such as recovery, by fostering dialogue between peacekeeping forces, and by entitling ecosystems with rights, conflicts over natural resources and the environment become less likely. This holistic approach pays nature sufficient attention and can contribute to a positive peace.
Keywords: environment, natural resources, peace, Sumak Kawsay
Procedia PDF Downloads 77
589 Integrated Care on Chronic Diseases in Asia-Pacific Countries
Authors: Chang Liu, Hanwen Zhang, Vikash Sharma, Don Eliseo Lucerno-Prisno III, Emmanuel Yujuico, Maulik Chokshi, Prashanthi Krishnakumar, Bach Xuan Tran, Giang Thu Vu, Kamilla Anna Pinter, Shenglan Tang
Abstract:
Background and Aims: Globally, many health systems focus on hospital-based healthcare models targeting acute care and disease treatment, which are not effective in addressing the challenges of ageing populations, chronic conditions, multi-morbidities, and increasingly unhealthy lifestyles. Recently, integrated care programs for chronic diseases have been developed, piloted, and implemented to meet such challenges. However, integrated care programs in the Asia-Pacific region vary in their level of integration, from linkage to coordination to full integration. This study aims to identify and analyze existing cases of integrated care in the Asia-Pacific region and to identify their facilitators and barriers in order to improve existing cases and inform future ones. Methods: This is a comparative study combining desk-based research and key informant interviews. The selected countries represent a good mix of lower-middle-income countries (the Philippines, India, Vietnam, and Fiji), an upper-middle-income country (China), and a high-income country (Singapore) in the Asia-Pacific region. Existing integrated care programs were identified through a scoping review. Trigger, history, general design, beneficiaries, and objectors were summarized, along with barriers and facilitators of integrated care based on key informant interviews. Representative case(s) in each country were selected and comprehensively analyzed through deep-dive case studies. Results: A total of 87 existing integrated care programs on chronic diseases were found across all countries, with 44 in China, 21 in Singapore, 12 in India, 5 in Vietnam, 4 in the Philippines, and 1 in Fiji. Nine representative cases of integrated care were selected for in-depth description and analysis: 2 each in China, the Philippines, and Vietnam, and 1 each in Singapore, India, and Fiji.
Population aging and the rising chronic disease burden were identified as key drivers in almost all six countries. Among the six countries, Singapore has the longest history of integrated care, followed by Fiji, the Philippines, and China, while India and Vietnam have shorter histories. Incentives, technologies, education, and performance evaluation will be crucial for developing strategies to implement future programs and improve existing ones. Conclusion: Integrated care is important for addressing the challenges surrounding the delivery of long-term care. To date, there is an increasing trend of integrated care programs on chronic diseases in the Asia-Pacific region, and all six countries in our study have set integrated care as a direction for their health system transformation.
Keywords: integrated healthcare, integrated care delivery, chronic diseases, Asia-Pacific region
Procedia PDF Downloads 135
588 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment
Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen
Abstract:
Following the advent of the Industrial Revolution, there has been a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells that have been developed within petroleum reservoirs, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons via fissures and fractures within the cement. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly for well sites that have the potential for future gas storage or CO2 injection. The Danish oil and gas industry has promising potential as a prospective candidate for future carbon dioxide (CO2) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component is chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant. The hydrogel is designed to be injected into a low-permeability reservoir, where it afterward undergoes a transformation into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, the synthesis of polyacrylamide was carried out by radical polymerization in the reaction flask.
Subsequently, through the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its subsequent reaction with the crosslinker and enabling the formation of a hydrogel in the next stage. The organic crosslinker glutaraldehyde was employed to facilitate gel formation, which occurred when the polymeric solution was heated within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time is contingent upon the starting concentration and ranges from 4 to 20 hours, hence allowing for manipulation to accommodate diverse injection strategies. Moreover, the findings indicate that the gel can be generated in acidic and highly saline environments. This property ensures the suitability of this substance for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution exhibits the characteristics of a Herschel-Bulkley fluid with somewhat elevated yield stress prior to solidification.
Keywords: polyacrylamide, Hofmann rearrangement, rheology, gel time
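The Herschel-Bulkley model named above relates shear stress to shear rate through a yield stress, a consistency index, and a flow index. A minimal sketch of the constitutive law; the parameter values used in the example are illustrative assumptions, not values fitted to the authors' polymeric solution:

```python
def herschel_bulkley(shear_rate: float, tau_y: float, k: float, n: float) -> float:
    """Shear stress (Pa) of a Herschel-Bulkley fluid:
        tau = tau_y + K * gamma_dot**n   (for gamma_dot > 0)

    tau_y : yield stress (Pa); nonzero before solidification, as the
            abstract reports for the polymeric solution
    k     : consistency index (Pa.s^n)
    n     : flow behaviour index (n < 1 means shear-thinning)
    """
    if shear_rate <= 0:
        return tau_y  # below the yield stress the material does not flow
    return tau_y + k * shear_rate ** n

# Illustrative (hypothetical) parameters: tau_y = 2 Pa, K = 0.5 Pa.s^n, n = 0.6
tau = herschel_bulkley(shear_rate=10.0, tau_y=2.0, k=0.5, n=0.6)
```

Setting tau_y = 0 recovers a power-law fluid, and n = 1 with tau_y > 0 recovers a Bingham plastic, which is why the model is a common fit for gelling polymer solutions.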
Procedia PDF Downloads 77
587 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present
Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Philippe Blanchard, Simon Richir
Abstract:
Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs constructed on the neural transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, there may lurk behind OpenAI's seemingly endless responses an inventive model yet to be uncovered. Some unforeseen reasoning may be emerging from the interconnection of neural networks here. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We will revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means to solve societal problems. It is crucial to note that traditional problem-solving methods often fall short in discovering innovative solutions. The design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes the TRIZ 40 principles.
Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, that ChatGPT uses TRIZ, we will follow a stringent protocol, which we detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Problem-solving remains the main focus of our endeavours.
Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving
Procedia PDF Downloads 74
586 Digital Subsistence of Cultural Heritage: Digital Media as a New Dimension of Cultural Ecology
Authors: Dan Luo
Abstract:
As climate change can exacerbate the exposure of cultural heritage to climatic stressors, scholars pin their hopes on digital technology to help sites avoid surprises. The virtual museum has been regarded as a highly effective technology that enables people to gain an enjoyable visiting experience and immersive information about cultural heritage. The technology clearly reproduces the images of tangible cultural heritage, and the aesthetic experience created by new media helps consumers escape from a real environment full of uncertainty. A new cultural anchor has appeared outside the cultural sites. This article synthesizes the international literature on the virtual museum by developing CiteSpace diagrams focused on tangible cultural heritage, and an alarming situation has emerged in the process of responding to climate change: (1) Digital collections are distinct cultural assets for the public. (2) The media ecology changes people's ways of thinking about and encountering cultural heritage. (3) Cultural heritage may live forever in the digital world. This article provides information on a typical practice of managing cultural heritage in a changing climate: the Dunhuang Mogao Grottoes in the far northwest of China, a World Cultural Heritage site famous for its remarkable and sumptuous murals. This monument is a typical synthesis of art containing 735 Buddhist cave temples and was listed by UNESCO as one of the World Cultural Heritage sites. The caves contain some extraordinary examples of Buddhist art spanning a period of 1,000 years: the architectural forms, the sculptures in the caves, and the murals on the walls all together constitute a wonderful aesthetic experience. Unfortunately, this magnificent treasure cave has been threatened by increasingly frequent dust storms and precipitation.
The Dunhuang Academy has been using digital technology since the last century to preserve these immovable cultural heritages, especially the murals in the caves. Since then, Dunhuang culture has become a new media culture, introduced to world audiences through exhibitions, VR, video, etc. The paper adopts a qualitative research method, using NVivo software to code the collected material to answer this question. The author carried out fieldwork in Dunhuang City, including participation in 10 Dunhuang-themed exhibitions and 20 online salons. In addition, 308 visitors (aged 6-75 years) who are fans of the art and have experienced Dunhuang culture online were interviewed. These interviewees have been exposed to Dunhuang culture through different media, and they are acutely aware of the threat to this cultural heritage. The conclusion is that the unique aura of cultural heritage is always emphasized, and that digital media breeds digital twins of cultural heritage. In addition, digital media makes it possible for cultural heritage to reintegrate into the daily life of the masses. Visitors gain the opportunity to imitate the mural figures through enlarged or emphasized images, but they also lose the perspective of understanding the whole cultural life. New media construct a new aesthetics of life apart from the authorized heritage discourse.
Keywords: cultural ecology, digital twins, life aesthetics, media
Procedia PDF Downloads 81
585 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete
Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml
Abstract:
Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints acting on reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence, it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs, or beams show a crack spacing oriented to the transverse reinforcement bars or to the stirrups. In most finite element analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. Therefore, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) crack initiation in dependence of the diameter and spacing of the transverse reinforcement.
The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g., the Covariance Matrix Decomposition method. For all computations, a plastic constitutive law with softening was used to model crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions were validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation is also given for the parameters of the random field for realistic modelling of the uncertainty of the tensile strength. The aim of this research was to demonstrate a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be seen in finite element analysis.
Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic
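The Covariance Matrix Decomposition method named above generates a correlated Gaussian field by factorizing the covariance matrix (for example with a Cholesky factor) and multiplying it by a vector of uncorrelated standard-normal samples; a lognormal field follows by exponentiation. A minimal 1-D sketch under an assumed exponential correlation function with illustrative parameters (the study itself used 2-D fields and its own calibrated values):

```python
import numpy as np

def correlated_field(x, mean, std, corr_len, lognormal=False, rng=None):
    """1-D random field by Covariance Matrix Decomposition.

    Builds C[i, j] = std^2 * exp(-|x_i - x_j| / corr_len), takes its
    Cholesky factor L, and returns mean + L @ z with z ~ N(0, I).
    With lognormal=True the Gaussian field is exponentiated, so mean and
    std are interpreted as moments of the underlying normal field.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    cov = std**2 * np.exp(-dist / corr_len)
    cov += 1e-10 * np.eye(len(x))  # jitter for numerical positive-definiteness
    L = np.linalg.cholesky(cov)
    field = mean + L @ rng.standard_normal(len(x))
    return np.exp(field) if lognormal else field

# Tensile strength along a 2 m member on a 5 cm grid (illustrative values):
xs = np.linspace(0.0, 2.0, 41)
ft = correlated_field(xs, mean=3.0, std=0.5, corr_len=0.4, rng=0)
```

A short correlation length produces a rough field with many weak spots (hence more, closely spaced cracks); a long correlation length produces a nearly uniform strength, which is one way the reported dependence of crack spacing on the random field arises.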
Procedia PDF Downloads 157
584 Identification of Electric Energy Storage Acceptance Types: Empirical Findings from the German Manufacturing Industry
Authors: Dominik Halstrup, Marlene Schriever
Abstract:
Industry, as one of the main energy consumers, is of critical importance along the way of transforming the energy system towards renewable energies. The distributed character of the energy transition demands that further flexibility be introduced to the grid. In order to shed further light on the acceptance of electric energy storage (ESS) from an industrial point of view, this study examines the German manufacturing industry. The analysis in this paper uses data from a survey of 101 manufacturing companies in Germany. As part of a two-stage research design, both qualitative and quantitative data were collected. Based on a literature review, an acceptance concept was developed and four user types were identified and incorporated in the questionnaire: (dedicated) user, impeded user, forced user, and (dedicated) non-user. Both descriptive and bivariate analyses were deployed to identify the level of acceptance in the different organizations. After a factor analysis had been conducted, variables were grouped to form independent acceptance factors. Out of the 22 organizations that show a positive attitude towards ESS, 5 have already implemented ESS; they can therefore be considered 'dedicated users'. The remaining 17 organizations have a positive attitude but have not yet implemented ESS. The results suggest that profitability plays an important role, as do load-management systems that are already in place. Surprisingly, 2 organizations have implemented ESS even though they have a negative attitude towards it. This is an example of a 'forced user', where reasons of overriding importance or supporters with overriding authority might have forced the company to implement ESS. By far the biggest subset of the sample shows (critical) distance and can therefore be considered '(dedicated) non-users'.
The results indicate that the majority of the respondents have not yet thought through ESS for their own organization. For the majority of the sample one can therefore not speak of critical distance, but rather of a distance due to insufficient information and perceived unprofitability. This paper identifies the relative state of acceptance of ESS in the manufacturing industry as well as current reasons for hindrance and perspectives for future growth of ESS in an industrial setting from a policy level. The interest currently generated by the media could be channeled into a more substantial and individual discussion about ESS in an industrial setting. If the current perception of profitability could be addressed and communicated accordingly, ESS and their use in, for instance, cooperative business models could become a topic for more organizations in Germany and other parts of the world. As price mechanisms tend to favor existing technologies, policy makers need to further assess the use of ESS and acknowledge its positive effects when integrated in an energy system. The subfields of generation, transmission, and distribution become increasingly intertwined. New technologies and business models entering the market, such as ESS or cooperative arrangements, increase the number of stakeholders. Organizations need to find their place within this array of stakeholders.
Keywords: acceptance, energy storage solutions, German energy transition, manufacturing industry
Procedia PDF Downloads 225
583 Laboratory Assessment of Electrical Vertical Drains in Composite Soils Using Kaolin and Bentonite Clays
Authors: Maher Z. Mohammed, Barry G. Clarke
Abstract:
As an alternative to stone columns in fine-grained soils, it is possible to create stiffened columns of soil using electroosmosis (electroosmotic piles). The purpose of this research programme is to establish the effectiveness and efficiency of the process in different soils, and the aim of this study is to assess the capability of electroosmotic treatment in a range of composite soils. The combined electroosmotic and preloading equipment developed by Nizar and Clarke (2013) was used, with an octagonal array of anodes surrounding a single cathode in a nominal 250 mm diameter, 300 mm deep cylinder of soil and an 80 mm anode-to-cathode distance. Copper coiled springs were used as electrodes to allow the soil to consolidate either under an external vertical applied load or by electroosmosis. The equipment was modified to allow the temperature to be monitored during the test. Electroosmotic tests were performed on China Clay Grade E kaolin and calcium bentonite (Bentonex CB) mixed with sand fraction C (BS 1881: Part 131) at different ratios by weight (0, 23, 33, 50, and 67%), subjected to applied voltages of 5, 10, 15, and 20 V. The soil slurry was prepared by mixing the dry soil with water to 1.5 times the liquid limit of the soil mixture. The mineralogical and geotechnical properties of the tested soils were measured before the electroosmotic treatment began. In the electroosmosis cell tests, the settlement, expelled water, variation of electrical current and applied voltage, and generated heat were monitored throughout the 24 osmotic tests. Water content was measured at the end of each test. The electroosmotic tests are divided into three phases. In Phase 1, 15 kPa was applied to simulate a working platform and produce a uniform soil from the deposited slurry. In Phase 3, 50 kPa was used to simulate a surcharge load.
The electroosmotic treatment was performed only during Phase 2, where a constant voltage was applied through the electrodes in addition to the 15 kPa pressure. This phase was stopped when no further water was expelled from the cell, indicating that the electroosmotic process had stopped, either because the anode had degraded or because the flow due to the hydraulic gradient exactly balanced the electroosmotic flow, resulting in no net flow. Control tests for each soil mixture were carried out to assess the behaviour of soil samples subjected only to an increase of vertical pressure: 15 kPa in Phase 1 and 50 kPa in Phase 3. Analysis of the experimental results from this study showed a significant dewatering effect on the soil slurries. The water discharged by the electroosmotic treatment decreased as the sand content increased. Soil temperature increased significantly when electrical power was applied and dropped when the applied DC power was turned off or when the electrode degraded. The highest increase in temperature was found in the pure clays at the higher applied voltages, after about 8 hours of testing.
Keywords: electrokinetic treatment, electrical conductivity, electroosmotic consolidation, electroosmosis permeability ratio
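The dewatering described above is conventionally estimated with the Helmholtz-Smoluchowski relation, in which the electroosmotic flow velocity is proportional to the pore-fluid permittivity, the zeta potential of the clay surface, and the applied electric field, and inversely proportional to viscosity. A minimal sketch with illustrative property values; none of the numbers below were measured in the study, and only the 20 V / 80 mm electrode geometry comes from the abstract:

```python
def electroosmotic_velocity(epsilon, zeta, mu, e_field):
    """Helmholtz-Smoluchowski electroosmotic velocity (m/s):
        v = -(epsilon * zeta / mu) * E

    epsilon : permittivity of the pore fluid (F/m)
    zeta    : zeta potential of the clay surface (V, typically negative)
    mu      : dynamic viscosity of the pore fluid (Pa.s)
    e_field : applied electric field strength (V/m)
    """
    return -(epsilon * zeta / mu) * e_field

# Illustrative (hypothetical) values: water permittivity ~7.1e-10 F/m,
# zeta = -40 mV, mu = 1e-3 Pa.s; field = 20 V over the 80 mm gap = 250 V/m.
v = electroosmotic_velocity(7.1e-10, -0.040, 1.0e-3, 20.0 / 0.080)
# positive v: pore water migrates from anode towards cathode,
# which is why water is collected and expelled at the cathode
```

The relation also suggests why dewatering fell with sand content: coarser, less-charged grains reduce the effective zeta potential of the mixture.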
Procedia PDF Downloads 166
582 Telomerase, a Biomarker in Oral Cancer Cell Proliferation and Tool for Its Prevention at Initial Stage
Authors: Shaista Suhail
Abstract:
As the cancer population increases sharply, the incidence of oral squamous cell carcinoma (OSCC) is also expected to increase. Oral carcinogenesis is a highly complex, multistep process which involves the accumulation of genetic alterations that lead to the induction of proteins promoting cell growth (encoded by oncogenes) and increased enzymatic (telomerase) activity promoting cancer cell proliferation. The global increase in frequency and mortality, as well as the poor prognosis of oral squamous cell carcinoma, has intensified current research efforts in the field of prevention and early detection of this disease. Advances in the understanding of the molecular basis of oral cancer should help in the identification of new markers. The study of the carcinogenic process of oral cancer, including continued analysis of new genetic alterations along with their temporal sequencing during initiation, promotion, and progression, will allow us to identify new diagnostic and prognostic factors, which will provide a promising basis for the application of more rational and efficient treatments. Telomerase activity has readily been found in most cancer biopsies, in premalignant lesions, and in germ cells, whereas it is generally absent in normal tissues. It is known to be induced upon immortalization or malignant transformation of human cells, such as in oral cancer cells. Maintenance of telomeres plays an essential role during the transformation from a precancerous to a malignant stage. Mammalian telomeres are specialized nucleoprotein structures composed of large concatemers of the guanine-rich sequence 5′-TTAGGG-3′. The roles of telomeres in regulating both genome stability and replicative immortality seem to contribute in essential ways to cancer initiation and progression. It is concluded that telomerase activity can be used as a biomarker for the diagnosis of malignant oral cancer and as a target for inactivation in chemotherapy or gene therapy.
Its expression will also prove to be an important diagnostic tool as well as a novel target for cancer therapy. The activation of telomerase may be an important step in tumorigenesis, which can be controlled by inactivating its activity during chemotherapy. The expression and activity of telomerase are indispensable for cancer development. There are no drugs which are extremely effective in treating oral cancers. There is a general call for new drugs or methods that are highly effective in cancer treatment, possess low toxicity, and have a minor environmental impact. Some novel natural products also offer opportunities for innovation in drug discovery. Natural compounds isolated from medicinal plants, as rich sources of novel anticancer drugs, have been of increasing interest, with some possessing enzyme (telomerase) blocking properties. The alarming reports of increasing cancer cases raise awareness amongst clinicians and researchers of the need to investigate newer drugs with low toxicity.
Keywords: oral carcinoma, telomere, telomerase, blockage
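The telomeric repeat 5′-TTAGGG-3′ mentioned above is a literal six-base motif, so detecting tandem arrays of it in a DNA sequence is a simple pattern-matching exercise. A minimal sketch on a toy (hypothetical) sequence, not a bioinformatics pipeline:

```python
import re

TELOMERE_REPEAT = "TTAGGG"  # human telomeric repeat, 5'-TTAGGG-3'

def longest_telomeric_run(seq: str) -> int:
    """Return the longest run of consecutive TTAGGG repeats in seq."""
    runs = re.findall(r"(?:TTAGGG)+", seq.upper())
    return max((len(r) // len(TELOMERE_REPEAT) for r in runs), default=0)

# Toy sequence (hypothetical) containing three consecutive repeats:
n = longest_telomeric_run("ccgaTTAGGGTTAGGGTTAGGGacgt")  # n == 3
```

Real telomeres carry thousands of such repeats; their progressive shortening, and its reversal by telomerase, is the mechanism the abstract proposes to exploit as a biomarker.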
Procedia PDF Downloads 175
581 Organic Light Emitting Devices Based on Low Symmetry Coordination Structured Lanthanide Complexes
Authors: Zubair Ahmed, Andrea Barbieri
Abstract:
The need to reduce energy consumption has prompted a considerable research effort towards developing alternative energy-efficient lighting systems to replace conventional light sources (i.e., incandescent and fluorescent lamps). Organic light emitting device (OLED) technology offers the distinctive possibility of fabricating large-area flat devices by vacuum or solution processing. Lanthanide β-diketonate complexes, owing to the unique photophysical properties of Ln(III) ions, have been explored as emitting layers in OLED displays and in solid-state lighting (SSL) in order to achieve high efficiency and color purity. For such applications, excellent photoluminescence quantum yield (PLQY) and stability are the two key points, which can be achieved simply by selecting the proper organic ligands around the Ln ion in the coordination sphere. Regarding strategies to enhance the PLQY, the most common is the suppression of radiationless deactivation pathways due to the presence of high-frequency oscillators (e.g., O-H, C-H groups) around the Ln centre. Recently, a different approach to maximize the PLQY of Ln(β-DKs) has been proposed (named 'Escalate Coordination Anisotropy', ECA). It is based on the assumption that coordinating the Ln ion with different ligands will break the centrosymmetry of the molecule, leading to less-forbidden transitions (loosening the constraints of the Laporte rule). OLEDs based on such complexes are available, but with low efficiency and stability. In order to obtain efficient devices, there is a need to develop new Ln complexes with enhanced PLQYs and stabilities. For this purpose, Ln complexes, both visible- and near-infrared (NIR)-emitting, with various coordination structures based on fluorinated/non-fluorinated β-diketones and O/N-donor neutral ligands were synthesized using a one-step in situ method.
In this method, the β-diketones, base, LnCl₃·nH₂O, and neutral ligands were mixed in a 3:3:1:1 molar ratio in ethanol, which gave air- and moisture-stable complexes. They were then characterized by means of elemental analysis, NMR spectroscopy, and single-crystal X-ray diffraction. Thereafter, their photophysical properties were studied to select the best complexes for the fabrication of stable and efficient OLEDs. Finally, the OLEDs were fabricated and investigated using these complexes as emitting layers, along with other organic layers such as NPB, N,N′-di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine (hole-transport layer); BCP, 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline (hole blocker); and Alq₃ (electron-transport layer). The layers were sequentially deposited under a high-vacuum environment by thermal evaporation onto ITO glass substrates. Moreover, co-deposition techniques were used to improve charge transport in the devices and to avoid quenching phenomena. The devices show strong electroluminescence at 612, 998, 1064, and 1534 nm, corresponding to the ⁵D₀ → ⁷F₂ (Eu), ²F₅/₂ → ²F₇/₂ (Yb), ⁴F₃/₂ → ⁴I₉/₂ (Nd), and ⁴I₁₃/₂ → ⁴I₁₅/₂ (Er) transitions. All the fabricated devices show good efficiency as well as stability.
Keywords: electroluminescence, lanthanides, paramagnetic NMR, photoluminescence
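The emission wavelengths listed above map directly onto transition energies via the standard photon-energy relation E = hc/λ, which in convenient units is approximately 1239.84 eV·nm divided by the wavelength in nanometres. A minimal sketch converting the reported lines; the eV figures in the comments are routine conversions, not values from the paper:

```python
def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV: E = hc / lambda ~= 1239.84 / lambda(nm)."""
    return 1239.84 / wavelength_nm

# Emission lines reported above (nm): Eu 612, Yb 998, Nd 1064, Er 1534
energies = {nm: photon_energy_ev(nm) for nm in (612, 998, 1064, 1534)}
# Eu(III) red line at 612 nm is ~2.03 eV; the Er(III) line at 1534 nm
# is ~0.81 eV, in the telecom C-band region
```

This ordering (Eu visible, Yb/Nd/Er near-infrared) is why the same device architecture can target either display or telecom applications simply by swapping the lanthanide.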
Procedia PDF Downloads 121
580 Green Space and Their Possibilities of Enhancing Urban Life in Dhaka City, Bangladesh
Authors: Ummeh Saika, Toshio Kikuchi
Abstract:
Population growth and urbanization are global phenomena. With the rapid progress of technology, many cities in the international community are facing serious problems of urbanization. There is no doubt that urbanization will have a significant impact on the ecology, economy, and society at local, regional, and global levels. The inhabitants of Dhaka city suffer from a lack of proper urban facilities. Green spaces are needed for the different functional and leisure activities of urban dwellers. Yet with growing densification, a number of green spaces have been transformed into open space in Dhaka city. As a result, the greenery of the city decreases gradually. Moreover, the existing green space is frequently threatened by encroachment. The role of green space, at both community and city level, is important for improving the natural environment and social ties for future generations. Therefore, green space needs to be made more effective for public interaction. The main objective of this study is to address the effectiveness of the urban green space (urban parks) of Dhaka city. Two approaches are used: first, analysis of the long-term spatial changes of urban green space using GIS, and second, investigation of the relationship of the urban park network with the physical and social environment. The case study site covers eight urban parks of the Dhaka metropolitan area of Bangladesh. Two aspects (physical and social) are applied in this study. For the physical aspect, satellite images and aerial photos of different years are used to find out the changes in the urban parks. For the social aspect, the methods used are questionnaire survey, interviews, observation, photographs, sketches, and previous information about the parks to analyze their social environment. After calculation of all data by descriptive statistics, the results are shown on maps using GIS.
By physical size, the parks of Dhaka city fall into four classes: small, medium, large, and extra large. The observed results showed that the physical and social environment of urban parks varies with their size. In small parks, the physical environment is moderate, owing to new tree planting and area expansion. In medium parks, however, the physical environment is poor, with, for example, tree loss and an increase in exposed soil. The physical environment of large and extra large parks, by contrast, is in good condition because of plentiful vegetation and good management. In terms of the social environment, people come to small parks mainly from the surrounding area and use them chiefly as waiting places; people come to medium parks from different places to attend various occasions; and people come to large and extra large parks from every part of the city for tourism. Urban parks are an important source of green space and influence both the physical and social environment of the urban area. Nowadays, green space is gradually decreasing and being converted into open space. This research reveals that changes in urban parks influence both the physical and social environment and thereby affect urban life.
Keywords: physical environment, social environment, urban life, urban parks
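The four-way size classification used in the study can be sketched as a simple lookup. The hectare thresholds below are illustrative assumptions only, since the abstract does not report the actual cut-offs used:

```python
def classify_park(area_ha):
    # Four size classes as in the study; the hectare thresholds are
    # assumptions made for illustration, not the study's actual cut-offs.
    if area_ha < 2:
        return "Small"
    elif area_ha < 10:
        return "Medium"
    elif area_ha < 50:
        return "Large"
    return "Extra Large"
```

Once each park's area is digitised from the satellite images, such a rule lets the physical and social observations be aggregated per size class.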
Procedia PDF Downloads 429
579 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities
Authors: Evgeniya V. Gavrilova, Sofya S. Belova
Abstract:
The present study aimed to clarify the relationship between general intellectual abilities and efficiency in a free recall task and a rhymed word generation task after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it seems crucial to examine how efficiently incidentally presented information is processed in a cognitive task, and how this relates to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words; each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was present in each pair, so the words' semantics were processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words could be rhymed or unrhymed, but this phonemic characteristic of the stimuli was processed incidentally. Participants were then asked to produce as many rhymes as they could to new words; the stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. Finally, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, the intentionally processed focal stimuli had an advantage in recall over the peripheral stimuli, and all rhymed stimuli were recalled more effectively than unrhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral rather than focal stimuli. Furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task.
Thus, the incidentally processed information had a supplementary influence on the efficiency of stimulus processing in both the free recall and the word generation tasks. Different patterns of correlations between intellectual abilities and processing efficiency for the different stimulus types were revealed in the two tasks. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli but was not related to performance on the rhymed word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli. In the rhymed word generation task, verbal intelligence correlated negatively with generation based on focal stimuli and positively with generation based on all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks; incidental information processing thus appears to be crucial for subsequent cognitive performance. Second, incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively in subsequent cognitive tasks by participants with high verbal reasoning ability. This implies that general intellectual abilities can benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the Grant of the President of the Russian Federation for young PhD scientists (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.
Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing
Procedia PDF Downloads 231
578 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model
Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond
Abstract:
The surveillance of infectious diseases is necessary to describe their occurrence and to help plan, implement, and evaluate risk mitigation activities. However, when surveillance is based on serological tests, the exact number of detected cases may remain unknown because identifying seroconversion can be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses that exhibited at least one positive result in serology by viral neutralization test between 2006 and 2013 were used for the analysis (n=1,645). The data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, and epidemiologists) was reached to define seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model in R, with the number of horses tested in each infected town as the binomial denominator. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013.
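The expert consensus rule for seroconversion can be written as a small predicate. Encoding a negative serology as a titer of 0 is an assumption made here for illustration:

```python
def is_seroconversion(prev_titer, curr_titer):
    # Consensus rule from the study: seroconversion is either a change
    # from a negative titer to a titer of at least 32, or a three-fold
    # or greater increase between consecutive annual titers.
    # A negative result is encoded as 0 (an illustrative assumption).
    if prev_titer == 0:
        return curr_titer >= 32
    return curr_titer >= 3 * prev_titer
```

Applied to each horse's consecutive annual titers, this yields the per-town seroconversion counts that feed the capture-recapture model.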
Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the total number of outbreaks that occurred (including unreported outbreaks), as estimated by the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval, CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting antibody titers could be useful for other human and animal diseases and zoonoses when the literature lacks accurate information about the serological response of naturally infected subjects. This study shows how capture-recapture methods can help estimate the sensitivity of an imperfect surveillance system and add value to surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high, supporting its relevance for preventing the disease from spreading.
Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance
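The unilist ZTB estimate can be sketched as follows. This is a minimal frequentist sketch: it fits a single per-horse detection probability by grid-search maximum likelihood and applies a Horvitz-Thompson correction for the unobserved towns, whereas the study fitted the model in R with Bayesian inference (hence the credible intervals it reports). Function names and the toy usage are illustrative only:

```python
import math

def ztb_loglik(p, towns):
    # Zero-truncated binomial log-likelihood. `towns` is a list of
    # (n_tested, k_positive) pairs, one per DETECTED town, so every
    # k_positive is at least 1 (towns with zero positives are unobservable).
    ll = 0.0
    for n, k in towns:
        ll += (math.log(math.comb(n, k))
               + k * math.log(p)
               + (n - k) * math.log(1.0 - p)
               - math.log(1.0 - (1.0 - p) ** n))  # zero-truncation term
    return ll

def estimate_sensitivity(towns, grid=999):
    # Grid-search MLE of the per-horse seroconversion probability p.
    p_hat = max((i / (grid + 1) for i in range(1, grid + 1)),
                key=lambda p: ztb_loglik(p, towns))
    # Horvitz-Thompson estimate of the total number of infected towns
    # (outbreaks), including those missed because all tests were negative.
    n_hat = sum(1.0 / (1.0 - (1.0 - p_hat) ** n) for n, _ in towns)
    sensitivity = len(towns) / n_hat
    return p_hat, n_hat, sensitivity
```

For example, `estimate_sensitivity([(10, 2), (8, 1), (12, 3), (20, 1), (5, 2)])` returns the fitted probability, the estimated total number of outbreak towns, and their ratio, i.e. the estimated surveillance sensitivity.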
Procedia PDF Downloads 298
577 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes
Authors: Iris Vural Gursel, Andrea Ramirez
Abstract:
Effective utilization of lignin is an important means of developing economically profitable biorefineries. The current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will therefore be needed to carry lignin transformation well beyond combustion for energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has improved the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes was made, with a comparison against the more established lignin pyrolysis route. The aim is to provide insight into potential performance and potential hotspots, in order to guide experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a base design was developed, followed by process modelling in Aspen using experimental yields. A design capacity of 200 kt/year of lignin feed was chosen, equivalent to a 1 Mt/year lignocellulosic biorefinery. The downstream equipment was modelled to achieve separation of the defined product streams. For determining the external utility requirement, heat integration was considered, and where possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs, with return on investment (ROI) and payback period (PBP) as indicators.
The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found to be especially demanding in the hydrothermal upgrading process, owing to the presence of a significant amount of unconverted lignin (34%) and water; external utility requirements were also high. Because of the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were highest for the direct HDO process (20 M€/year more than the benchmark) owing to the use of hydrogen. Because of its high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process is found to be feasible, with a positive net present value, although it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties; nevertheless, they are useful for comparing alternatives and identifying whether a certain process merits further consideration. Among the three processes investigated here, the direct HDO process appears the most promising.
Keywords: biorefinery, economic assessment, lignin conversion, process design
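As a minimal sketch of the two indicators, assuming the simple non-discounted definitions (the abstract does not spell out the formulas, and the reported net present value is not covered here):

```python
def roi_and_payback(capital_meur, revenue_meur_yr, opex_meur_yr):
    # Assumed simple, non-discounted definitions:
    #   ROI = annual net profit / capital invested
    #   PBP = capital invested / annual net cash flow
    profit = revenue_meur_yr - opex_meur_yr  # M EUR per year
    return profit / capital_meur, capital_meur / profit
```

With illustrative figures of 100 M€ capital, 30 M€/year revenue, and 18 M€/year operating cost, this gives an ROI of 12% and a payback of about 8.3 years; the study's own figures would of course come from the Aspen material and energy balances.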
Procedia PDF Downloads 261
576 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys
Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio
Abstract:
Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace engineering. In recent decades, the ever-growing interest in these materials has motivated several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation accounts for the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy, related to the thermal and mechanical variables. This phrasing of the model is new; it allows the model to be discussed from both a theoretical and a numerical point of view, and it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven to be unconditionally stable and convergent. The corresponding algorithm is then implemented, under the assumption of a space-homogeneous temperature field, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem, and the iterative scheme is solved by a generalized Newton method.
Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, the heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates; moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling
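In schematic terms, and with symbols chosen here purely for illustration (the abstract does not state the exact functional setting), a generalized gradient flow of the total entropy and its semi-implicit time discretization can be written as:

```latex
% Schematic sketch only: z collects the mechanical and internal variables
% together with the temperature, S is the total entropy, \psi a convex
% dissipation potential, and \tau the time step.
\partial \psi(\dot z) \ni \mathrm{D}_z S(z)
\qquad \text{(generalized gradient flow of the total entropy)}

% Semi-implicit step: selected arguments are frozen at the previous time.
z^{n+1} \in \operatorname*{arg\,max}_{z}
\Big\{\, S\big(z;\, z^{n}\big) \;-\; \tau\,
\psi\!\Big(\frac{z - z^{n}}{\tau}\Big) \Big\}
```

Maximizing the entropy net of the dissipation at each step is what makes the dissipativity of the discrete flow immediate, consistent with the stability claim above.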
Procedia PDF Downloads 221
575 Evaluation of a Start-Up Business Strategy in the Movie Industry: Case Study of Visinema
Authors: Stacia E. H. Sitohang, S.Mn., Socrates Rudy Sirait
Abstract:
The first movie theater in Indonesia was established in December 1900. The movie industry started with the penetration of international films; after a while, local producers began to rise and create Indonesian movies, and the industry has grown through ups and downs ever since. In 2008, Visinema was founded in Jakarta, Indonesia, by Angga Dwimas Sasongko, one of the most respected movie directors in Indonesia. After earning achievements and recognition, Visinema chose to grow the company horizontally rather than only vertically in pursuit of further similar achievements. Visinema chose to build an ecosystem that enables it to capture many more opportunities and generate business sustainability, proceeding as an agile company and creating several business subsidiaries to support the development of the company's intellectual property (IP). This research was conducted through interviews with key persons in the company and a questionnaire to gather market insights regarding Visinema. The company is able to transform its IP, which initially started from movies, into different kinds of business models. Interestingly, Angga chose a start-up approach to build Visinema. In 2019, the company successfully gained Series A funding from Intudo Ventures and obtained various other investment schemes to support the business. In early 2020, the Covid-19 pandemic negatively impacted many industries in Indonesia, especially the entertainment and leisure businesses. Fortunately, Visinema did not face any significant survival problems during the pandemic; there were no lay-offs or work-hour reductions. Instead, the company was thinking about much bigger opportunities and problems. While other companies suffered during the pandemic, Visinema created the first focused transactional video on demand (TVOD) service in Indonesia, named Bioskop Online. The platform was created to keep the company innovating and adapting to the new online market that resulted from the Covid-19 pandemic.
Beyond the digital platform, Visinema invested heavily in animation to target the kids-and-family segment, believing that penetrating the technology and animation market will be the biggest opportunity on Visinema's road map. Alongside these opportunities, Visinema also faces problems. The first is company brand positioning: Angga, as the founder, feels the need to detach his name from the brand image of Visinema to create system sustainability and scalability. Second, the company has to create a strategy for refocusing on a particular business area in order to maintain and improve its competitive advantages. Third, IP piracy is a huge structural problem in Indonesia; the company considers IP thieves, rather than other production companies, to be its biggest competitors. As recommendations, we suggest a set of branding and management strategies to detach the founder's name from Visinema's brand and improve its competitive advantages. We also suggest that Visinema invest in system building to prevent IP piracy in the entertainment industry, which could later become another business subsidiary of Visinema.
Keywords: business ecosystem, agile, sustainability, scalability, start-up, intellectual property, digital platform
Procedia PDF Downloads 138
574 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses
Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen
Abstract:
Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health, and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw funded, 2014-2018) is to develop an online education and communication One Health platform, "eZoon", for the general public and for professionals working in the veterinary, human health, and public health domains, to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health, and public health domains participated. Key stakeholders were identified using business modeling techniques previously used for the design and implementation of antibiotic stewardship interventions, consisting of a literature scan, expert recommendations, and snowball sampling. We used a stakeholder salience approach to rank stakeholders according to their power, legitimacy, and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers.
Results: The following key values, among others, were identified: (a) the need for improved mutual awareness between the veterinary and human health fields; (b) information exchange between veterinary and human health, particularly at the regional level; (c) legal regulations that match daily practice; (d) professionals and the general public need to be addressed separately, using tailored language and information; (e) information needs to be of value to professionals (relevant, important, accurate, and with financial or other important consequences if ignored) in order to be picked up; and (f) the need for accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting the key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level are trained in shared decision making in time-critical outbreak situations, a smart question-and-answer (Q&A) system for the general public tailored to different user profiles, and social media to inform the general public adequately during outbreaks.
Keywords: ehealth, one health, risk communication, stakeholder, zoonosis
Procedia PDF Downloads 286
573 Analysis of the Potential of Biomass Residues for Energy Production and Applications in New Materials
Authors: Sibele A. F. Leite, Bernno S. Leite, José Vicente H. D´Angelo, Ana Teresa P. Dell’Isola, Julio César Souza
Abstract:
The generation of bioenergy is one of the oldest and simplest applications of biomass and is one of the safest options for minimizing greenhouse gas emissions and replacing fossil fuels. In addition, the development of technologies for the energy conversion of biomass, in parallel with advances in biotechnology and engineering research, has opened new opportunities for the exploitation of biomass. Agricultural residues offer great potential for energy use, and Brazil holds a prominent position in the production and export of agricultural products such as banana and rice. Despite the economic importance and growth prospects of these activities, and the increasing volume of agricultural waste, these residues are rarely exploited for energy or for the production of new materials. Brazil produces almost 10.5 million tons/year of rice husk and 26.8 million tons/year of banana stem. The aim of this study was therefore to analyse the potential of these biomass residues for energy production and for applications in new materials. Rice husk and banana stem were characterized physicochemically using the following parameters: organic carbon, Kjeldahl nitrogen, proximate analysis, FT-IR spectroscopy, thermogravimetric analysis (TG), calorific value, and silica content. Rice husk and banana stem presented attractive superior calorific values (11.5 to 13.7 MJ/kg), which may be compared with charcoal (21.25 MJ/kg). These results are due to the high organic matter content. According to the proximate analysis, the biomass has a high carbon content (fixed and volatile) and low moisture and ash contents. In addition, data obtained by the Walkley-Black method indicate that most of the carbon present in the rice husk (50.5 wt%) and in the banana stalk (35.5 wt%) is organic carbon (readily oxidizable).
Organic matter was also detected by the Kjeldahl method, which gives the nitrogen values (mostly in organic form) for both residues: 3.8 and 4.7 g/kg for rice husk and banana stem, respectively. TG and DSC analyses support the previous results, as they provide information about the thermal stability of the samples and allow thermal behavior to be correlated with chemical composition. According to the thermogravimetric curves, there were two main stages of mass loss. The first and smaller one occurred below 100 °C and is attributable to water loss; the second occurred between 200 and 500 °C and indicates decomposition of the organic matter. Within this broad event, the main loss occurred between 250 and 350 °C and is due to sugar decomposition (readily oxidizable components). Above 350 °C, the mass loss of the biomass may be associated with lignin decomposition. The spectroscopic characterization provided only qualitative information about the organic matter, but the spectra showed absorption bands around 1030 cm⁻¹ that may be assigned to silicon-containing species. This result is expected for rice husk and deserves further investigation for banana stalk, as it could bring a different perspective on this biomass residue.
Keywords: rice husk, banana stem, bioenergy, renewable feedstock
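As a rough order-of-magnitude check, the national residue tonnages and calorific values quoted above can be combined into a gross energy potential. The conversion below is a simplified sketch that ignores moisture, collection losses, and conversion efficiency:

```python
def annual_energy_pj(tonnes_per_year, mj_per_kg):
    # Gross potential: t/yr × 1000 kg/t × MJ/kg, then MJ → PJ (1 PJ = 1e9 MJ).
    return tonnes_per_year * 1000 * mj_per_kg / 1e9

# ~10.5 Mt/yr of rice husk at a mid-range ~12 MJ/kg:
rice_husk_pj = annual_energy_pj(10.5e6, 12.0)  # 126.0 PJ/yr, gross
```

Even this crude estimate illustrates why the residues are attractive feedstocks relative to leaving them unexploited.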
Procedia PDF Downloads 279
572 The Study of Mirror Self-Recognition in Wildlife
Authors: Azwan Hamdan, Mohd Qayyum Ab Latip, Hasliza Abu Hassim, Tengku Rinalfi Putra Tengku Azizan, Hafandi Ahmad
Abstract:
Animal cognition research provides some evidence for self-recognition, described as the ability to recognize oneself as an individual separate from the environment and from other individuals. The mirror self-recognition (MSR), or mark, test is a behavioral technique for determining whether an animal has the ability of self-recognition or self-awareness in front of a mirror. It also describes the capability of an animal to be aware of and make judgments about its new environment. The objectives of this study were therefore to measure and compare the ability of wild and captive wildlife in mirror self-recognition. Wild animals in the Royal Belum Rainforest, Malaysia, were identified based on animal trails and salt-lick grounds. Acrylic mirrors with wooden frames (200 x 250 cm) were placed near animal trails, and camera traps (Bushnell, UK) with motion-detection infrared sensors were placed near the trails or hiding spots. For captive wildlife, animals such as the Malayan sun bear (Helarctos malayanus) and the chimpanzee (Pan troglodytes) were selected from Zoo Negara Malaysia. The captive animals were marked with odorless, non-toxic white paint on the forehead, and an acrylic mirror with a wooden frame (200 x 250 cm) and a video camera were placed near the cage. The behavioral data were analyzed using an ethogram and classified into four stages of MSR: social responses, physical inspection, repetitive mirror-testing behavior, and realization of seeing themselves. Results showed that wild animals such as the barking deer (Muntiacus muntjak) and the long-tailed macaque (Macaca fascicularis) increased their physical inspection (e.g., inspecting the reflected image) and repetitive mirror-testing behavior (e.g., rhythmic head and leg movements). This suggests that the ability to use a mirror is most likely related to learning processes and cognitive evolution in wild animals. However, the sun bear's behaviors were inconsistent and did not clearly pass through the four stages of MSR.
This result suggests that keeping the Malayan sun bear in captivity may promote communication and familiarity between conspecifics. Interestingly, the chimpanzee showed positive social responses (e.g., manipulating its lips) and physical inspection (e.g., using a hand to inspect part of the face) when facing the mirror. However, both animals showed no response to the mark itself, apparently because they lost interest in it and realized that it was inconsequential. Overall, the results suggest that the capacity for MSR is the beginning of a developmental process of self-awareness and mental state attribution. In addition, our findings show that self-recognition may rest on differences in neurological complexity and level of encephalization among animals. Research on self-recognition in animals will thus have profound implications for understanding their cognitive abilities, and can inform efforts to help animals through enhanced management, the design of enclosures and exhibits for captive individuals, and programs to re-establish populations of endangered or threatened species.
Keywords: mirror self-recognition (MSR), self-recognition, self-awareness, wildlife
Procedia PDF Downloads 273
571 High Prevalence of Asymptomatic Dengue among Healthy Adults in Southern Malaysia: A Longitudinal Prospective Study
Authors: Nowrozy Jahan, Sharifah Syed Hassan, Daniel Reidpath
Abstract:
In recent decades, Malaysia has become a dengue hyper-endemic country with the co-circulation of the four dengue virus (DENV) serotypes. The number of symptomatic dengue cases has maintained an increasing trend since 1995 and rose sharply in 2014. The four DENV serotypes have been co-circulating since 2000, and this pattern of cyclical dominance of subtypes has contributed to frequent major dengue epidemics in Malaysia. Since 2012, different Malaysian states have been dominated by different serotypes. This study aims to estimate the burden of asymptomatic dengue in a healthy adult population, which may act as a potential source of further symptomatic dengue infection, and to identify the predominant DENV serotypes circulating at the community level. A longitudinal prospective community-based study was conducted in the Segamat district of Johor state, in the southern part of Malaysia, where the number of reported dengue cases increased steadily over the period 2013-2015. More specifically, the study was conducted in and around Kampung Abdullah in the Sungai Segamat sub-district, which was identified as a hotspot area over that period. The study was run by the Southeast Asia Community Observatory (SEACO), an ISO-certified research platform, in collaboration with the Ministry of Health Malaysia and Monash University Malaysia, from May 2015 to May 2016. In total, 277 apparently healthy respondents were enrolled and followed up as a cohort four times during the one-year study period, with blood collected at each round to detect serological markers of dengue. Of the 277 respondents, 184 (66%) completed all four rounds. Half of the respondents were in the 45-64 years age group, slightly more than half (59%) were female, most (69%) were Malay, and only 35% lived in urban areas.
At baseline, the study found a very high prevalence of exposure to the dengue virus: 89% of the respondents had serological evidence of previous dengue infection, and the majority did not know about it because they had never developed any symptoms of dengue fever; only 13% knew because they had developed symptoms. By the end of the one-year study period, 19% of respondents had developed a recent secondary dengue infection, likewise identified only by serological markers because they developed no symptoms (asymptomatic cases). The incidence of asymptomatic dengue was higher during the rainy season than during the dry season. All four dengue serotypes were identified in the serum of the infected respondents, with DENV-2 the most prominent. Further genetic analysis is ongoing to identify the association of HLA-B*46 and HLA-DRB1*08 with dengue resistance. This study provides evidence for policymakers to be aware of asymptomatic dengue infection, to develop tools for raising awareness of asymptomatic dengue among the general population, and to monitor community participation in order to strengthen individual- and community-level dengue prevention and control measures, at a time when there is neither a vaccine nor a specific treatment for dengue.
Keywords: asymptomatic, dengue, healthy adults, prospective study
Procedia PDF Downloads 130
570 Wildlife Communities in the Service of Extensively Managed Fishpond Systems – Advantages of a Symbiotic Relationship
Authors: Peter Palasti, Eva Kerepeczki
Abstract:
Extensive fish farming is one of the most traditional forms of aquaculture in Europe, usually practiced in large pond systems with earthen beds, where fish growth is based on natural feed and supplementary foraging. These farms have semi-natural environmental conditions, sustaining diverse wildlife communities that have complex effects on fish production and also provide a livelihood for many wetland-related taxa. Given these characteristics, such communities can be sources of various ecosystem services (ESs) that enhance the value and enable the multifunctional use of these artificially constructed and maintained production zones. To identify and estimate the whole range of the wildlife community's contributions, we conducted an integrated assessment in an extensively managed pond system in Biharugra, Hungary, where we studied 14 previously identified ESs: fish and reed production, water storage, water and air quality regulation, CO2 absorption, groundwater recharge, aesthetics, recreational activities, inspiration, education, scientific research, and the presence of semi-natural habitats and useful/protected species. Information on the ESs was collected through structured interviews with local experts from all major stakeholder groups, in which we also gathered information about the known forms, levels (none, low, high), and orientations (positive, negative) of the wildlife community's contributions. A quantitative analysis was then carried out: we calculated the total mean value of the services used between 2014 and 2016 and estimated the value and percentage of the contributions. For the quantification, we mainly used biophysical indicators, drawing on the available data and the empirical knowledge of the local experts. 
During the interviews, 12 of the previously listed services (85%) were said to be related to the wildlife community, comprising 5 fully dependent ESs (e.g., recreation, reed production) and 7 partially dependent ESs (e.g., inspiration, CO2 absorption) from our list. The orientation of the contributions was reported as positive in almost every case; however, in the case of fish production, the feeding habits of some wild species (Phalacrocorax carbo, Lutra lutra) caused significant losses in fish stocks during the study period. In the biophysical assessment, we calculated the total mean value of the services and quantified the wildlife community's contribution to the following services: fish and reed production, recreation, CO2 absorption, and the presence of semi-natural habitats and wild species. The combined results of our interviews and biophysical evaluations showed that the presence of the wildlife community not only greatly increased the productivity of the fish farms in Biharugra (with ~53% of natural yield generated by planktonic and benthic communities) but also enhanced the multifunctionality of the system by expanding the quality and number of its services. With these abilities, extensively managed fishponds could play an important role in the future as refugia for wetland-related services and species threatened by the effects of global warming.
Keywords: ecosystem services, fishpond systems, integrated assessment, wildlife community
Procedia PDF Downloads 115