Search results for: location routing problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9314


254 Fermented Fruit and Vegetable Discard as a Source of Feeding Ingredients and Functional Additives

Authors: Jone Ibarruri, Mikel Manso, Marta Cebrián

Abstract:

A high amount of food is lost or discarded in the world every year. In addition, in the last decades, an increasing demand for new alternative and sustainable sources of proteins and other valuable compounds has been observed in the food and feed sectors; therefore, the use of food by-products as nutrients for these purposes is very interesting from the environmental and economic point of view. However, the direct use of discarded fruit and vegetables, which in general have a low protein content, is not attractive as a feed ingredient except as a source of fiber for ruminants. Especially in the case of aquaculture, several alternatives to the use of fish meal and other vegetable protein sources have been extensively explored due to the scarcity of fish stocks and the unsustainability of fishing for these purposes. Fish mortality is also of great concern in this sector, as it greatly reduces economic feasibility, so the development of new functional and natural ingredients that could reduce the need for vaccination is also of great interest. In this work, several fermentation tests were carried out at lab scale using a selected mixture of fruit and vegetable discards from a wholesale market located in the Basque Country to increase their protein content and also to produce bioactive extracts that could be used as additives in aquaculture. Fruit and vegetable mixtures (60/40, w/w) were centrifuged to reduce humidity and crushed to a 2-5 mm particle size. Samples were inoculated with a selected Rhizopus oryzae strain and fermented for 7 days under controlled conditions (humidity between 65 and 75% and 28 ºC) in Petri plates (120 mm) in triplicate. The results indicated that the final fermented product presented a twofold protein content (from 13 to 28% d.w.). The fermented product was further processed to determine its possible functionality as a feed additive. Extraction tests were carried out to obtain an ethanolic extract (60:40 ethanol:water, v/v) and a remaining biomass that could also have applications in the food or feed sectors. The extract presented a polyphenol content of about 27 mg GAE/g d.w. with an antioxidant activity of 8.4 mg TEAC/g d.w. The remaining biomass is mainly composed of fiber (51%), protein (24%) and fat (10%). The extracts also presented antibacterial activity according to the results obtained in agar diffusion and Minimum Inhibitory Concentration (MIC) tests against several food and fish pathogen strains. In vitro digestibility was also assessed to obtain preliminary information on the expected effect of the extraction procedure on the digestibility of the fermented product. First results indicated that the biomass remaining after extraction does not seem to improve digestibility in comparison to the initial fermented product. These preliminary results show that fermented fruit and vegetables can be a useful source of functional ingredients for aquaculture applications and a substitute for other protein sources in the feed sector. Further validation will also be carried out through in vivo tests with trout and bass.

Keywords: fungal solid state fermentation, protein increase, functional extracts, feed ingredients

Procedia PDF Downloads 64
253 i-Plastic: Surface and Water Column Microplastics From the Coastal North Eastern Atlantic (Portugal)

Authors: Beatriz Rebocho, Elisabete Valente, Carla Palma, Andreia Guilherme, Filipa Bessa, Paula Sobral

Abstract:

The global accumulation of plastic in the oceans is a growing problem. Plastic is transported from its source to the oceans via rivers, which are considered the main route for plastic particles from land-based sources to the ocean. These plastics undergo physical and chemical degradation resulting in microplastics. The i-Plastic project aims to understand and predict the dispersion, accumulation and impacts of microplastics (5 mm to 1 µm) and nanoplastics (below 1 µm) in marine environments, from the tropical and temperate land-ocean interface to the open ocean, under distinct flow and climate regimes. Seasonal monitoring of the fluxes of microplastics was carried out in three coastal areas in Brazil, Portugal and Spain. The present work shows the first results of in-situ seasonal monitoring and mapping of microplastics in ocean waters between Ovar and Vieira de Leiria (Portugal), in which 43 surface water samples and 43 water column samples were collected in contrasting seasons (spring and autumn). The spring and autumn surface water samples were collected with a 300 µm and a 150 µm mesh neuston net, respectively. In both campaigns, water column samples were collected using a conical net with a 150 µm mesh. The experimental procedure comprises the following steps: i) sieving with a metal sieve; ii) digestion with potassium hydroxide to remove the organic matter originating from the sample matrix. After a filtration step, the content retained on a membrane is observed under a stereomicroscope, and physical and chemical characterization (type, color, size, and polymer composition) of the microparticles is performed. Results showed that 84% and 88% of the surface water and water column samples were contaminated with microplastics, respectively. Surface water samples collected during the spring campaign averaged 0.35 MP·m⁻³, while surface water samples collected during autumn recorded 0.39 MP·m⁻³. Water column samples from the spring campaign had an average of 1.46 MP·m⁻³, while those from the autumn recorded 2.54 MP·m⁻³. In the spring, all microplastics found were fibers, predominantly black and blue. In autumn, the dominant particles found in the surface waters were fibers, while in the water column fragments were dominant. In spring, the average size of surface water particles was 888 μm, while in the water column it was 1063 μm. In autumn, the average size of surface and water column microplastics was 1333 μm and 1393 μm, respectively. The main polymers identified by Attenuated Total Reflectance (ATR) and micro-ATR Fourier Transform Infrared (FTIR) spectroscopy in all samples were low-density polyethylene (LDPE), polypropylene (PP), polyethylene terephthalate (PET), and polyvinyl chloride (PVC). The significant difference in microplastic concentration in the water column between the two campaigns could be due to the mixing of the water masses caused by a storm that occurred that week. This work presents preliminary results, since the i-Plastic project is still in progress. These results will contribute to the understanding of the spatial and temporal dispersion and accumulation of microplastics in this marine environment.

Keywords: microplastics, Portugal, Atlantic Ocean, water column, surface water

Procedia PDF Downloads 80
252 Species Profiling of Scarab Beetles with the Help of Light Traps in the Western Himalayan Region of Uttarakhand

Authors: Ajay Kumar Pandey

Abstract:

White grubs (Coleoptera: Scarabaeidae), locally known as Kurmula, Pagra or Chinchu, are a major destructive pest in the western Himalayan region of the Uttarakhand state of India. Various crops such as cereals (upland paddy, wheat, and barley), vegetables (capsicum, cabbage, tomato, cauliflower, carrot, etc.) and some pulses (pigeon pea, green gram, black gram) are grown with limited availability of primary resources. Among the various limitations to the successful cultivation of these crops, the white grub has proved a major constraint for all crops grown in the hilly areas. The losses incurred due to white grubs are huge in commercial crops such as sugarcane, groundnut, potato, maize and upland rice. Moreover, it has proved a major constraint to potato production in the mid and higher hills of India. Adults emerge in May-June following the onset of the monsoon and thereafter defoliate apple, apricot, plum, and walnut during the night, while 2nd and 3rd instar grubs feed on live roots of cultivated as well as non-cultivated crops from August to January. A survey was conducted in the hilly districts (Pauri and Tehri) as well as the plains (Haridwar district) of Uttarakhand state. Beetles were collected from various locations from August to September over five consecutive years with the help of light traps and directly from host plants. Grubs were also collected by excavating one-square-meter plots at different locations and were reared in the laboratory to obtain the adults. During the collection, diseased or dead cadavers were also collected, brought to the laboratory, and the causal organisms identified. A total of 25 white grub species were identified, of which Holotrichia longipennis, Anomala dimidiata, Holotrichia lineatopennis, Maladera insanabilis and Brahmina sp. form a pest complex in different areas of Uttarakhand, where they cause severe damage to various crops. During the survey, it was observed that white grub beetles vary in their preference of host plant, and even in their choice of fruit and leaves of the host plant. It was also observed that a white grub species identified as Lepidiota mansueta Burmeister was causing severe havoc to the sugarcane crop grown in the major sugarcane-growing belt of Haridwar district. The study further revealed that Bacillus cereus, Beauveria bassiana, Metarhizium anisopliae, Steinernema and Heterorhabditis are the major disease-causing agents in the immature stages of white grubs under the rain-fed conditions of Uttarakhand, causing 15.55 to 21.63 percent natural mortality of grubs with an average of 18.91 percent. However, among the microorganisms, B. cereus was found to be significantly more efficient (7.03 percent mortality) than the entomopathogenic fungi (3.80 percent mortality) and nematodes (3.20 percent mortality).

Keywords: Lepidiota, profiling, Uttarakhand, whitegrub

Procedia PDF Downloads 220
251 Strength Properties of Ca-Based Alkali Activated Fly Ash System

Authors: Jung-Il Suh, Hong-Gun Park, Jae-Eun Oh

Abstract:

Recently, the use of long-span precast concrete (PC) construction has increased in modular construction such as storage buildings and parking facilities. When long-span PC members are applied, their weight should be reduced considering the lifting capacity of the crane and the self-weight of the member, and the use of structural lightweight concrete made with lightweight aggregate (LWA) can be considered. In the production of lightweight concrete, segregation and bleeding can occur due to the difference in specific gravity between cement (3.3) and lightweight aggregate (1.2-1.8), so reducing the weight of the binder is needed to prevent segregation between binder and aggregate. Lightweight precast concrete made with cementitious materials such as fly ash and ground granulated blast furnace slag (GGBFS), whose specific gravity is lower than that of cement, has also been studied as a substitute for cement. When only fly ash is used as a cementless binder, alkali activation of the fly ash is the most important chemical process, in which the original fly ash is dissolved by a strong alkaline medium during high-temperature steam curing. Because this curing condition is similar to the environment of precast member production, no additional process is needed. Na-based activators, generally used as strong alkali activators, have practical problems such as high pH toxicity and high manufacturing cost. Instead of a Na-based alkali activator, calcium hydroxide [Ca(OH)2] and sodium carbonate [Na2CO3] might be used, because they have a lower pH and are less expensive. This study explored the Ca(OH)2-Na2CO3-activated fly ash system in its microstructural aspects, strength and permeability using powder X-ray diffraction (XRD), thermogravimetry (TGA) and mercury intrusion porosimetry (MIP). On the basis of the microstructural analysis, the following conclusions are made. An increase of Ca(OH)2/FA wt.% did not improve the compressive strength. Also, Ca(OH)2/FA wt.% and Na2CO3/FA wt.% had little effect on the specific gravity in the saturated surface dry (SSD) and absolute dry (AD) conditions used to calculate water absorption. The binder is particularly appropriate for structural lightweight concrete because the specific gravity of the hardened paste does not differ from that of the lightweight aggregate. The XRD and TGA/DTG results did not present considerable differences in the types and quantities of hydration products depending on the w/b ratio, Ca(OH)2 wt.%, and Na2CO3 wt.%. In the case of a higher molar quantity of Ca(OH)2 relative to Na2CO3, the XRD pattern indicated unreacted Ca(OH)2, while no DTG peak was observed because of its small quantity. Thus, the unreacted Ca(OH)2 is present in too small a quantity to affect the mechanical performance. As a result of the MIP, the porosity volume related to capillary pores depends on the w/b ratio. At the same w/b ratio, the quantities of Ca(OH)2 and Na2CO3 have more influence on the pore size distribution than on the total porosity. While the average pore size decreased as Na2CO3/FA wt.% increased, the average pore size increased above 20 nm as Ca(OH)2/FA wt.% increased; pore size is inversely related to mechanical properties such as compressive strength and water permeability.

Keywords: Ca(OH)2, compressive strength, microstructure, fly ash, Na2CO3, water absorption

Procedia PDF Downloads 225
250 Probing Scientific Literature Metadata in Search for Climate Services in African Cities

Authors: Zohra Mhedhbi, Meheret Gaston, Sinda Haoues-Jouve, Julia Hidalgo, Pierre Mazzega

Abstract:

In the current context of climate change, supporting national and local stakeholders to make climate-smart decisions is necessary but still underdeveloped in many countries. To overcome this problem, the Global Framework for Climate Services (GFCS), implemented under the aegis of the United Nations in 2012, has initiated many programs in different countries. The GFCS contributes to the development of Climate Services, an instrument based on the production and transfer of scientific climate knowledge for specific users such as citizens, urban planning actors, or agricultural professionals. As cities concentrate economic, social and environmental issues that make them more vulnerable to climate change, the New Urban Agenda (NUA), adopted at Habitat III in October 2016, highlights the importance of paying particular attention to disaster risk management, climate and environmental sustainability and urban resilience. In order to support the implementation of the NUA, the World Meteorological Organization (WMO) has identified the urban dimension as one of its priorities and has proposed a new tool, the Integrated Urban Services (IUS), for more sustainable and resilient cities. In southern countries, climate services remain underdeveloped, which can be partially explained by problems related to their financing. In addition, it is often difficult to make climate change a priority in urban planning, given the more traditional urban challenges these countries face, such as massive poverty and high population growth. Climate services and Integrated Urban Services, particularly in African cities, are expected to contribute to the sustainable development of cities. These tools will help promote the acquisition of meteorological and socio-ecological data on their transformations, encourage coordination between national or local institutions providing various sectoral urban services, and should contribute to the achievement of the objectives defined by the United Nations Framework Convention on Climate Change (UNFCCC), the Paris Agreement, and the Sustainable Development Goals. To assess the state of the art on these various points, the Web of Science metadatabase is queried. A query combining the keywords "climate*" and "urban*" identifies more than 24,000 articles, the source of more than 40,000 distinct keywords (including synonyms and acronyms) which finely mesh the conceptual field of research. Requiring the occurrence of one or more names of the 514 African cities with more than 100,000 inhabitants, or of African countries, reduces this base to a smaller corpus of about 1,410 articles (2,990 keywords); 41 countries and 136 African cities are cited. The lexicometric analysis of the article metadata and the analysis of the structural indicators (various centralities) of the networks induced by the co-occurrence of expressions related more specifically to climate services show the development potential of these services, identify the gaps which remain to be filled for their implementation, and allow a comparison of the diversity of national and regional situations with regard to these services.
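
As an illustration of the kind of keyword co-occurrence network on which such centralities can be computed, the sketch below builds a small network from made-up keyword lists standing in for the Web of Science export; it is not the authors' pipeline, only a minimal example of the structural-indicator step.

```python
from itertools import combinations
from collections import Counter
import networkx as nx

# Hypothetical keyword lists, one per article, standing in for the ~1,410-article corpus.
article_keywords = [
    ["climate services", "urban planning", "adaptation"],
    ["urban planning", "heat island", "adaptation"],
    ["climate services", "resilience", "urban planning"],
]

# Count keyword co-occurrences within each article's keyword set.
pair_counts = Counter()
for kws in article_keywords:
    for a, b in combinations(sorted(set(kws)), 2):
        pair_counts[(a, b)] += 1

# Build the co-occurrence network and compute the centralities used as structural indicators.
G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)

degree_c = nx.degree_centrality(G)
betweenness_c = nx.betweenness_centrality(G, weight="weight")
print(sorted(degree_c.items(), key=lambda kv: -kv[1])[:5])
```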

Keywords: African cities, climate change, climate services, integrated urban services, lexicometry, networks, urban planning, web of science

Procedia PDF Downloads 195
249 Phytochemical and Vitamin Composition of Wild Edible Plants Consumed in South West Ethiopia

Authors: Abebe Yimer, Sirawdink Fikereyesus Forsido, Getachew Addis, Abebe Ayelign

Abstract:

Background: Oxidative stress has become an important health problem as it induces chronic diseases such as cancer, cardiovascular disease, diabetes, and neurodegenerative disease. Plant-sourced natural antioxidants have gained attention as synthetic antioxidants negatively impact human health. Wild edible plants are a cheap source of dietary medicine, mainly in rural communities in south-west Ethiopia and elsewhere in the country. Thus, the study aimed to determine the total phenol, flavonoid, antioxidant, vitamin C, and beta-carotene content of the wild edible plants Solanum nigrum L., Vigna membranacea A. Rich, Dioscorea praehensilis Benth., Trilepisium madagascariense D.C. and Cleome gynandra L. Methods: Methanol was used to extract samples of the oven-dried edible plants. Total phenolic compounds (TPC) were determined using the Folin-Ciocalteu method, whereas total flavonoid content (TFC) was determined using the aluminium chloride colorimetric method. Antioxidant activities were evaluated in vitro using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing antioxidant power (FRAP) assays. Additionally, beta-carotene was assessed using a spectrophotometric technique, whilst vitamin C was determined using a titration approach. Results: Total flavonoid content ranged from 0.85±0.03 to 11.25±0.01 mg CE/g in D. praehensilis Benth. tuber and C. gynandra L., respectively. Total phenolic compounds varied from 0.25±0.06 GAE/g in D. praehensilis Benth. tuber to 35.73±2.52 GAE/g in S. nigrum L. leaves. In the DPPH test, the highest antioxidant value (87.65%) was obtained in the S. nigrum L. leaves, whereas the smallest antioxidant activity (50.12%) was found in D. praehensilis Benth. tuber. Similarly, in the FRAP assay, D. praehensilis Benth. tuber showed the least reducing potential (49.16±2.13 mM Fe2+/100 g), whilst the highest reducing potential was presented by the S. nigrum L. leaves (188.12±1.13 mM Fe2+/100 g). The beta-carotene content ranged from 11.81±0.00 mg/100g in D. praehensilis Benth. tubers to 34.49±0.95 mg/100g in V. membranacea A. Rich leaves. The concentration of vitamin C ranged from 10.00±0.61 mg/100g in D. praehensilis Benth. tubers to 45±1.80 mg/100g in V. membranacea A. Rich leaves. The results showed high positive linear correlations between the TPC and TFC of the WEPs (r=0.828), as well as between FRAP and total phenolic content (r=0.943) and between FRAP and vitamin C (r=0.928). Conclusion: These findings showed that the total phenolic and flavonoid contents of Solanum nigrum L. and Cleome gynandra L., respectively, are abundant. These plants may be used as a natural supply of dietary antioxidants, which may be useful in preventing oxidative stress. The study's findings also showed that Vigna membranacea A. Rich leaves are a cheap source of vitamin C and beta-carotene for people who consume these wild greens. Additional research on the in vivo antioxidant activity, toxicological analysis, and promotion of these wild food plants for agricultural production should be taken into consideration.

Keywords: antioxidant activity, beta-carotene, flavonoids, phenolic content, vitamin C

Procedia PDF Downloads 102
248 Radiation Stability of Structural Steel in the Presence of Hydrogen

Authors: E. A. Krasikov

Abstract:

As the service life of an operating nuclear power plant (NPP) increases, the potential misunderstanding of the degradation of aging components must receive more attention. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13,000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and further penetrated into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E>0.5 MeV). Therefore, in the case of cladding failure, the problem arises of hydrogen (as a corrosion product) embrittlement of irradiated RPV steel because of exposure to the coolant. At present, when notable progress in plasma physics has been achieved, practical energy utilization from fusion reactors (FR) is determined by the state of materials science problems. The latter include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with extreme conditions of materials operation - irradiation environment, hydrogenation, thermocycling, etc. Limited data suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of these in-service synergistic phenomena on the properties of FR structural materials, we have studied hydrogen-irradiated steel interaction, including alternating hydrogenation and heat treatment (annealing). Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, post-irradiation annealing) were studied. The experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to prove the evidence of hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens which had been irradiated at relatively low temperature; however, the susceptibility decreases with increasing irradiation temperature. To evaluate methods for the RPV's residual lifetime evaluation and prediction, more work should be done on the irradiated metal-hydrogen interaction in order to monitor more reliably the status of irradiated materials.

Keywords: hydrogen, radiation, stability, structural steel

Procedia PDF Downloads 270
247 Cognitive Radio in Aeronautics: Comparison of Some Spectrum Sensing Techniques

Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki

Abstract:

The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft and on-board telecom systems. In addition, these systems are bulky in size, weight and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three different detection techniques within this use case: the energy detector, the matched filter and the cyclostationary detector. The spectrum in the proposed cognitive radio is allocated dynamically, with each cognitive radio following a cognitive cycle. Spectrum sensing is a crucial step whose goal is to gather data about the surrounding environment; a cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has priority to communicate, and when a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radio will be used jointly to alleviate the spectrum congestion issue in the aeronautical field; LDACS, for example, is a good candidate, as it provides two isolated data-links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, which is the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference affecting the QoS of the DME system. Spectrum sensing is a key step: it helps detect spectrum holes by determining whether the primary signal is present or not in a given frequency channel. A missed detection of the primary user's presence creates interference between PU and SU and will seriously affect the QoS of the legacy radio. In this study, brief definitions, concepts and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out, from the point of view of functions, hardware requirements and signal detection capability in the aeronautical field. The detection problem is then modeled with the three different methods (energy, matched filter, and cyclostationary), an algorithmic description of these detectors is given, and the performance of the algorithms is studied and compared. Simulations were carried out using MATLAB. The results were analyzed using ROC curves for SNRs between -10 dB and 20 dB. The three detectors have been tested with synthetic and real-world signals.
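
As a minimal illustration of the simplest of the three detectors, the sketch below (written in Python rather than the MATLAB used in the study, and assuming a complex-Gaussian PU signal in AWGN) estimates an energy-detector ROC by Monte Carlo; the sample counts, thresholds and signal model are illustrative only, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_statistic(x):
    """Energy-detector test statistic: average power of the received samples."""
    return np.mean(np.abs(x) ** 2)

def simulate_roc(snr_db, n_samples=1024, n_trials=2000, n_thresholds=200):
    """Monte Carlo ROC for a complex-Gaussian PU signal in AWGN (illustrative model)."""
    noise_var = 1.0
    sig_power = noise_var * 10 ** (snr_db / 10)
    h0 = np.empty(n_trials)
    h1 = np.empty(n_trials)
    for i in range(n_trials):
        noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))
        pu = np.sqrt(sig_power / 2) * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))
        h0[i] = energy_statistic(noise)        # H0: channel idle
        h1[i] = energy_statistic(pu + noise)   # H1: primary user transmitting
    thresholds = np.linspace(h0.min(), h1.max(), n_thresholds)
    pfa = np.array([(h0 > t).mean() for t in thresholds])  # false-alarm probability
    pd = np.array([(h1 > t).mean() for t in thresholds])   # detection probability
    return pfa, pd

pfa, pd = simulate_roc(snr_db=-10)
print(f"Pd at the threshold giving Pfa closest to 0.1: {pd[np.argmin(np.abs(pfa - 0.1))]:.2f}")
```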

Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio

Procedia PDF Downloads 174
246 Conceptualizing a Biomimetic Fablab Based on the Makerspace Concept and Biomimetics Design Research

Authors: Petra Gruber, Ariana Rupp, Peter Niewiarowski

Abstract:

This paper presents a concept for a biomimetic fablab as a physical space for education, research and development of innovation inspired by nature. Biomimetics as a discipline is finding increasing recognition in academia and has started to be institutionalized at universities in programs and centers. The Biomimicry Research and Innovation Center (BRIC) was founded in 2012 at the University of Akron as an interdisciplinary venture for the advancement of innovation inspired by nature and is part of a larger community fostering the biomimicry approach in the Great Lakes region of the US. With 30 faculty members, the center has representatives from the Colleges of Arts and Sciences (e.g., biology, chemistry, geoscience, and philosophy), Engineering (e.g., mechanical, civil, and biomedical), Polymer Science, and the Myers School of Arts. A platform for training PhDs in biomimicry (17 students currently enrolled) is co-funded by educational institutions and industry partners. Research at the center touches on many areas but is currently biased towards materials and structures, with highlights being materials based on principles found in spider silk and gecko attachment mechanisms. As biomimetics is a novel scientific discipline, there is little standardisation in programming and in the equipment of research facilities. As a field targeting innovation, design and prototyping processes are fundamental parts of the developments. For experimental design and prototyping, MIT's maker space concept seems to fit the requirements well, but facilities need to be more specialised in terms of access to biological systems and knowledge, and in specific research, production or conservation requirements. For the education and research facility BRIC, we develop the concept of a biomimetic fablab that ties into the existing maker space concept and creates the setting for the interdisciplinary research and development carried out in the program. The concept takes the process of biomimetics as a guideline to define core activities that shall be enhanced by the allocation of specific spaces and tools. The limitations of such a facility and the intersections with further specialised labs housed in the classical departments are of special interest. As a preliminary proof of concept, two biomimetic design courses carried out in 2016 are investigated in terms of the tools and infrastructure needed. The spring course was a problem-based biomimetic design challenge in collaboration with an innovation company interested in product design for assisted living and medical devices. The fall course was a solution-based biomimetic design course focusing on order and hierarchy in nature, with the goal of finding meaningful translations into art and technology. The paper describes the background of the BRIC center, identifies and discusses the process of biomimetics, evaluates the classical maker space concept and explores how these elements can shape the proposed research facility of a biomimetic fablab by examining the two design courses held in 2016.

Keywords: biomimetics, biomimicry, design, biomimetic fablab

Procedia PDF Downloads 294
245 Child Sexual Abuse Prevention: Evaluation of the Program “Sharing Mouth to Mouth: My Body, Nobody Can Touch It”

Authors: Faride Peña, Teresita Castillo, Concepción Campo

Abstract:

Sexual violence, and particularly child sexual abuse, is a serious problem all over the world, México included. Given its importance, there are several preventive and care programs run by the government and civil society across the country, but most of them are developed in urban areas even though these problems are especially serious in rural areas. Yucatán, a state in southern México, has one of the highest rates of child sexual abuse. Considering the above, the University Unit of Clinical Research and Victimological Attention (UNIVICT) of the Autonomous University of Yucatan designed, implemented and is currently evaluating the program named “Sharing Mouth to Mouth: My Body, Nobody Can Touch It”, a program to prevent child sexual abuse in rural communities of Yucatán, México. Its aim was to develop skills for the detection of risk situations, providing protection strategies and mechanisms for prevention through culturally relevant psycho-educative strategies that increase personal resources in children, in collaboration with parents, teachers, police and municipal authorities. The diagnosis identified children between 4 and 10 years of age as a particularly vulnerable population. The program ran during 2015 in primary schools of a municipality whose inhabitants are mostly Mayan. The aim of this paper is to present its evaluation in terms of effectiveness and efficiency. This evaluation included documentary analysis of the work done in the field, psycho-educational and recreational activities with children, evaluation of knowledge by the participating children, and interviews with parents and teachers. The results show high effectiveness in fulfilling the tasks and achieving the primary objectives. The efficiency results are satisfactory but also reveal areas of opportunity that can be addressed with minor adjustments to the program. The results also show the importance of including culturally relevant strategies and activities, without which possible achievements are minimized. Another highlight is the importance of participatory action research (PAR) in preventive approaches to child sexual abuse, since by becoming aware of the importance of the subject people participate more actively; it also allows the design of culturally appropriate strategies and measures so that the proposal is not distant from the people. The discussion emphasizes the methodological implications of prevention programs: the convenience of using participatory action research, the importance of monitoring and mediation during implementation, the development of detection skills tools in creative ways using psycho-educational interactive techniques, and working with assessments issued by the participants themselves. It is also important to consider the holistic character this type of program should have, incorporating socially and culturally relevant characteristics according to the community's individuality and uniqueness, and considering the type of communication to be used and children's language skills, since there should be variations strongly linked to the specific cultural context.

Keywords: child sexual abuse, evaluation, PAR, prevention

Procedia PDF Downloads 295
244 Consumers' Attitude toward the Latest Trends in Decreasing Energy Consumption of Washing Machines

Authors: Farnaz Alborzi, Angelika Schmitz, Rainer Stamminger

Abstract:

Reducing water temperatures in the wash phase of a washing programme and increasing the overall cycle duration are the latest trends in decreasing the energy consumption of washing programmes. Since the implementation of the new energy efficiency classes in 2010, manufacturers seem to apply this washing strategy of lower temperatures combined with longer programme durations extensively to realise the energy savings needed to meet the requirements of the highest possible energy efficiency class. A semi-representative on-line survey in eleven European countries (Czech Republic, Finland, France, Germany, Hungary, Italy, Poland, Romania, Spain, Sweden and the United Kingdom) was conducted by Bonn University in 2015 to shed light on consumer opinion and behaviour regarding the effects of lower washing temperatures and longer cycle durations on consumers' acceptance of the programme. The risk of the long wash cycle is that consumers might not use the energy-efficient Standard programmes, perceive this option as inconvenient, and therefore switch to shorter but more energy-consuming programmes. Furthermore, washing at a lower temperature may lead to the problem of cross-contamination. The washing behaviour of over 5,000 households was studied in this survey to provide support and guidance for manufacturers and policy designers. Qualified households were chosen following predefined quotas: substantial involvement in laundry washing; a gender distribution of more than 50% female; the age groups 20-39, 40-59 and 60-74 years; and household sizes of 1, 2, 3, 4 and more than 4 people. Furthermore, Eurostat data for each country were used to calculate the population distribution in the respective age class and household size as quotas for the distribution of the consumer survey in each country. Before starting the analyses, the validity of each dataset was checked with the aid of control questions. After excluding outlier data, the panel diminished from 5,100 to 4,843 households. The primary outcome of the study is that European consumers are willing to save water and energy in laundry washing but are reluctant to use long programme cycles, since they do not believe that the long cycles can be energy-saving. However, the results of our survey do not confirm a relation between the frequency of using Standard cotton (Eco) or Energy-saving programmes and the duration of the programmes. This might be explained by the fact that the majority of washing programmes used by consumers do not take that long, or perhaps consumers just choose an additional time-reduction option when selecting those programmes; this finding might change if the Energy-saving programmes took longer. It may therefore be assumed that introducing the programme duration as a new measure on a revised energy label would strongly influence the consumer at the point of sale. Furthermore, the results of the survey confirm that consumers are more willing to use lower-temperature programmes in order to save energy than to accept longer programme cycles, and the majority of them accept deviations from the nominal temperature of the programme as long as the results are good.

Keywords: duration, energy-saving, standard programmes, washing temperature

Procedia PDF Downloads 221
243 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP) are crucial for optimal organ preservation. However, with the heart’s constant physiological changes during EVHP, such as coronary vascular resistance, manual control of these parameters is rendered imprecise and cumbersome for the operator. Additionally, low control precision and the long adjusting time may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable-Logic-Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects the data of PF and PP through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system. It influences PF through providing and maintaining the desired speed inputted through the HMI to the centrifugal pump with a maximum error of 20 rpm. The pressure control mode is a closed-loop system where the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in a random order. System’s PID algorithm performance in pressure control was assessed during 10 EVHP experiments using porcine hearts. Precision was examined through monitoring the steady-state pressure error throughout perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system demonstrated the significantly faster speed than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and greater precision to reach the desired RPM (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system has the median precision of ±1mmHg error and the median stabilizing times in changing 15 and 20mmHg of MAP are 15 and 19.5 seconds respectively. The novel PLC-based control system was 3 times faster with 60% less error than the EO for RPM-flow control. In pressure control mode, it demonstrates a high precision and fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for future development of novel heart preservation and assessment strategies during EVHP.
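
The closed-loop pressure mode can be pictured with the discrete PID form sketched below (a minimal Python sketch; the gains, limits and units are illustrative assumptions, not the values programmed into the PLC):

```python
class PIDController:
    """Discrete PID of the kind used in the pressure (MAP) control mode."""
    def __init__(self, kp, ki, kd, dt, rpm_min=0.0, rpm_max=3000.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.rpm_min, self.rpm_max = rpm_min, rpm_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_map_mmhg, measured_map_mmhg):
        """Return a pump speed command (rpm) from the MAP error."""
        error = target_map_mmhg - measured_map_mmhg
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        rpm = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(rpm, self.rpm_min), self.rpm_max)  # clamp to the pump's range

# Example call: target MAP comes from the HMI, measured MAP from the pressure transducer.
pid = PIDController(kp=40.0, ki=8.0, kd=1.0, dt=0.1)
rpm_command = pid.update(target_map_mmhg=60.0, measured_map_mmhg=48.0)
```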

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 175
242 In vitro Evaluation of Immunogenic Properties of Oral Application of Rabies Virus Surface Glycoprotein Antigen Conjugated to Beta-Glucan Nanoparticles in a Mouse Model

Authors: Narges Bahmanyar, Masoud Ghorbani

Abstract:

Rabies is caused by several species of the genus Lyssavirus in the family Rhabdoviridae. The disease is a deadly encephalitis transmitted from warm-blooded animals to humans, and domestic and wild carnivores play the most crucial role in its transmission. The prevalence of rabies in poor areas of developing countries constantly poses a global threat to public health. According to the World Health Organization, approximately 60,000 people die yearly from rabies; of these deaths, 60% are related to the Middle East. Although rabies encephalitis is incurable to date, awareness of the disease and the use of vaccines are the best ways to combat it. Although effective vaccines are available, vaccine production and management to combat rabies involve high costs. The increasing prevalence and the discovery of new strains of rabies virus create the need for vaccines that are safe, effective, and as inexpensive as possible. One of the approaches considered to achieve this quality and quantity is the manufacture of recombinant rabies vaccines. Currently, livestock rabies vaccines are only inactivated or live attenuated vaccines, and the inactivation process requires careful consideration. The rabies virus contains a negative-sense single-stranded RNA genome that encodes the five major structural genes (N, P, M, G, L) from 3' to 5'. Rabies virus glycoprotein G, the major antigen, can induce virus-neutralizing antibodies. The N antigen is another candidate for developing recombinant vaccines; however, because it lies within the RNP complex of the virus, the possibility of genetic diversity based on different geographical locations is very high. Glycoprotein G is structurally and antigenically more conserved than the other genes: conservation at the nucleotide sequence level is about 90% and at the amino acid level 96%. Recombinant subunit vaccines contain fragments of the protein or polysaccharide of the pathogen that have been carefully studied to determine which of these molecules elicits a stronger and more effective immune response. These vaccines minimize the risk of side effects by limiting the immune system's exposure to the pathogen. Such vaccines are relatively inexpensive, easy to produce, and more stable than vaccines containing whole viruses or bacteria. The problem with these vaccines is that the pathogenic subunits may elicit a weak immune response in the body or may be destroyed before they reach the immune cells, which nanoparticles used as adjuvants can help overcome. Among these, biodegradable nanoparticles with functionalized surfaces are good candidates as vaccine adjuvants. In this study, we intend to use beta-glucan nanoparticles as the adjuvant. The surface glycoprotein of the rabies virus (G) is responsible for recognizing and binding the virus to the target cell; this glycoprotein is the major protein in the structure of the virus and induces an antibody response in the host. In this study, we intend to use rabies virus surface glycoprotein conjugated with beta-glucan nanoparticles to produce vaccines.

Keywords: rabies, vaccines, beta glucan, nanoprticles, adjuvant, recombinant protein

Procedia PDF Downloads 17
241 Comparative Evaluation of High Pure Mn3O4 Preparation Technique between the Conventional Process from Electrolytic Manganese and a Sustainable Approach Directly from Low-Grade Rhodochrosite

Authors: Fang Lian, Zefang Chenli, Laijun Ma, Lei Mao

Abstract:

Up to now, the electrolytic process has been a popular way to prepare high-purity Mn and MnO2 (EMD). However, the conventional process for preparing high-purity manganese oxides such as Mn3O4 from electrolytic manganese metal is characterized by a long production cycle, high pollution discharge and high energy consumption, especially when starting from low-grade rhodochrosite, the main resource for exploitation and application in China. Moreover, Mn3O4 prepared from electrolytic manganese shows large particles, a single morphology beyond control, and weak chemical activity. On the other hand, the hydrometallurgical method combined with thermal decomposition, hydrothermal synthesis and sol-gel processes has been widely studied because of its high efficiency, low consumption and low cost. However, the key problem in the direct preparation of the manganese oxide series from low-grade rhodochrosite is the complete removal of multiple impurities such as iron, silicon, calcium and magnesium. It is urgent to develop a sustainable approach to high-purity manganese oxides characterized by a short process, high efficiency, environmental friendliness and economic benefit. In our work, the preparation technique for high-purity Mn3O4 directly from low-grade rhodochrosite ore (13.86%) was studied and intensively improved, including an effective leaching process and a short purifying process. Based on the common-ion effect, repeated leaching of rhodochrosite with sulfuric acid is proposed to improve the solubility of Mn2+ and inhibit the dissolution of the impurities Ca2+ and Mg2+. Moreover, the repeated leaching process makes full use of the sulfuric acid and lowers the cost of the raw material. With the aid of theoretical calculations, Ba(OH)2 was chosen to adjust the pH value of the manganese sulfate solution and BaF2 to remove Ca2+ and Mg2+ completely in the purifying process. Herein, the recovery ratio of manganese and the removal ratio of the impurities were evaluated via chemical titration and ICP analysis, respectively. A comparison between the conventional preparation technique from electrolytic manganese and the sustainable approach directly from low-grade rhodochrosite has also been made. The results demonstrate that the extraction ratio and the recovery ratio of manganese reached 94.3% and 92.7%, respectively. The heavy metal impurities have been decreased to less than 1 ppm, and the content of calcium, magnesium and sodium has been decreased to less than 20 ppm, which meets the standards of high-purity reagents for energy and electronic materials. Compared with the conventional technique from electrolytic manganese, the power consumption has been reduced to ≤2000 kWh/t(product) in our short-process approach. Moreover, the comprehensive recovery rate of manganese increases significantly, and the wastewater generated from our short-process approach contains a low content of ammonia/nitrogen, about 500 mg/t(product), and no toxic emissions. Our study contributes to the sustainable application of low-grade manganese ore. Acknowledgements: The authors are grateful to the National Science and Technology Support Program of China (No. 2015BAB01B02) for financial support of this work.

Keywords: leaching, high purity, low-grade rhodochrosite, manganese oxide, purifying process, recovery ratio

Procedia PDF Downloads 248
240 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market, and all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little data publicly available and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for various sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results when compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies thanks to their memory function, which traditional neural networks fail to capture. In this study, a simple LSTM, a stacked LSTM and a masked LSTM model are discussed with respect to varying input sequences (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time-series models (ARIMA), shallow neural networks and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored further within the asset management industry.
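
As a rough sketch of what a stacked LSTM on sliding price windows looks like, the example below uses Keras and a synthetic price series in place of the proprietary bond data; the layer sizes, window length and training settings are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=7):
    """Build (samples, window, 1) input sequences and next-day price targets."""
    x = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return x[..., None], y

# Synthetic random-walk stand-in for a cleaned corporate bond price series.
prices = np.cumsum(np.random.default_rng(1).normal(0, 0.1, 500)) + 100.0
x, y = make_windows(prices, window=7)

# Stacked LSTM: two recurrent layers followed by a linear output head.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(7, 1)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(x[-1:]).ravel())  # next-day price estimate for the last window
```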

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 136
239 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned through staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrous oxides (NOₓ). To limit the environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emissions. With the increase of computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation processes. Moreover, to optimize the burner operating conditions with regard to NOₓ emission, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD seems more adequate to provide guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. The RANS (Reynolds-Averaged Navier-Stokes) equations, combined with the Eddy Dissipation Concept to model the combustion and closed by the k-epsilon model, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation, and the results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
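
For reference, the k-epsilon closure mentioned above solves two transport equations for the turbulent kinetic energy k and its dissipation rate ε; the standard high-Reynolds-number form is recalled below (the exact variant and model constants used in Fluent and OpenFOAM may differ slightly).

```latex
\begin{aligned}
\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i}
 &= \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
 + P_k - \rho\varepsilon, \\
\frac{\partial(\rho \varepsilon)}{\partial t} + \frac{\partial(\rho \varepsilon u_i)}{\partial x_i}
 &= \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
 + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},
 \qquad \mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon}.
\end{aligned}
```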

Keywords: combustion, computational fluid dynamics, nitrous oxides emission, once-through-steam-generators

Procedia PDF Downloads 113
238 Laboratory Indices in Late Childhood Obesity: The Importance of DONMA Indices

Authors: Orkide Donma, Mustafa M. Donma, Muhammet Demirkol, Murat Aydin, Tuba Gokkus, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu

Abstract:

Obesity in childhood lays the ground for adulthood obesity. Morbid obesity in particular is an important problem for children because of associated diseases such as diabetes mellitus, cancer and cardiovascular diseases. In this study, body mass index (BMI), body fat ratios, and anthropometric measurements and ratios were evaluated together with different laboratory indices for the evaluation of obesity in morbidly obese (MO) children. Children with nutritional problems participated in the study. Written informed consent was obtained from the parents, and the study protocol was approved by the Ethics Committee. Sixty-two MO girls aged 129.5±35.8 months and 75 MO boys aged 120.1±26.6 months were included in the study. WHO BMI percentiles for age and sex were used to assess the children, with those above the 99th percentile classified as morbidly obese. Anthropometric measurements of the children were recorded after their physical examination, and bio-electrical impedance analysis was performed to measure fat distribution. Anthropometric ratios, body fat ratios, Index-I and Index-II as well as insulin sensitivity indices (ISIs) were calculated. Girls as well as boys were divided into binary groups according to a homeostasis model assessment-insulin resistance (HOMA-IR) index of <2.5 vs. >2.5, a fasting glucose to insulin ratio (FGIR) of <6 vs. >6, and a quantitative insulin sensitivity check index (QUICKI) of <0.33 vs. >0.33, the frequently used cut-off points. They were evaluated based upon their BMIs; arm, leg, trunk and whole-body fat percentages; body fat ratios such as fat mass index (FMI), trunk-to-appendicular fat ratio (TAFR) and whole-body fat ratio (WBFR); and anthropometric measures and ratios [waist-to-hip, head-to-neck, thigh-to-arm, thigh-to-ankle, height/2-to-waist, height/2-to-hip circumference (C)]. The SPSS/PASW 18 program was used for the statistical analyses, and p≤0.05 was accepted as the level of statistical significance. All of the fat percentages showed differences between values below and above the specified cut-off points in girls when evaluated with HOMA-IR and QUICKI. Differences were observed only in arm fat percentage for HOMA-IR and leg fat percentage for QUICKI in boys (p≤0.05). FGIR was unable to detect any differences in the fat percentages of boys. Head-to-neck C was the only anthropometric ratio recommended for use with all ISIs (p≤0.001 for both girls and boys with HOMA-IR; p≤0.001 for girls and p≤0.05 for boys with FGIR and QUICKI). The indices recommended for use in both genders were Index-I, Index-II, HOMA/BMI and log HOMA (p≤0.001). FMI was also a valuable index when evaluated with HOMA-IR and QUICKI (p≤0.001). The important point was the detection of strong significance for HOMA/BMI and log HOMA when they were also evaluated with the other indices, FGIR and QUICKI (p≤0.001). These parameters, along with Index-I, were unique at this level of significance for all children. In conclusion, well-accepted ratios or indices may not be valid for the evaluation of both genders; this study has emphasized their limitations for boys. This is particularly important for the selection of ratios and/or indices in clinical studies. Gender difference should be taken into consideration in the evaluation of the ratios or indices recommended for use, particularly within the scope of obesity studies.
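
The three ISIs used for the binary grouping are computed from fasting glucose and insulin with their standard formulas, which the abstract does not restate; a minimal sketch, assuming glucose in mg/dL and insulin in µU/mL (the DONMA Index-I and Index-II are the authors' own indices and are not reproduced here):

```python
import math

def insulin_sensitivity_indices(glucose_mg_dl, insulin_uU_ml):
    """Standard ISIs used for the binary grouping (glucose in mg/dL, insulin in µU/mL)."""
    homa_ir = glucose_mg_dl * insulin_uU_ml / 405.0
    fgir = glucose_mg_dl / insulin_uU_ml
    quicki = 1.0 / (math.log10(glucose_mg_dl) + math.log10(insulin_uU_ml))
    return homa_ir, fgir, quicki

homa_ir, fgir, quicki = insulin_sensitivity_indices(90.0, 15.0)
# Cut-offs from the abstract: HOMA-IR 2.5, FGIR 6, QUICKI 0.33
print(f"HOMA-IR={homa_ir:.2f} (>2.5: {homa_ir > 2.5}), "
      f"FGIR={fgir:.2f} (<6: {fgir < 6}), QUICKI={quicki:.3f} (<0.33: {quicki < 0.33})")
```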

Keywords: anthropometry, childhood obesity, gender, insulin sensitivity index

Procedia PDF Downloads 356
237 Implementation of a Multidisciplinary Weekly Safety Briefing in a Tertiary Paediatric Cardiothoracic Transplant Unit

Authors: Lauren Dhugga, Meena Parameswaran, David Blundell, Abbas Khushnood

Abstract:

Context: A multidisciplinary weekly safety briefing was implemented at the Paediatric Cardiothoracic Unit at the Freeman Hospital in Newcastle-upon-Tyne. It is a tertiary referral centre with a quaternary cardiac paediatric intensive care unit and provides complex care including heart and lung transplants, mechanical support and advanced heart failure assessment. Aim: The aim of this briefing is to provide a structured platform of communication, in an effort to improve efficiency, safety, and patient care. Problem: The paediatric cardiothoracic unit is made up of a vast multidisciplinary team including doctors, intensivists, anaesthetists, surgeons, specialist nurses, echocardiogram technicians, physiotherapists, psychologists, dentists, and dietitians. It provides care for children with congenital and acquired cardiac disease and is one of only two units in the UK to offer paediatric heart transplant. The complexity of cases means that there can be many teams involved in providing care to each patient, and frequent movement of children between ward, high dependency, and intensive care areas. Currently, there is no structured forum for communicating important information across the department, for example, staffing shortages, prescribing errors and significant events. Strategy: An initial survey questioning the need for better communication found that 90% of respondents agreed they could think of an incident that had occurred due to ineffective communication, and 85% felt that incident could have been avoided had there been a better form of communication. Lastly, 80% of respondents felt that a weekly 60-second safety briefing would be beneficial to improve communication within our multidisciplinary team. Based on those promising results, a weekly 60-second safety briefing was implemented, to be conducted on Monday mornings. The safety briefing covered four key areas (SAFE): staffing, awareness, fix and events. This was to highlight any staffing gaps, any incident reports to be learned from, any issues that required fixing and any events, including teaching, for the week ahead. The teams were encouraged to email suggestions or issues to be raised for the week or to approach in person with information to add. The safety briefing was implemented using change theory. Effect: The safety briefing has been trialled over 6 weeks and has received good buy-in from staff across specialties. The aim is to embed this safety briefing into a weekly meeting using the PDSA cycle. There will be a second survey in one month to assess the efficacy of the safety briefing and to continue to improve the delivery of information. The project will be presented at the next clinical governance briefing to attract wider feedback and input from across the trust. Lessons: The briefing displays promise as a tool to improve vigilance and communication in a busy multidisciplinary unit. We have learned about how to implement quality improvement and about the culture of our hospital, including how hierarchy influences change. We demonstrate how to implement change through a grassroots process, using a junior-led briefing to improve efficiency, safety, and communication in the workplace.

Keywords: briefing, communication, safety, team

Procedia PDF Downloads 142
236 Optimization of Territorial Spatial Functional Partitioning in Coal Resource-based Cities Based on Ecosystem Service Clusters - The Case of Gujiao City in Shanxi Province

Authors: Gu Sihao

Abstract:

The coordinated development of "ecology-production-life" in cities has received strong national attention, and the transformation and sustainable development of resource-based cities have become a topical research subject. Coal resource-based cities form an important part of China's resource-based cities and are numerous and widely distributed. However, due to the adjustment of the national energy structure and the gradual exhaustion of urban coal resources, their development vitality is steadily declining. Many studies identify the deterioration of the ecological environment, driven by an "emphasis on economy and neglect of ecology", as the main problem restricting the urban transformation and sustainable development of coal resource-based cities. Since the 18th National Congress of the Communist Party of China (CPC), the Central Government has been deepening territorial space planning and development: on the premise of optimizing the territorial space development pattern, it has completed the demarcation of ecological protection red lines and carried out ecological zoning and ecosystem evaluation, which have become an important basis and scientific guarantee for ecological modernization and the construction of an ecological civilization. Understanding a region's multiple ecosystem services is a precondition for ecosystem management; studying the relationships among these services through ecosystem service clusters makes it possible to identify their interactions and, on the basis of cluster characteristics, to delineate regional ecological function zones for better social-ecological system management. Based on this understanding, this study optimizes the spatial functional zoning of Gujiao, a coal resource-based city, in order to provide a new theoretical basis for its sustainable development. Starting from a detailed analysis of the characteristics and utilization of Gujiao's territorial space, SOFM neural networks are used to identify local ecosystem service clusters; ecological function zones are then delineated according to the scope and function of each cluster so as to balance and coordinate the strengths of different ecosystem services, a relationship between clusters and land use is established, and the functions of territorial space within each zone are adjusted. Next, taking the characteristics of a coal resource-based city and of national spatial functional zoning as the driving factors of land change, a cellular automata simulation program is used to simulate the city's future development trends under different restoration strategies. The study thereby provides theories and technical methods for the "three lines" demarcation in Gujiao's territorial space planning, optimizes territorial space functions, and puts forward targeted strategies for promoting regional ecosystem services, offering theoretical support for the improvement of human well-being and the sustainable development of resource-based cities.
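
The abstract names SOFM (self-organizing feature map) clustering but not a specific implementation; the sketch below uses the MiniSom Python package as one possible tool (an assumption, not the authors' stated software), clustering spatial units by standardized ecosystem service indicators so that each output node can be read as a candidate ecosystem service cluster. The indicator names and data are illustrative.

```python
import numpy as np
from minisom import MiniSom  # assumed implementation; the authors do not name their SOFM tool

rng = np.random.default_rng(0)

# Rows = spatial evaluation units (e.g., grid cells or townships),
# columns = standardized ecosystem service indicators (illustrative names):
# water yield, carbon storage, soil retention, habitat quality, food supply.
services = rng.normal(size=(500, 5))

# A small 2x3 output layer yields up to six candidate clusters.
som = MiniSom(2, 3, input_len=services.shape[1], sigma=1.0,
              learning_rate=0.5, random_seed=42)
som.random_weights_init(services)
som.train_random(services, num_iteration=5000)

# Assign each spatial unit to its best-matching node (its cluster).
cluster_of_unit = np.array([som.winner(row) for row in services])
nodes, counts = np.unique(cluster_of_unit, axis=0, return_counts=True)
for node, count in zip(nodes, counts):
    print(f"cluster node {tuple(node)}: {count} spatial units")
```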

Keywords: coal resource-based city, territorial spatial planning, ecosystem service cluster, GMOP model, GeoSOS-FLUS model, functional zoning optimization and upgrading

Procedia PDF Downloads 61
235 Redefining Doctors' Role in Terms of Medical Errors and Consumer Protection Act to Be in Line with Medical Ethics

Authors: Manushi Srivastava

Abstract:

Introduction: The doctor's role, and the doctor-patient relation with respect to patient care, is at the core of medical ethics. The rapid pace of medical advances, increasing consumer awareness of rights and the rising cost of effective health care demand a robust, transparent and patient-friendly medical care system. However, doctors' role performance still follows the activity-passivity model of the Doctor-Patient Relationship (DPR), in which doctors act as parents and instruct their patients without seeking their consent, an approach that will not serve the 21st century. The introduction of the Consumer Protection Act (CPA) into the medical profession thus poses a new challenge to the traditional doctor-patient relationship, as evidenced by the increasing number of medical litigation cases. The doctor plays a vital role in strengthening this system of medical services, and this role should be reviewed in the present context. Objective: To understand the opinion of consultants regarding medical negligence and the effect of the Consumer Protection Act on current practices of patient care. Method: This is a cross-sectional study applying both quantitative and qualitative methods. A total of 69 consultants were selected from multi-specialty hospitals of the densely populated city of Varanasi, which caters to a population of about 1.8 million. Two-stage sampling was used to select respondents. At the first stage, the major wards most susceptible to medical negligence (Medicine, Surgery, Ophthalmology, Gynaecology, Orthopaedics, and Paediatrics) were selected; at the second stage, consultants were selected from the respective wards. In-depth interviews were conducted with the help of a semi-structured schedule, and two case studies of medical negligence were also carried out as part of the qualitative study. Analysis: Data were analyzed with SPSS software (21.0 trial version). A semi-structured research tool was used to elicit consultants' opinions about the pattern of medical negligence cases, litigation and claims made by the patient community, and the inclusion of government medical services under the CPA. Descriptive statistics were produced, and non-parametric tests were used to examine associations between variables. Verbatim analysis was used for the case studies. Findings and Conclusion: A majority (92.8%) of consultants perceived changes in the behaviour of the patient community after implementation of the CPA, as it had increased awareness of their rights. Less than half of the consultants opined that medical negligence is an unintentional act of doctors and generally occurs due to communication gaps and behavioural problems between doctors and patients. Experienced consultants (>10 years) pointed out that unethical practice by doctors and patients' mal-intention to harass doctors were additional reasons for medical negligence. The in-depth interviews revealed that the patient community now expects more transparency and hence demands a cafeteria approach in the diagnosis and management of cases. On the basis of these results, we propose an 'Agreement Model' of the DPR to re-ensure ethical practice in the medical profession.

Keywords: doctors, communication, consumer protection act (CPA), medical error

Procedia PDF Downloads 159
234 Solutions for Food-Safe 3D Printing

Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov

Abstract:

Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth and replaced the use of conventional technology from prototyping to producing end-user parts and products. 3D printing involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can have direct contact with food without any toxic effects, even after cleaning, storing, and reusing the object. This work analyzes the processing timeline of the filament (the material for 3D printing) from unboxing to extrusion through the nozzle. It is an important task to analyze the growth of bacteria on the 3D-printed surface and in the gaps between the layers. By default, a 3D-printed object is not food safe after longer usage and direct contact with food (even when food-safe filaments are used), but there are solutions to this problem. The aim of this work was to evaluate 3D-printed objects from different perspectives of food safety. The first was testing antimicrobial 3D printing filaments from a food safety aspect, since 3D-printed objects in the food industry may have direct contact with food; the main purpose here is to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was investigated, too, to see its effect on mechanical strength, thermal resistance, surface smoothness and food safety (cleanability). Another aim of this study was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials, to see whether they can be cleaned with boiling or a similar high-temperature treatment. This work proved that all three methods can improve the food safety of 3D-printed objects, but the size of the effect varies. The best result was obtained with the epoxy resin coating, after which the object was as cleanable as any other injection-molded plastic object with a smooth surface. Very good results were also obtained by boiling the objects, and it is encouraging that more and more special filaments now carry a food-safe certificate and can withstand boiling temperatures. Using antibacterial filaments reduced bacterial colonies to one fifth, and the biggest advantage of this method is that it does not require any post-processing: the object is ready straight out of the 3D printer. Acknowledgements: The research was supported by the Hungarian and Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the Hungarian University of Agriculture and Life Sciences' Doctoral School of Food Science for the support in this study.

Keywords: food safety, 3D printing, filaments, microbial, temperature

Procedia PDF Downloads 142
233 Post Harvest Fungi Diversity and Level of Aflatoxin Contamination in Stored Maize: Cases of Kitui, Nakuru and Trans-Nzoia Counties in Kenya

Authors: Gachara Grace, Kebira Anthony, Harvey Jagger, Wainaina James

Abstract:

Aflatoxin contamination of maize in Africa poses a major threat to food security and the health of many African people. In Kenya, aflatoxin contamination of maize is high due to environmental, agricultural and socio-economic factors. Many studies have been conducted to understand the scope of the problem, especially at the pre-harvest level. This research was carried out to gather scientific information on the fungal population, diversity and aflatoxin level during the post-harvest period. The study was conducted in three geographical locations: Kitui, Kitale and Nakuru. Samples were collected from farmers' storage structures and transported to the Biosciences eastern and central Africa (BecA) hub laboratories at the International Livestock Research Institute (ILRI). Mycoflora was recovered using the direct plating method. A total of five fungal genera (Aspergillus, Penicillium, Fusarium, Rhizopus and Byssochlamys spp.) were isolated from the stored maize samples. The most common fungal species isolated from the three study sites were A. flavus at 82.03%, followed by A. niger and F. solani at 49% and 26%, respectively. The aflatoxin-producing fungus A. flavus was recovered in 82.03% of the samples. Aflatoxin levels were analysed both in the maize samples and in vitro. Most of the A. flavus isolates recorded a high level of aflatoxin when analysed for the presence of aflatoxin B1 by ELISA. In Kitui, all samples (100%) had aflatoxin levels above 10 ppb, with a total aflatoxin mean of 219.2 ppb. In Kitale, only 3 samples (n=39) had aflatoxin levels below 10 ppb, while in Nakuru the total aflatoxin mean level was 239.7 ppb. When individual samples were analysed using the Vicam fluorometer method, aflatoxin analysis revealed that most of the samples (58.4%) had been contaminated. The means were significantly different (p=0.00<0.05) in all three locations. Genetic relationships of the A. flavus isolates were determined using 13 Simple Sequence Repeat (SSR) markers. The results were used to generate a phylogenetic tree using the DARwin5 software program. A total of 5 distinct clusters were revealed among the genotypes, and the isolates appeared to cluster separately according to geographical location. Principal Coordinates Analysis (PCoA) of the genetic distances among the 91 A. flavus isolates explained over 50.3% of the total variation when two coordinates were used to cluster the isolates. Analysis of Molecular Variance (AMOVA) showed a high variation of 87% within populations and 13% among populations. This research has shown that A. flavus is the main fungal species infecting maize grains in Kenya. The influence of aflatoxins on human populations in Kenya demonstrates a clear need for tools to manage contamination of locally produced maize. Food basket surveys for aflatoxin contamination should be conducted on a regular basis. This would assist in obtaining reliable data on aflatoxin incidence in different food crops and would go a long way in defining control strategies for this menace.
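
For readers unfamiliar with Principal Coordinates Analysis (PCoA), the sketch below shows the classical computation on a distance matrix: double-centre the squared distances, take the eigendecomposition, and report the share of variation captured by the first two coordinates, the quantity the study reports as 50.3%. The distance matrix here is randomly generated and purely illustrative; the study derived its distances from 13 SSR markers.

```python
import numpy as np

def pcoa(dist: np.ndarray, n_axes: int = 2):
    """Classical Principal Coordinates Analysis (metric MDS) on a distance matrix."""
    n = dist.shape[0]
    # Double-centre the matrix of squared distances: B = -0.5 * J D^2 J
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues in descending order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    positive = eigvals > 1e-10
    coords = eigvecs[:, positive] * np.sqrt(eigvals[positive])
    explained = eigvals[positive] / eigvals[positive].sum()
    return coords[:, :n_axes], explained[:n_axes]

# Illustrative symmetric distance matrix for 91 isolates (random, not study data).
rng = np.random.default_rng(1)
points = rng.normal(size=(91, 13))
dist = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))

coords, explained = pcoa(dist)
print(f"first two coordinates explain {explained.sum():.1%} of the variation")
```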

Keywords: aflatoxin, Aspergillus flavus, genotyping, Kenya

Procedia PDF Downloads 277
232 Fake News Domination and Threats on Democratic Systems

Authors: Laura Irimies, Cosmin Irimies

Abstract:

The public space all over the world is currently confronted with an aggressive assault of fake news, which has lately impacted public agenda setting, collective decisions and social attitudes. Top leaders constantly dismiss mainstream news as “fake news”, and public opinion grows more confused. Fake news is generally defined as false, often sensational, information disseminated under the guise of news reporting; it was declared word of the year 2017 by Collins Dictionary and has been one of the most debated socio-political topics of recent years. Websites which, deliberately or not, publish misleading information are often shared on social media, where sharing substantially increases their reach and influence. According to international reports, exposure to fake news is an undeniable reality all over the world: exposure to completely invented information reaches 31 percent in the US, is even higher in Eastern European countries such as Hungary (42%) and Romania (38%) and in Mediterranean countries such as Greece (44%) and Turkey (49%), and is lower in Northern and Western European countries such as Germany (9%), Denmark (9%) and Holland (10%). While the study of fake news (its mechanisms and effects) is still in its infancy, it has become truly relevant as the phenomenon seems to have a growing impact on democratic systems. Studies conducted by the European Commission show that 83% of a total of 26,576 interviewees consider the existence of news that misrepresents reality a threat to democracy. Studies recently conducted at Arizona State University show that people with higher education can more easily spot fake headlines, but over 30 percent of them can still be trapped by fake information. To mention only some of the most recent situations in Romania, fake news issues and suspicions of hidden agendas related to the massive and extremely violent public demonstrations held on August 10th, 2018, with strong participation of the Romanian diaspora, were widely covered by the international media and generated serious debates within the European Commission. Considering the above framework, the study raises four main research questions: 1. Is fake news a problem or just a natural consequence of mainstream media decline and the abundance of sources of information? 2. What are the implications for democracy? 3. Can fake news be controlled without restricting fundamental human rights? 4. How could the public be properly educated to detect fake news? The research uses mostly qualitative but also quantitative methods: content analysis of studies, websites and media content, official reports and interviews. The study demonstrates the real threat that fake news represents and the need for proper media literacy education, and draws basic guidelines for developing a new and essential skill: detecting fake news in a society overwhelmed by sources that constantly circulate massive amounts of information, increasing the risk of misinformation and leading to inadequate public decisions that could affect democratic stability.

Keywords: agenda setting, democracy, fake news, journalism, media literacy

Procedia PDF Downloads 130
231 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research

Authors: Edvard P. G. Bruun

Abstract:

One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical in making inferences about structural mechanics and deriving the mathematical models that are the basis for finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is by using a system of surface-mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles a Mohr’s circle of strain can be derived for the whole region under investigation. This paper presents a method that improves accuracy and removes experimental bias in the calculation of the Mohr’s circle by using four rather than three independent strain measurements. Obtaining high-quality strain data is essential, since the angular deviation (shear strain) and the angle of principal strain in the region are important properties in characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT), developed at the University of Toronto, is a rotating crack model that requires knowing the direction of the principal stress and strain, and then calculates the average secant stiffness in this direction. But since LVDTs can only measure average strains across a plane (i.e., between discrete points), the localized cracking and spalling that typically occur in reinforced concrete can lead to unrealistic results. To build in redundancy and improve the quality of the data gathered, the typical experimental setup for a large-scale shell specimen has four independent directions (X, Y, H, and V) that are instrumented. The question now becomes, which three should be used? The most common approach is to simply discard one of the measurements. The problem is that this can produce drastically different answers, depending on the three strain values that are chosen. To overcome this experimental bias, and to avoid simply discarding valuable data, a more rigorous approach would be to somehow make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a Mohr’s circle of 'best-fit', which optimizes the circle by using all four independent strain values. The four-strain optimized Mohr’s circle approach has been utilized to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. This paper presents the derivation of the method and shows its application in the context of two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data that is gathered; ensuring that a rigorous and unbiased approach exists for calculating the Mohr’s circle of strain during an experiment is of utmost importance to the structural research community.
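
The paper derives its own optimized circle; as an illustration of the general idea rather than the authors' exact formulation, the sketch below fits the in-plane strain state (εx, εy, γxy) to four normal-strain readings at known gauge angles by least squares and then reports the centre, radius and principal direction of the corresponding Mohr's circle of strain. The gauge angles and readings are hypothetical.

```python
import numpy as np

def fit_strain_state(angles_deg, strains):
    """Least-squares fit of (eps_x, eps_y, gamma_xy) to normal strains measured
    at the given gauge angles, using the plane strain transformation
    eps(theta) = eps_x*cos^2 + eps_y*sin^2 + gamma_xy*sin*cos."""
    theta = np.radians(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.cos(theta) ** 2,
                         np.sin(theta) ** 2,
                         np.sin(theta) * np.cos(theta)])
    (eps_x, eps_y, gamma_xy), *_ = np.linalg.lstsq(A, np.asarray(strains, float), rcond=None)
    return eps_x, eps_y, gamma_xy

def mohr_circle(eps_x, eps_y, gamma_xy):
    """Centre, radius and principal strain angle of the Mohr's circle of strain."""
    centre = 0.5 * (eps_x + eps_y)
    radius = np.hypot(0.5 * (eps_x - eps_y), 0.5 * gamma_xy)
    theta_p = 0.5 * np.degrees(np.arctan2(gamma_xy, eps_x - eps_y))
    return centre, radius, theta_p

# Hypothetical readings (in millistrain) from four gauge directions at 0, 45, 90, 135 degrees;
# the actual X, Y, H, V angles depend on the test setup.
angles = [0.0, 45.0, 90.0, 135.0]
readings = [1.20, 0.85, 0.30, 0.55]

state = fit_strain_state(angles, readings)
centre, radius, theta_p = mohr_circle(*state)
print(f"eps_x={state[0]:.3f}, eps_y={state[1]:.3f}, gamma_xy={state[2]:.3f}")
print(f"centre={centre:.3f}, radius={radius:.3f}, principal angle={theta_p:.1f} deg")
```

With only three readings the system is exactly determined, which is why the choice of which reading to discard can change the answer; a least-squares fit over all four readings removes that source of bias.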

Keywords: reinforced concrete, shell tests, Mohr’s circle, experimental research

Procedia PDF Downloads 235
230 The Chinese Inland-Coastal Inequality: The Role of Human Capital and the Crisis Watershed

Authors: Iacopo Odoardi, Emanuele Felice, Dario D'Ingiullo

Abstract:

We investigate the role of human capital in the Chinese inland-coastal inequality and how the consequences of the 2007-2008 crisis may induce China to refocus its development path on human capital. We compare panel data analyses for two periods for the richer/coastal and the relatively poor/inland provinces. Considering the rapid evolution of the Chinese economy and the changes forced by the international crisis, we wonder if these events can lead to rethinking local development paths, fostering greater attention on the diffusion of higher education. We expect that the consequences on human capital may, in turn, have consequences on the inland/coastal dualism. The focus on human capital is due to the fact that the growing differences between inland and coastal areas can be explained by the different local endowments. In this respect, human capital may play a major role and should be thoroughly investigated. To assess the extent to which human capital has an effect on economic growth, we consider a fixed-effects model where differences among the provinces are considered parametric shifts in the regression equation. Data refer to the 31 Chinese provinces for the periods 1998-2008 and 2009-2017. Our dependent variable is the annual variation of the provincial gross domestic product (GDP) at the prices of the previous year. Among our regressors, we include two proxies of advanced human capital and other known factors affecting economic development. We are aware of the problem of conceptual endogeneity of variables related to human capital with respect to GDP; we adopt an instrumental variable approach (two-stage least squares) to avoid inconsistent estimates. Our results suggest that the economic strengths that influenced the Chinese take-off and the dualism are confirmed in the first period. These results gain relevance in comparison with the second period. An evolution in local economic endowments is taking place: first, although human capital can have a positive effect on all provinces after the crisis, not all types of advanced education have a direct economic effect; second, the development path of the inland area is changing, with an evolution towards more productive sectors which can favor higher returns to human capital. New strengths (e.g., advanced education, transport infrastructures) could be useful to foster development paths of inland-coastal desirable convergence, especially by favoring the poorer provinces. Our findings suggest that in all provinces, human capital can be useful to promote convergence in growth paths, even if investments in tertiary education seem to have a negative role, most likely due to the inability to exploit the skills of highly educated workers. Furthermore, we observe important changes in the economic characteristics of the less developed internal provinces. These findings suggest an evolution towards more productive economic sectors, a greater ability to exploit both investments in fixed capital and the available infrastructures. All these aspects, if connected with the improvement in the returns to human capital (at least at the secondary level), lead us to assume a better reaction (i.e., resilience) of the less developed provinces to the crisis effects.
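
The abstract describes a fixed-effects panel model with a two-stage least squares (instrumental variable) correction for the endogeneity of human capital. The sketch below shows how such a specification might be set up with the linearmodels package; the package choice, the file name and all column names (including the instrument) are assumptions for illustration, not details taken from the study.

```python
import pandas as pd
from linearmodels.panel import PanelOLS
from linearmodels.iv import IV2SLS

# df is assumed to be a provincial panel with hypothetical columns:
# province, year, gdp_growth, tertiary_hc, secondary_hc, fixed_capital,
# infrastructure, and an instrument for human capital, here 'hc_instrument'.
df = pd.read_csv("china_provincial_panel.csv")          # placeholder file name
panel = df.set_index(["province", "year"])

# Fixed-effects specification: province differences enter as parametric shifts.
fe = PanelOLS.from_formula(
    "gdp_growth ~ 1 + tertiary_hc + secondary_hc + fixed_capital + infrastructure"
    " + EntityEffects",
    data=panel,
).fit(cov_type="clustered", cluster_entity=True)
print(fe)

# Two-stage least squares: treat tertiary human capital as endogenous and
# instrument it, mirroring the instrumental-variable step described above.
iv = IV2SLS.from_formula(
    "gdp_growth ~ 1 + secondary_hc + fixed_capital + infrastructure"
    " + [tertiary_hc ~ hc_instrument]",
    data=df,
).fit()
print(iv)
```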

Keywords: human capital, inland-coastal inequality, Great Recession, China

Procedia PDF Downloads 205
229 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating technology-supported teaching-learning processes that pay attention to model creation rather than only to executing mathematical calculations. To develop students' knowledge of basic probability distributions and decision making, this work reports a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed following the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for Civil Engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). From the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population from the sample data; third, viewing the sample as an isolated event rather than as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after an intervention, allows the interventions and the MEA to be modified so that students may themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example in plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
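
The abstract does not state the activity's actual figures, and the students worked in RStudio; as a language-neutral illustration of the kind of binomial decision problem the MEA targets, the Python sketch below uses hypothetical values: accept a batch of specimens if at most k of n sampled items fail a quality check, assuming each fails independently with probability p.

```python
from scipy.stats import binom

# Hypothetical decision problem (illustrative values, not the MEA's actual data):
# a batch is accepted if at most k of n sampled specimens fail a quality check,
# assuming each specimen fails independently with probability p.
n, p, k = 30, 0.10, 4

prob_accept = binom.cdf(k, n, p)            # P(X <= k), probability of accepting the batch
prob_reject = 1 - prob_accept
print(f"P(accept) = {prob_accept:.3f}, P(reject) = {prob_reject:.3f}")

# Exploring how the decision changes with sample size, as the students did
# when plotting distributions for different n:
for n_i in (10, 30, 60, 120):
    k_i = int(0.15 * n_i)                   # keep the acceptance threshold proportional
    print(f"n={n_i:3d}, threshold={k_i:2d}, P(accept)={binom.cdf(k_i, n_i, p):.3f}")
```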

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 118
228 A Strategic Approach in Utilising Limited Resources to Achieve High Organisational Performance

Authors: Collen Tebogo Masilo, Erik Schmikl

Abstract:

The demand for the DataMiner product by customers has presented a great challenge for the vendor in Skyline Communications in deploying its limited resources in the form of human resources, financial resources, and office space, to achieve high organisational performance in all its international operations. The rapid growth of the organisation has been unable to efficiently support its existing customers across the globe, and provide services to new customers, due to the limited number of approximately one hundred employees in its employ. The combined descriptive and explanatory case study research methods were selected as research design, making use of a survey questionnaire which was distributed to a sample of 100 respondents. A sample return of 89 respondents was achieved. The sampling method employed was non-probability sampling, using the convenient sampling method. Frequency analysis and correlation between the subscales (the four themes) were used for statistical analysis to interpret the data. The investigation was conducted into mechanisms that can be deployed to balance the high demand for products and the limited production capacity of the company’s Belgian operations across four aspects: demand management strategies, capacity management strategies, communication methods that can be used to align a sales management department, and reward systems in use to improve employee performance. The conclusions derived from the theme ‘demand management strategies’ are that the company is fully aware of the future market demand for its products. However, there seems to be no evidence that there is proper demand forecasting conducted within the organisation. The conclusions derived from the theme 'capacity management strategies' are that employees always have a lot of work to complete during office hours, and, also, employees seem to need help from colleagues with urgent tasks. This indicates that employees often work on unplanned tasks and multiple projects. Conclusions derived from the theme 'communication methods used to align sales management department with operations' are that communication is not good throughout the organisation. This means that information often stays with management, and does not reach non-management employees. This also means that there is a lack of smooth synergy as expected and a lack of good communication between the sales department and the projects office. This has a direct impact on the delivery of projects to customers by the operations department. The conclusions derived from the theme ‘employee reward systems’ are that employees are motivated, and feel that they add value in their current functions. There are currently no measures in place to identify unhappy employees, and there are also no proper reward systems in place which are linked to a performance management system. The research has made a contribution to the body of research by exploring the impact of the four sub-variables and their interaction on the challenges of organisational productivity, in particular where an organisation experiences a capacity problem during its growth stage during tough economic conditions. Recommendations were made which, if implemented by management, could further enhance the organisation’s sustained competitive operations.

Keywords: high demand for products, high organisational performance, limited production capacity, limited resources

Procedia PDF Downloads 143
227 Floating Building Potential for Adaptation to Rising Sea Levels: Development of a Performance Based Building Design Framework

Authors: Livia Calcagni

Abstract:

Most of the largest cities in the world are located in areas that are vulnerable to coastal erosion and flooding, both linked to climate change and rising sea levels (RSL). Nevertheless, more and more people are moving to these vulnerable areas as cities keep growing. Architects, engineers and policy makers are called to rethink the way we live and to provide timely and adequate responses, not only by investigating measures to improve the urban fabric, but also by developing strategies capable of planning change and exploring unusual and resilient frontiers of living, such as floating architecture. Since the beginning of the 21st century we have seen a dynamic growth of water-based architecture. At the same time, the shortage of land available for urban development has also led to reclaiming the seabed or building floating structures. In light of these considerations, the time is ripe to consider floating architecture not only as a full-fledged building typology but especially as a full-fledged adaptation solution for RSL. Currently, there is no global international legal framework for urban development on water, and there is no structured performance-based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends to identify the technological, morphological, functional, economic and managerial requirements that must be considered in the development of the PBBD framework, conceived as a meta-design tool. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are definitely more similar to those of the urban environment than to those of the offshore industry. Therefore, the identification and categorization of parameters take urban-architectural guidelines and regulations as the starting point, drawing the missing aspects, such as hydrodynamics, from offshore and shipping regulatory frameworks. This study is carried out through an evidence-based assessment of performance guidelines and regulatory systems in force in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, this paper highlights how inhabiting water is not only a viable response to the problem of RSL, and thus a resilient frontier for urban development, but also a response to energy insecurity, clean water and food shortages, environmental concerns and urbanization, in line with Blue Economy principles and the 2030 Agenda. Moreover, the discipline of architecture is presented as a fertile field for investigating solutions to cope with climate change and its effects on life safety and quality. Future research involves the development of a decision support system as an information tool to guide the user through the decision-making process, emphasizing the logical interaction between the different potential choices, based on the PBBD.

Keywords: adaptation measures, floating architecture, performance based building design, resilient architecture, rising sea levels

Procedia PDF Downloads 86
226 Positive Incentives to Reduce Private Car Use: A Theory-Based Critical Analysis

Authors: Rafael Alexandre Dos Reis

Abstract:

Research has shown a substantial increase in the participation of Conventionally Fuelled Vehicles (CFVs) in the urban transport modal split. The reasons for this unsustainable reality are multiple, from economic interventions to individual behaviour. The development and delivery of positive incentives for the adoption of more environmentally friendly modes of transport is an emerging strategy to help in tackling the problem of excessive use of conventionally fuelled vehicles. The efficiency of this approach, like that of other information-based schemes, can benefit from knowledge of its potential impacts on the theoretical constructs of multiple behaviour change theories. The goal of this research is to critically analyse theories of behaviour that are relevant to transport research and the impacts of positive incentives on the theoretical determinants of behaviour, strengthening the current body of evidence about the benefits of this approach. The main method of investigation is a literature review on two main topics: the current theories of behaviour that have empirical support in transport research, and past or ongoing positive incentive programs that had an impact on car use reduction. The reviewed programs of positive incentives were the following: TravelSmart®, Spitsmijden®, Incentives for Singapore Commuters® (INSINC), COMMUTEGREENER®, MOVESMARTER®, STREETLIFE®, SUPERHUB®, SUNSET® and the EMPOWER® project. The theories analysed were the Theory of Planned Behaviour (TPB), the Norm Activation Theory (NAM), Social Learning Theory (SLT), the Theory of Interpersonal Behaviour (TIB), the Goal-Setting Theory (GST) and the Value-Belief-Norm Theory (VBN). After reviewing the theoretical constructs of each theory and their influence on car use, it can be concluded that positive incentive schemes impact behaviour change in the following ways: -Changing individuals' attitudes through informational incentives; -Increasing feelings of moral obligation to reduce the use of CFVs; -Increasing the perceived social pressure to engage in more sustainable mobility behaviours, for example through comparison mechanisms in social media; -Increasing the perceived control of behaviour through informational and training incentives; -Increasing personal norms with reinforcing information; -Providing tools for self-monitoring and self-evaluation; -Providing real experiences in alternative modes to the car; -Making the observation of others' car use reduction possible; -Informing about the consequences of behaviour and emphasizing the individual's responsibility towards society and the environment; -Increasing the perception of the consequences of car use for an individual's valued objects; -Increasing the perceived ability to reduce threats to the environment; -Helping to establish goals to reduce car use; -Giving personalized feedback on the goal; -Increasing feelings of commitment to the goal; -Reducing the perceived complexity of using alternatives to the car. It is notable that the emerging technique of delivering positive incentives is systematically connected to causal determinants of travel behaviour. The preliminary results of the reviewed programs show how positive incentives might strengthen these determinants and help in the process of behaviour change.

Keywords: positive incentives, private car use reduction, sustainable behaviour, voluntary travel behaviour change

Procedia PDF Downloads 339
225 Targeting Apoptosis by Novel Adamantane Analogs as an Emerging Therapy for the Treatment of Hepatocellular Carcinoma Through EGFR, Bcl-2/BAX Cascade

Authors: Hanan M. Hassan, Laila Abouzeid, Lamya H. Al-Wahaibi, George S. G. Shehatou, Ali A. El-Emam

Abstract:

Cancer is a major public health problem and the second leading cause of death worldwide. In 2020, cancer diagnosis and treatment were negatively affected by the coronavirus disease 2019 (COVID-19) pandemic. During quarantine, because of limited access to healthcare and the need to avoid exposure to COVID-19 as a contagious disease, cancer patients suffered deferments in follow-up and treatment regimens, leading to substantial worsening of disease, death, and increased healthcare costs. Thus, this study was designed to investigate, experimentally and theoretically, the molecular mechanisms by which adamantane derivatives attenuate hepatocellular carcinoma. There is a close association between increased resistance to anticancer drugs and defective apoptosis, which is considered a causative factor in oncogenesis. Cancer cells use different molecular pathways to inhibit apoptosis; BAX and Bcl-2 proteins have essential roles in the progression or inhibition of the intrinsic apoptotic pathway triggered by mitochondrial dysfunction, and therefore their balance ratio can determine the cellular apoptotic fate. In this study, the in vitro cytotoxic effects of seven synthetic adamantyl isothiourea derivatives were evaluated against five human tumor cell lines by MTT assay. Compounds 5 and 6 showed the best results, mostly against hepatocellular carcinoma (HCC). Hence, in vivo studies were performed in male Sprague-Dawley (SD) rats in which experimental hepatocellular carcinoma was induced with thioacetamide (TAA) (200 mg/kg, i.p., twice weekly) for 16 weeks. The most promising compounds, 5 and 6, were administered to the liver cancer rats at a dose of 10 mg/kg/day for an additional two weeks, and the effects were compared with those of the anticancer drug doxorubicin (DR). Hepatocellular carcinoma was evidenced by a dramatic increase in liver indices, oxidative stress markers and immunohistochemical findings, accompanied by a plethora of inflammatory mediators and alterations in the apoptotic cascade. Our results showed that treatment with adamantane derivatives 5 and 6 significantly suppressed fibrosis, inflammation and other histopathological insults, resulting in diminished hepatocyte tumorigenesis. Moreover, administration of the tested compounds resulted in amelioration of EGFR protein expression, upregulation of BAX and reduction of Bcl-2 levels, demonstrating their role as apoptosis inducers. In addition, the docking simulations performed for adamantane showed good fit and binding to the EGFR protein through hydrogen bond formation with conserved amino acids, providing further evidence for its hepatoprotective effect. In most analyses, the effects of compound 6 were more comparable to DR than those of compound 5. Our findings suggest that adamantane derivatives 5 and 6 have cytotoxic activity against HCC in vitro and in vivo through more than one mechanism, possibly by inhibiting the TLR4-MyD88-NF-κB pathway and targeting EGFR signaling.

Keywords: adamantane, EGFR, HCC, apoptosis

Procedia PDF Downloads 146